- Tags:: #📚Books, #✒️SummarizedBooks, [[Demand forecasting]]
- Author:: [[Michael Gilliland]]
- Liked:: #3/5
- Link:: [The Business Forecasting Deal | Wiley Online Books](https://onlinelibrary.wiley.com/doi/book/10.1002/9781119199885)
- Source date:: [[2012-01-02]]
- Read date:: [[2021-11-01]]
- Cover:: ![[cover_business_forecasting_deal.png|100]]

I read this book while tackling [[Demand forecasting]] at [[Mercadona Tech]]. The cover shows a magician! That already tells you A LOT. That, and the opening quote:

>Many years ago, when I was young, I got promoted into a job that involved sales forecasting. When I told a friend of mine about it, his reply was, “Gee, that’s too bad. What did you do wrong?”

^3ed311

It is quite striking that the book has almost zero content on actual forecasting techniques, and yet that makes a lot of sense: techniques are not as important as the issues described here (a good complement is [[📖 Data Science for Supply Chain Forecasts, 2nd edition]], a book focused on techniques, and the one that led me to this one).

There is a marvelous paragraph in the book that cleanly summarizes the author's advice:

>Just because forecasting is giving you trouble doesn’t mean you can solve it by piling on the process, adding more steps and more data and more people. In fact, just the reverse will often work better, and you should seek *economy of process*. **If there is one message to derive from this book, it is that the focus should be on process efficiency and not just forecast accuracy. Accuracy is largely determined by the nature of the pattern we are trying to forecast, and sometimes the surest way to get better forecasts is to just make the demand forecastable** (p. 69).

The book could have been shorter, but it has been interesting nonetheless. I have a couple more books by the same author that are more up to date and that, in reality, are just curated collections of papers on the topic.
But hey, who can read everything they want to?

* [Business Forecasting: Practical Problems and Solutions | Wiley](https://www.wiley.com/en-sg/Business+Forecasting:+Practical+Problems+and+Solutions-p-9781119224563) (2015)
* [Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning | Wiley](https://www.wiley.com/en-sg/Business+Forecasting:+The+Emerging+Role+of+Artificial+Intelligence+and+Machine+Learning-p-9781119782476) (2021)

## Prologue

Starts really strong:

>**Forecasting is a huge waste of management time** (…) The amount of time, money, and human effort spent on forecasting is not commensurate with the amount of benefit achieved (the improvement in accuracy).

# 1. Fundamental Issues in Business Forecasting

### The Problem of Induction

There is no real proof that the future will always behave like the past:

>[[Hume]] observed that when we eat a piece of bread, it nourishes us. So we extrapolate our finite particular experiences to a general belief that bread nourishes (…) What justification is there for this belief that bread nourishes? (p. 5).

### The Realities of Business Forecasting

We are limited by the inherent randomness of the real process:

>In any complex business or social system (including things like the buying behavior of customers), there remains an element of randomness. Even though we know the underlying structure and model the behavior correctly, our **forecast accuracy will still be limited by the amount of randomness, and no further improvement in accuracy will be possible** (p. 7).

Imagine we want to predict the result of tossing 10, 100, and 1,000 fair coins per day. We already know the underlying mechanism: over a large number of trials, on average we will get 50% heads. However, the relative variation of the three experiments will not be the same, and that will impose a limit on the achievable accuracy:

![[Pasted image 20220102102329.png]]

### What is Demand?
>When we refer to demand, we usually mean **unconstrained or true demand** because we take no consideration of our ability to fulfill it. (Note: I will treat demand, unconstrained demand, and true demand as synonyms.) We use **constrained demand** to describe how much of true demand can be fulfilled (after incorporating any limitations on our ability to provide the product or service demanded). Thus, **constrained demand ≤ demand** (p. 10).

You don't get to see it:

>few organizations service their customers perfectly. As such, orders are not a perfect reflection of true demand. This is because when the order fill rate is less than 100%, orders are subject to all kinds of gamesmanship (p. 11).

For e-commerce, we have some hints. E.g., we can infer that a user wanted a product (because she searched for it) or that she didn't end up shopping because no delivery slots were available (because she drops out of the conversion funnel when shown the availability). But this is still far from a perfect observation: it is still something we need to model.

The most dangerous effect is chronic shortages, as we have at Mercadona Online on the delivery slots because of the order limits we impose for operational reasons:

>not only contaminates the use of orders to reflect true demand, but it can also cause significant financial harm to your business (…) Customers may truly want your product (so there is real demand), but it won’t be reflected in your historical data because no orders were placed (p. 12).

The author advises that we should not put much effort into correcting this:

>Contrast imperfections in our true demand history with our real-life forecast errors, which are often 25%, 50%, or even much more. The fact that the demand history on which we build our unconstrained forecasts is not perfect (but may be off by a few percentage points from true demand) is inconsequential compared to the magnitude of the forecast error.
>**The takeaway is this: Making heroic efforts to capture a perfect history of true demand is unlikely to result in significantly improved forecasts and is probably not worth the effort** (p. 13).

However - *and this note is mine* - the problem is not really the size of the imperfection but the fact that it is systematically biased: our observations will always be less than the true demand. If we don't compensate for this somehow, we will be systematically pessimistic, as explained in [[Models of the Spiral-Down Effect in Revenue Management]].

### Constrained Forecast

Unconstrained forecast -> Constrained forecast

>While the planning process begins with an unconstrained forecast, supply constraints are identified and communicated to the organization through a sales and operations planning (S&OP) or similar process (p. 14).

For S&OP, there is a reference here that I've seen in some other places: [Sales and Operations Planning: The How-To Handbook (Wallace & Stahl) | Amazon](https://www.amazon.com/Sales-Operations-Planning-How-Handbook/dp/0997887729)

**You can't measure the truth either!**

>Because of the unobservability of true demand, we cannot with certainty measure the performance of our unconstrained forecast. At best, we can measure its accuracy versus our proxy for true demand (p. 14).

### Demand Volatility

Coefficient of variation (CV):

CV = standard deviation / mean

>there tends to be a strong inverse relationship between the volatility of a demand pattern, and our ability to forecast it accurately (p. 15).

That matches the conclusions about randomness in [[📜 ‘Horses for Courses’ in demand forecasting]].
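A minimal simulation of my own (not from the book) tying the CV back to the coin-toss experiment above: with n fair coins, the daily number of heads has mean n/2 and standard deviation √n/2, so CV = 1/√n, and that ratio is precisely the irreducible limit on relative accuracy.

```python
import random
import statistics

def coin_cv(n_coins: int, days: int = 2_000, seed: int = 42) -> float:
    """Simulate `days` days of tossing `n_coins` fair coins and return the
    coefficient of variation (std / mean) of the daily number of heads."""
    rng = random.Random(seed)
    daily_heads = [
        sum(rng.random() < 0.5 for _ in range(n_coins)) for _ in range(days)
    ]
    return statistics.stdev(daily_heads) / statistics.mean(daily_heads)

# Theory: mean = n/2, std = sqrt(n)/2, so CV = 1/sqrt(n): halving the
# irreducible relative error requires quadrupling the number of coins.
for n in (10, 100, 1000):
    print(f"{n:>5} coins: simulated CV={coin_cv(n):.3f}, theory={n ** -0.5:.3f}")
```

This is also why aggregate demand (more "coins") tends to be more forecastable than item-level demand, which connects with the aggregation discussion later in the book.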
But the key thing, as suggested in much of the literature about forecasting (e.g., [[📜 Demand management E-fulfillment]]):

>volatility analysis suggests a novel way to improve forecasting performance — by **smoothing demand!**

### Evils of volatility

There are many common practices that work against you (I'm glad we don't use them at Mercadona Online):

>Many organizational practices (such as promotional activities, sales contests, and the quarter end push) only serve to increase volatility. Typical practices are designed to produce record sales weeks rather than promote smooth, consistent, and more profitable growth.

Record sales weeks can even result in less revenue (e.g., costs of the promotion, drops in sales below the average after the push weeks...).

>Rather than creating costly incentives to spike demand, it may make more sense to design incentives that smooth demand. Smooth and stable growth can be managed profitably. Growth via boom and bust cycles is not necessarily best for an individual business, or for the economy as a whole (…) One of the surest and cheapest ways to get better forecasts is to simply make the demand forecastable (p. 22).

### Evaluating forecast performance

Take a good look at the bias:

>An important, yet sometimes overlooked metric, is the bias in the forecast. Bias indicates whether the forecast is chronically too high or too low, and is often a sign of political influences that are contaminating the forecasting process. When forecasts are unbiased, an organization can operate effectively with less forecast accuracy because the positive and negative errors are balanced over time, tending to cancel each other out (p. 23).

Unfortunately, not all unbiased forecasts allow errors to cancel each other out: imagine forecasting your daily workforce...

Also, you would want to compare with a simple forecast:

>The fundamental standard for comparison is with a naïve forecast — something simple and easy to compute with minimal effort.
>Common examples of naïve models include the random walk (using the last known actual as your future forecast), a seasonal random walk (such as using the corresponding period from last year as your forecast for this year), or a moving average (p. 23).

### Embarking on improvement

Never satisfied:

>Our forecasts **never seem to be as accurate as we would like them to be, or need them to be**. As a result, there is a strong temptation to throw money at the problem in hopes of making it go away (...) only to end up with the same old lousy forecasts (p. 24).

## 2. Worst Practices in Business Forecasting: Part 1

Typical ML advice.

### Careful with overfitting

### Don't confuse model fit with forecast accuracy

### Accuracy expectations

>Management certainly has expectations for forecast accuracy, and these are often expressed in performance goals, such as, MAPE must be less than 20%. But is there any basis for these kinds of objectives, and what accuracy is reasonable to expect? (p. 43).

>**As pathetic as it may sound, perhaps the only reasonable expectation for forecasting performance is to beat a naïve model (or at least do no worse), and to continuously improve the process** (p. 46).

### Outliers

>In cases where you think you know the cause of the outlier, another method is to separate the regular history from the outlier history. For example, if big spikes in demand are caused periodically by huge special orders from one particular customer, you could split the data into separate demand histories for the regular customers and for the one special customer with the huge orders (p. 52).

## 3. Worst Practices in Business Forecasting: Part 2 (processes and practices)

### Politics of Forecasting

>Differences between plans and forecasts tend to cause organizational consternation, but exposing these gaps should really be viewed as a blessing.
>When the marketplace is saying that you are selling less (or more) than the original plan, **this gives you a chance to take action** (p. 59).

### Overinvesting in the forecasting function

![[Pasted image 20220109093857.png]]

>Elaborate forecasting processes will often backfire because they permit too much participation — **too many touch points where individuals can add their biases** and personal agendas (p. 68).

### Gaming the metrics

#### Aggregating results

>in aggregation [over time, over product families] the positive and negative errors of the individual item forecasts tend to cancel each other (...) When you aggregate forecasts and actuals before computing the error, you are actually measuring bias (p. 70).

### Denominator management

Awesome term!

>While MAPE normally uses Actuals as the denominator, some companies instead use Forecasts as the denominator of their error calculations. This can create a subtle bias to overforecast (F > A) because of asymmetry in the metric — the APE is less for each unit of overforecasting than it is for each unit of underforecasting, as illustrated in this table:

![[Pasted image 20220109094749.png]]

### Forgetting the Goal

Make sure you have a good "utility function" for the forecast (e.g., optimizing for resource utilization while forgetting about the consequences of unfulfilled demand):

>In the narrow sense, the objective of forecasting is to produce better forecasts. But in the broader sense, the objective is to improve organizational performance — more revenue, more profit, increased customer satisfaction (p. 94).

### Unfair objectives

I'm kind of surprised by this piece of advice. Wouldn't it be too naïve? In a case of censored demand, it directly contradicts the previous point...
>unconstrained true demand is often unobservable, with no reliable way to measure it (…) It is better to report performance against something that can be unambiguously measured (…) If there is not a reliable way to measure true demand, don’t include it in your performance reporting (p. 95).

## 4. Forecast Value Added Analysis

This is mentioned multiple times throughout the book, and it is as simple as measuring the improvement in a forecast metric at each step of the forecasting process, so that you can cut off the steps that don't add value.

![[Pasted image 20220109095836.png]]

Turns out it is used in AstraZeneca XD (p. 104).

## 5. Forecasting without History

Typical stuff you already know: forecasting by analogy, the Delphi method (better told in [[📖 Noise. A Flaw in Human Judgement]])...

## 6. Alternative Approaches to the Problems of Business Forecasting

Demand smoothing (make the forecast easier) and supply chain engineering (less need to rely on the forecast).

## 7. Implementing a Forecasting Solution

>We all dream of an automated system (…) But this is unrealistic. There are good automated forecasting systems that provide significant benefits by reducing the costs of forecasting — **minimizing the analyst time required — and freeing the organization to focus on its most critical and high-value forecasts** (p. 151).

Preproject assessment. Do you have data engineering???

>Do you maintain sufficient history (at least three years is preferred) at the necessary level of granularity? Are there processes in place to maintain the integrity of this data? (p. 153).

## 8. Practical First Steps

1. Recognize the volatility versus accuracy relationship.
2. Determine inherent and artificial volatility.
3. Understand what accuracy is reasonable to expect.

>[[Spyros Makridakis]], [[Hogarth]], and [[Gaba]] state it thusly: …a huge body of empirical evidence has led to the following conclusions (...)
>Simple models do not necessarily fit past data well but predict the future better than complex or sophisticated models (p. 166).

4. Use forecast value added analysis to eliminate wasted efforts.
5. Utilize meaningful performance metrics and reporting.
6. Eliminate worst practices.
7. **Consult forecasting resources.**

Your typical reminder of [[✍️ Refusing to stand on the shoulders of giants]].

## 9. What Management Must Know About Forecasting

A bit of repetition from previous chapters in condensed form. But also:

>Estimating the upper bound on forecast accuracy is a more difficult problem with no easy solution. However, be thankful if you are significantly beating a naïve forecast (p. 181).

## Appendix: Forecasting FAQs

Yet another section that is a recap. But there are some hidden gems.

### How do we set performance goals?

This is a hard pill to swallow in general (although I like to know the ambition on a particular problem):

>You should not put a specific number on your forecasting performance goal. You don’t know in advance what accuracy a naïve model will be able to achieve, so you don’t know what specific accuracy you are expected to beat (p. 194).

### We can't seem to reach the level of forecast accuracy the business needs!

Nassim [[Nassim Nicholas Taleb|Taleb]] in a job outside his role of polemicist!

>For an excellent discussion of things to do when you can’t forecast as well as you’d like, see the October–December 2009 issue of the International Journal of Forecasting for a special section on decision making and planning under low levels of predictability, edited by [[Spyros Makridakis]] and [[Nassim Nicholas Taleb]] (p. 196).

### How to further optimize?

Let's go down to the proper aggregation level:

>Who cares if accuracy is 98% at the highest level of aggregation if you have terrible forecasts at the operational level where it counts? (p. 204).

### How much historical data do we need?

Seasonality, we would like to need less data to predict you :/.
Between COVID and Filomena, what a disaster:

>a common rule of thumb is to have at least three full cycles of observations to detect or estimate the seasonal pattern (p. 222).

### How do you factor in returns, substitutions...?

**Perfected history**

>If you have good records of returns, substitutions, and the like, then you might be able to approximate true demand by making the appropriate adjustments to your historical data. The term perfected history has been used to describe historical data that has been somehow re-created to be more representative of what customers really wanted and when they wanted it (p. 231).

### What to do if the costs associated with forecasting errors are not symmetric?

Another one that hits close to home:

>The forecast should still represent an unbiased best guess of what is really going to happen. **If the cost of forecast error is not symmetric, you would take this into consideration in your planning process.** (...) **you still forecast what you really think is going to happen** — what you are really going to sell — but you compensate for asymmetric error costs in your planning decision (p. 234).

### How can you get from unconstrained to constrained forecasts without any planning taking place?

Am I mixing something up? Is the term constrained forecast somewhat misleading? First you forecast unconstrained demand, then you impose constraints:

>We start the process trying to figure out what customers want and when they want it (a guess at unconstrained true demand). Then we identify constraints related to supply (can we make or procure that much?) or other financial or budgetary constraints. At some point we need to determine what we really think is going to happen, and this I have called the constrained forecast. It represents our best guess at what is really going to happen — how much we are really going to sell (...) **The plan represents a specific course of action that the company decides to undertake** (p. 235).

### When do you stop trying?
In life? Never. In forecasting:

>If your error is 10% or 20% less than the error of a naïve model, that may be about the best you can ever do, so don’t waste time trying to reach perfection (p. 236).

And then focus on smoothing demand.

## Other notes

* I'm curious about a reference, a book by [[Donald Wheeler]], [[📖 Understanding Variation]], with [[Statistical Process Control]] techniques.