The Value of Models

Monday, 08 April 2024
By Chris Yapp


One of my personal quirks over the years is that, from time to time, I take my various slide decks on different topics, amalgamate them, and look at what is now past its sell-by date, and also at what my blind spots have been in the past.

The latest to undergo this treatment was my “futures deck”. It threw up something I’d probably forgotten. I often open with a slide bearing a quote to illustrate the purpose of my talk or workshop contribution. It turned out that one slide in particular had a very long history (25 years), but also felt highly relevant today. The quote is from the statistician George E. P. Box:

“All models are wrong, but some are useful”.

Now, all models, whether quantitative or qualitative (mental), are essential to making sense of the world. My problem is not with models per se, but with people becoming servants to their particular models. An earlier post of mine on the pamphleteers arose from an article that claimed that “comparative advantage” was the closest that economics had to a “law of physics”.

Increased polarisation around economic forecasts has been a feature of public discourse over the last few years. Organisation X (the OBR, the Treasury or the Bank of England, say) will frequently be lambasted for being too pessimistic by one side and too optimistic by the other.

Yet we all live with the consequences of decisions based on the prognostications of those entrusted to tell us when inflation will fall to 2%, or growth will hit 2%, for instance.

For me, the reality is that all economic models are inherently political. Let me illustrate this with a contemporary example. The UK government has not once achieved its stated aims on immigration since 2010. So, if you were charged with forecasting UK GDP growth, what would you use as your base assumption about the size of the UK working population? If your base assumption was that the government would miss its target by 10–50%, which would be a generous margin looking at the last 14 years, how do you deal with accusations of undermining the government? On the flip side, if you use the government’s target as an assumption, how can you claim to be independent?

The late Sir Sam Brittan once told me, at the start of my career, that “when a government changes its measures, the new measures fit its narrative better than the old ones”. He was not a cynical man but a good observer of the “real world”. So, if you look at GDP over the last few years and compare it to GDP per capita over the same period, the government would prefer the former and the opposition the latter.

A few days ago I received yet another article on “was Einstein wrong?”. The tone of these articles suggests that the physics world model created between 1905 and 1915 is about to be overthrown (if only!). I would argue that among the greatest intellectual advances of the 20th century in describing “reality” is “incompleteness”.

Even when special and general relativity were first formulated, over 100 years ago, it was known that they were incomplete models. We still have not integrated gravity into the quantum world, despite many brilliant minds and many billions of pounds. Yet attempts to prove Einstein “wrong” have been deeply problematic.

If an organisation gets its unemployment, inflation or growth forecasts “wrong”, I argue that it is better to focus on the incompleteness of the model and look at what we can learn, rather than on the specific outcomes of that model.

Let me turn to a more day-to-day example of a model with which we are all familiar: the weather forecast.

If I want to forecast the weather tomorrow, even with the speed and capacity of modern computing there is a practical limit: a model that takes 30 or more hours to run and produce its outputs cannot tell me anything useful about tomorrow. Of course, what was practically achievable 30 years ago was much more limited than now or in the future. Yet our models are still incomplete.
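To make that limit concrete, here is a minimal sketch in Python, with invented run times. The only point it illustrates is that a run is useful as a forecast only if it finishes before the period it predicts.

```python
from dataclasses import dataclass

@dataclass
class ForecastRun:
    """Hypothetical timings for a single weather-model run."""
    run_hours: float      # wall-clock time to produce outputs
    horizon_hours: float  # how far ahead the forecast looks

    def is_useful(self) -> bool:
        # If the model finishes after the weather it predicts has
        # already happened, it is hindcasting, not forecasting.
        return self.run_hours < self.horizon_hours

# A 30-hour run aimed at tomorrow (24 hours ahead) arrives too late.
print(ForecastRun(run_hours=30, horizon_hours=24).is_useful())  # False
print(ForecastRun(run_hours=6, horizon_hours=24).is_useful())   # True
```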


So, in building a forecasting model, compromises need to be made. If a giant volcano erupted in Iceland or Indonesia tonight, the impact on weather, or on climate over days or even years, could be highly significant, and indeed disruptive to our contemporary understanding of weather and climate systems. Most people, if asked to name the largest volcanic eruption of the 19th century, would probably suggest Krakatoa (1883). There was actually a much larger eruption in 1815: Tambora. Krakatoa comes to mind quickly because the telegraph system had been installed the year before, so it was the first explosion of massive scale whose origin we knew quickly. Building volcanic activity into the model, rather than treating it as an external factor, is neither realistic nor pragmatic.

If you do not know the story, the Wikipedia entry is here. If you fancy yourself as a writer of Netflix disaster series, rewrite Tambora for 2030 with a global population of 8bn!

While I understand why forecasters of economic outcomes may wish to protect the IPR in their models, our collective ability to improve forecasts is hampered as a result. Opinion pollsters have signed up to codes of conduct on their methodologies. Those who have not, I simply ignore.

What I would like to see is what might be called a common template for evaluating differing forecasts (a sketch follows the list below).

First, there are the inputs. What are they, where do they come from, and how are they sourced?

Second, there is the modelling process. For example, how does the model evaluate the size of the labour force, including dealing with skill shortages and shifts such as “zero hours” or gig economy working?

Third, there are the outputs. The challenge here is the sensitivity of the outputs. For instance, if a model has energy prices as an input, what range is catered for? We might say that oil is priced between $60 and $100 within the model. An outcome of, say, $40 or $150 would be outside the model’s range.

Fourth, what factors are external to the model? That is to say, which are treated as inherently unpredictable, computationally complex or long-tail risks?

Finally, I would argue for publishing the track record. Where has the model previously been too optimistic or too pessimistic?
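To make the template concrete, here is a minimal sketch in Python of what such a common reporting structure might look like. Every field name and figure here is an illustrative assumption of mine, not any forecasting body’s actual disclosure format.

```python
from dataclasses import dataclass

@dataclass
class InputRange:
    """An input assumption and the range the model caters for."""
    name: str
    low: float
    high: float

    def covers(self, outcome: float) -> bool:
        return self.low <= outcome <= self.high

@dataclass
class ForecastDisclosure:
    """Hypothetical common template for reporting a forecast."""
    inputs: list[InputRange]     # 1. what goes in, and from where
    methodology: str             # 2. e.g. how the labour force is modelled
    external_factors: list[str]  # 4. what is deliberately left outside
    track_record: str            # 5. past optimism or pessimism

    def outcome_within_model(self, name: str, outcome: float) -> bool:
        # 3. sensitivity of outputs: did the realised value fall
        # inside the range the model catered for?
        for rng in self.inputs:
            if rng.name == name:
                return rng.covers(outcome)
        raise KeyError(f"{name} is not a modelled input")

# The oil-price example from the text: $60-$100 is modelled,
# so realised prices of $40 or $150 fall outside the model.
disclosure = ForecastDisclosure(
    inputs=[InputRange("oil_usd_per_barrel", 60, 100)],
    methodology="(published alongside the forecast)",
    external_factors=["major volcanic eruption", "pandemic"],
    track_record="growth forecasts historically too optimistic",
)
print(disclosure.outcome_within_model("oil_usd_per_barrel", 150))  # False
print(disclosure.outcome_within_model("oil_usd_per_barrel", 75))   # True
```

With a shared structure like this, comparisons could say not just whose number was closest, but whose model range the real world actually stayed inside.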

Around the start of each year, league tables are often published comparing the track records of different forecasting bodies over the previous year. I think the value of these league tables could be enhanced if there were a common reporting regime that would allow forecasting bodies to learn from each other, but also allow users of the outputs to be better informed.


Let me return to the weather. Where I live, there is a hill a few miles away to the west, and we can experience very localised weather. I’ve had days where the weather has changed five times while driving to the nearest motorway junction, six miles away.

The weather apps have improved considerably in the last few years. Hourly probabilities of rain at postcode level are useful for maintaining our garden, and the forecasts are very wrong only rarely. After a forecast of a 90% chance of rain for much of a day on which the outcome was dry throughout, I tried an experiment and looked at the forecast for the neighbouring westerly postcode. Now, when the two diverge, I can make my judgement with some supporting evidence.
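The cross-check itself is easy to mechanise. Here is a sketch with invented probabilities, not any weather app’s data or method:

```python
# Compare hourly rain probabilities for two neighbouring postcodes
# and flag the hours where they diverge enough to warrant caution.

def divergent_hours(local: list[float], westerly: list[float],
                    threshold: float = 0.4) -> list[int]:
    """Return the hour indices where the two forecasts disagree."""
    return [h for h, (a, b) in enumerate(zip(local, westerly))
            if abs(a - b) >= threshold]

local_pct = [0.9, 0.9, 0.8, 0.2]     # rain probability, by hour
westerly_pct = [0.2, 0.3, 0.7, 0.2]  # neighbouring postcode to the west

print(divergent_hours(local_pct, westerly_pct))  # [0, 1]: treat with caution
```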

I wish that we could do that with the outputs of economic forecasts. I don’t think we can get them “right” all the time, but I do think that we could do better than we do now. For those with a mortgage, or businesses facing higher interest rates, even modest improvements could be very helpful.

Simply attacking the OBR, for instance, does nothing to identify the areas on which to focus model improvement, which I believe is both needed and doable.

I started with George E. P. Box; let me finish with Wittgenstein:

“Pictures depict. Representations represent.”

You may need a stiff drink. Cheers!
