Wednesday, January 17, 2018

What to theorize when your theory's rejected

Sommerfeld and Bohr: ad hoc model builders rejecting Newtonian physics ... for action p dx ~ h (ca. 1919)
I was part of an epic Twitter thread yesterday, initially drawn into a conversation about whether the word "mainstream" (vs "heterodox") was used in the natural sciences (to which I said: not really, but the concept exists). There was one sub-thread that asked a question that is really more a history of science question (I am not a historian of science, so this is my own distillation of others' work as well as a couple of my undergrad research papers). It began with Robert Waldmann tweeting to Simon Wren-Lewis:
... In natural sciences hypotheses don't survive statistically significant rejection as they do in economics.
Simon's response was:
They do if there is no alternative theory to explain them. The relevant question is what is an admissible theory.
To which both Robert and I said we couldn't think of any examples where this was the case. Simon Wren-Lewis then asks an interesting question about what happens when your theory starts meeting the headwind of empirical rejection:
How can that logically work[?] Do all empirical deviations from the (at the time) believed theory always come along at the same time as the theory that can explain those observations? Or in between do people stop doing anything that depends on the old theory?
The answer to the second question is generally "no". Some examples followed, but Twitter can't really do them justice. So I thought I'd write a blog post discussing some case studies in physics of what happens when your theory's rejected.

The Aether

The one case I thought might be an example where natural science didn't reject a theory (hence my qualification that there were no examples in post-war science) was the aether: the substance posited to be the medium in which light waves were oscillating. The truth is that this theory wasn't invented to make sense of any particular observations (Newton thought it explained diffraction), but rather to soothe the intuition of physicists (specifically Fresnel's; he developed the wave theory of light in the early 1800s). If light is a wave, it must be a wave in something, right? The aether was terribly stubborn for a physical theory in the Newtonian era. Some of the earliest issues arose with Fizeau's experiments in the 1850s. The "final straw" in the traditional story was the Michelson and Morley experiment, but experiments continued to test for the existence of an "aether wind" for years afterward (you could even call this 2009 precision test of Lorentz invariance a test of the aether). 

So here we have a case where a hypothesis was rejected and it was over 50 years between the first rejection and when the new theory "came along". What happened in the interim? Aether dragging. Actually, the various experiments were considered confirmations of particular hypotheses about how the aether interacts with matter (even including Michelson and Morley's). 

But Fresnel's wave theory of light didn't really need the aether, and there was nothing the aether did in Fresnel's theory besides exist as a medium for transverse waves. Funnily enough, this is actually a problem because the aether apparently didn't support longitudinal waves, which makes it very different from any typical elastic medium. Looking back on it, it really doesn't make much sense to posit the aether. To me, that implies its role was solely to soothe the intuition; since we as physicists have long given up that intuition, we can't really reconstruct how we would have thought about it at the time, in much the same way we can't really imagine what writing looked like to us before we learned how to read.

So in this case study, we have a theory that was rejected before the "correct" theory came along, and physicists continued to use the "old theory". However, the problem with this as an example of Simon's contention is that the existence of the aether didn't have particular consequences for the descriptions of diffraction and polarization (the "old theory") for which it was invented. It was the connection between aether and matter that had consequences — in a sense, you could say this connection was assumed in order to be able to try and measure it. I can't remember the reference, but someone once wrote that the aether experiments seem to imply that nature was conspiring in such a way as to make the aether undetectable!

The Precession of Mercury

This case study, brought up by Simon Wren-Lewis, better represents what happens in the natural sciences when data casts doubt on a theory. Precision analysis of astronomical data in the mid-1800s by Le Verrier led to one of the most high profile empirical errors of Newton's gravitational theory: it got the precession of Mercury's perihelion wrong by tens of arc seconds per century. As Simon says: physicists continued to use Newton's "old" theory (and actually do so to this day) for nearly 50 years until the "correct" general theory of relativity came along.

But Newton's old theory was wildly successful (the observed discrepancy was only about 40 arc seconds per century). In one century, Mercury travels roughly 500 million seconds of arc along its orbit, meaning this error is on the order of one part in ten million. No economic theory is that accurate, so we could say that this case study is actually a massive case of false equivalence.

However, I think it is still useful to understand what happened in this case study. In our modern language, we would say that physicists set a scope condition (region of validity) based on a relevant scale in the problem: the radius of the sun (R). Basically, when the perihelion distance r of the orbit is not large compared to R, other effects can potentially enter. At R/r ~ 2%, this ratio is larger for Mercury than for any other planet (Mercury is also in a 3:2 spin-orbit resonance with the sun). Several ad hoc models of the sun's mass distribution (as well as other effects) were invented to try and account for the difference from Newton's theory (as mentioned by Robert). Eventually general relativity came along (setting a scale — the Schwarzschild radius 2 G M/c² — in terms of the strength of the gravitational field based on the sun's mass M and the speed of light, not its radius). Despite how weird it was to think of the possibility of e.g. black holes or gravitational waves as fluctuations of space-time, the theory was quickly adopted because it fit the data.
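For concreteness, the general relativistic correction is a standard closed-form result (quoted here, not derived) for the extra perihelion advance per orbit:

$$
\Delta \phi \simeq \frac{6 \pi G M}{c^{2} a (1 - e^{2})} = \frac{3 \pi r_{s}}{a (1 - e^{2})}
$$

where $r_{s} = 2GM/c^{2}$ is the Schwarzschild radius, $a$ is the semi-major axis, and $e$ is the eccentricity. Plugging in Mercury's orbit ($a \approx 5.8 \times 10^{10}$ m, $e \approx 0.21$) gives about 0.1 arc seconds per orbit, or roughly 43 arc seconds per century over Mercury's roughly 415 orbits.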

The scale R set up a firewall preventing Mercury's precession from burning down the whole of Newtonian mechanics (which was otherwise fairly successful), and ad hoc theories were allowed to flourish on the other side of that firewall. This does not appear to happen in economics. As Noah Smith says:
I have not seen economists spend much time thinking about domains of applicability (what physicists usually call "scope conditions"). But it's an important topic to think about.
And as Simon says in his tweet, economists just go on using rejected theory elements and models without limiting their scope or opening the field to ad hoc models. This is also my own experience reading the economics literature.

Old Quantum Theory

Probably my favorite case study is so-called old quantum theory: the collection of ad hoc models that briefly flourished between Planck's quantum hypothesis in 1900 and Heisenberg's quantum mechanics in 1925. In the years leading up to 1900, lots of problems had started to arise with Newtonian physics (with the caveat that it was mostly wildly successful, as mentioned above). There was the ultraviolet catastrophe (a divergence as the wavelength goes to zero) in the classical description of blackbody radiation. Something was happening when the wavelength of light started to get close to the atomic scale. Before Planck posited the quantum, several ad hoc models (including ones based on atomic motion) were invented to give different functional forms for blackbody radiation, in much the same way different models of the sun allowed for possible explanations of Mercury's precession.

In much the same way the radius of the sun set the scale for the firewall for gravity, Planck set the scale for what would become quantum effects by specifying a fundamental unit of action (energy × time or momentum × distance) now named after him: h. Old quantum theory set this up as a general principle by saying phase space integrals could only yield integer multiples of h (Bohr-Sommerfeld quantization). Now h = 6.626 × 10⁻³⁴ J·s is tiny on human scales, which is related to why Newtonian physics is so accurate (and still used today); again, using this as a case study for economics is another false equivalence, as no economic theory is that accurate. But in this case, Newtonian physics was basically considered rejected within the scope of old quantum theory and stopped being used. That rejection was probably a reason why quantum mechanics was so quickly adopted (notwithstanding its issues with intuition that famously flustered Einstein and continue to this day). Quantum mechanics was invented in 1925, and by the 1940s physicists were working out the renormalization of quantum field theories, putting the finishing touches on the most precise theory ever developed. Again, it didn't really matter how weird the theory seemed (especially at the time) because the only important criterion was fitting the empirical data.
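For concreteness, the Bohr-Sommerfeld condition referred to above is the textbook statement

$$
\oint p \, dq = n h, \qquad n = 1, 2, 3, \ldots
$$

for each periodic coordinate $q$ with conjugate momentum $p$. Applied to a circular orbit in hydrogen, it quantizes angular momentum in units of $\hbar = h/2\pi$ and reproduces the Bohr energy levels mentioned below.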

There's another way this case study shows a difference between the natural sciences and economics. Old quantum theory was almost immediately dropped when quantum mechanics was developed, and ceased to be of interest except historically. Its one major success lives on in name only as the Bohr energy levels of hydrogen. However, Paul Romer wrote about economic models using the Bohr model as an analogy for models like the Solow model that I've discussed before. Romer said:
Learning about models in physics–e.g. the Bohr model of the atom–exposes you to time-tested models that found a good balance between simplicity and insight about observables.
Where Romer sees a "balance between simplicity and insight" that might well be worth using if it were an economic model, this physicist sees a rejected model that's part of the history of thought in physics. Physicists do not learn the Bohr model (you learn of its existence, but not the theory). The Bohr energy level formula turned out to be correct, but today's undergraduate physics students derive it from quantum mechanics, not from "old quantum theory" and Bohr-Sommerfeld quantization.
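For reference, the formula in question is the standard result

$$
E_{n} = - \frac{1}{2} \frac{m_{e} c^{2} \alpha^{2}}{n^{2}} \simeq - \frac{13.6\;\text{eV}}{n^{2}}
$$

(with $\alpha$ the fine structure constant), which you can get either from the Bohr-Sommerfeld condition above or from solving the Schrödinger equation for the Coulomb potential.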

A Summary

There is a general pattern where some empirical detail is at odds with a theory in physics:

  • A scale is set to firewall the empirically accurate pieces of the theory
  • A variety of ad hoc models are developed at that new scale where the only criterion is fitting the empirical data, no matter how weird they may seem

I submit that this is not how things work in economics, especially macroeconomics. Simon says we should keep using rejected theories even without a scope condition firewall, and scope conditions, per Noah, don't seem to be thought about at all. New theories, no matter how weird, aren't judged based on their empirical accuracy alone.

But a bigger issue here I think is that there aren't any wildly successful [1] economic models. There really aren't any macroeconomic models accurate enough to warrant building a firewall. This should leave the field open to a great deal of ad hoc theorizing. But in macro, you get DSGE models despite their poor track record. Unless you want to consider DSGE models to be ad hoc models that may go the way of old quantum theory! That's really my view: it's fine if you want to try DSGE model macro and it may well eventually lead to insight. But it really is an ad hoc framework operating in a field that hasn't set any scales because it hasn't had enough empirical success to require them.

...

Footnotes:

[1] Noah likes to tell a story about the prediction of BART ridership using random utility discrete choice models (which I mentioned here). One of the authors of that study has said the result was a bit of a fluke ("However, to some extent, we were right for the wrong reasons.").

Monday, January 15, 2018

Is low inflation ending?


I'm continuing to compare the CPI forecasts to data (new data came out last Friday, shown on the forecast graph for YoY CPI, all items [1]). I think the data is starting to coalesce around a coherent story of the Great Recession in the US. As you can see in the graph above, the shock centered at 2015.1 (2015.1 + 0.5 = 2015.6 based on how I displayed the YoY data) is ending. This implies that (absent another shock to CPI) we should see "headline" CPI (i.e. all items) average 2.5% [2].

It is associated with the shocks to the civilian labor force (CLF, at 2011.3), nominal output per worker (NGDP/L, at 2014.6), and the prime-age CLF participation rate (in 2011) — all occurring after the Great Recession shock to unemployment (2008.8, see also my latest paper). What we have is a large recession shock that pushed people out of the labor force (and reduced entry by people who would otherwise have joined it). This shock is what then caused the low inflation [3] (in terms of CPI or PCE [2]). This process is largely ending and we are finally returning to a "normal" economy [4] nearly 10 years later.
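For readers who haven't followed the earlier posts, the structure being fit here is, schematically, a constant continuously compounded inflation rate plus a sum of logistic shock terms (this is shorthand; the details are in the paper linked above):

$$
\log \text{CPI}(t) \simeq \gamma \, t + c + \sum_{i} \frac{a_{i}}{1 + e^{-(t - t_{i})/b_{i}}}
$$

so that measured inflation is approximately the dynamic equilibrium rate $\gamma$ (about 2.5% for headline CPI) except in the vicinity of a shock centered at $t_{i}$.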

...

Update + 2 hrs

I thought I'd add the graph of the full model over the post-war period (including the guides mentioned in [1]), but also note that two of the three periods David Andolfatto mentions as "lowflation" periods line up with the two negative shocks to CPI (~ 1960-1970, and ~ 2008-2018):


The period 1996-2003 does not show up as low headline CPI inflation in the way it shows up as core PCE inflation below 2%. However, 1996-2003 roughly corresponds to the "dynamic equilibrium" period of CPI inflation as well as PCE inflation (~ 1995-2008) — which in the case of PCE inflation is ~ 1.7% (i.e. below 2%). Therefore the 2% metric for lowflation measured with PCE inflation would actually include the dynamic equilibrium, and not just shocks. Another way to say it is that the constant threshold (at 2%) detector gives a false alarm for 1996-2003, whereas a "dynamic equilibrium detector" does not.

...

Footnotes:

[1] Here is the log derivative (i.e. continuously compounded annual rate of change) and the level (with new dynamic equilibrium guides as diagonal lines at 2.5% inflation rate):


[2] Note that the dynamic equilibrium for core PCE inflation that economists like to use is 1.7%, and so the end of the associated shock will not bring inflation all the way back up to the Fed's stated target of 2%.

[3] Interestingly, this negative shock to inflation happens at the same time as a negative shock to unemployment: i.e. inflation went down at the same time unemployment went down, giving further evidence that the Phillips curve has disappeared.

[4] This is a "normal" economy in the sense of dynamic equilibrium, but it might not seem normal to a large portion of the labor force as there has been only a limited amount of time between the end of the demographic shock of the 1970s and the Great Recession shock of the 2000s. As I've said before, there is a limited amount of "equilibrium" data in this sense (the models above would say ca. 1995 to 2008).

Friday, January 12, 2018

Immigration is a major source of growth

Partially because of the recent news — and most certainly because nearly half this country can be classified as a racist zero-sum mouth-breather — I wanted to show how dimwitted policies to limit immigration can be. One of the findings of the dynamic information equilibrium approach (see also my latest paper) is that nominal output ("GDP") has essentially the same structure as the size of the labor force:


The major shocks to the path of NGDP roughly correspond to the major shocks to the Civilian Labor Force (CLF). Both are shown as vertical lines. The first is the demographic shock of women entering the workforce. This caused an increase in NGDP (the shock to CLF precedes the shock to NGDP). The second major shock is the Great Recession. In that case a shock to NGDP caused people to exit the labor force driving down the labor force participation rate (the shock to NGDP came first). The growth rates look like this (NGDP is green, CLF is purple):


The gray horizontal lines represent the dynamic equilibrium growth rates of CLF (~ 1%) and NGDP (~ 3.8%). The dashed green line represents the effects of two asset bubbles (dot-com and housing, described here). Including them or not does not have any major effects on the results (they're too small to result in statistically significant changes to CLF). You may have noticed that there's an additional shock centered in 2019; I will call that the Asinine Immigration Shock (AIS). 

I estimated the relationship between shocks to CLF and shocks to NGDP. Depending on how you look at it (measuring the relative scale factor, or comparing the integrals relative to the dynamic equilibrium growth rate), you can come up with a factor α between about 4 and 6. That is to say, a shock to the labor force results in a shock to NGDP that is 4 to 6 times larger.

Using this estimate of the contribution of immigration to population growth, I estimated that the AIS over the next four years (through 2022) could result in about 2 million fewer people in the labor force (including people deported, people denied entry, and people who decide to move to e.g. Canada instead of the US). The resulting shock to NGDP [1], using the low-end estimate of α = 4, would mean NGDP that is 1 trillion dollars lower in 2022 [2]. This is what the paths of the labor force and nominal output look like:



As you can see, the AIS is going to be a massive self-inflicted wound on this country. What is eerie is that this shock corresponds to the estimated recession timing (assuming unemployment "stabilizes") — as well as the JOLTS leading indicators — implying this process may already be underway. With the positive shock of women entering the labor force ending, immigration is a major (and perhaps the only) source of growth in the US aside from asset bubbles [3].

...

Footnotes:

[1] Since I am looking at the results sufficiently long after the shock (in 2022), it doesn't matter which shock comes first (so I show them as simultaneous, centered in January 2019). However, I think the most plausible story is that the shock to CLF would come first, followed by a sharper shock to NGDP as the country goes into a recession about 1/2 to 1/3 the size of the Great Recession.

[2] It's roughly a factor of 500 billion dollars per million people (evaluated in 2022), since both NGDP and CLF are approximately linear over time periods of less than 10 years (i.e. 1 million fewer immigrants due to the AIS results in NGDP that is 500 billion dollars lower in 2022).
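As a sanity check on the arithmetic (the inputs below are just the estimates quoted above, so treat this as illustrative bookkeeping rather than a model run):

```python
# Back-of-the-envelope check of the AIS estimate above.
# Assumed inputs from the text: ~2 million fewer people in the labor force
# by 2022, and roughly $500 billion of 2022 NGDP per million people
# (the low-end alpha ~ 4 scale factor).
clf_shortfall_millions = 2.0       # fewer people in the labor force by 2022
ngdp_per_million = 0.5e12          # dollars of 2022 NGDP per million people

ngdp_shortfall = clf_shortfall_millions * ngdp_per_million
print(f"Estimated NGDP shortfall in 2022: ${ngdp_shortfall / 1e12:.1f} trillion")
# Estimated NGDP shortfall in 2022: $1.0 trillion
```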

[3] I also tried to assess the contribution of unauthorized immigration to nominal output. However, the data is limited, leaving the effects uncertain. One interesting thing I found, however, is that the data is consistent with a large unauthorized immigration shock centered in the 1990s that almost perfectly picks up as the demographic shock of women entering the workforce wanes (also in the 1990s). As that shock wanes we get the dot-com bubble, the housing bubble, and the financial crisis. It is possible that the estimate of the NGDP growth dynamic equilibrium may be too high because it is boosted by unauthorized immigration that doesn't show up in the estimates of the Civilian Labor Force.

Wednesday, January 10, 2018

Labor shortages reported by firms

Via Twitter (H/T @traderscrucible), I came across this survey data about firms reporting shortages of "qualified" workers. It looks remarkably correlated with JOLTS data (e.g. here), so I ran it through the dynamic information equilibrium model. In general it works fine, but because the series is so short there is some ambiguity in the dynamic equilibrium (there are two local minima: one at about 0.07/y, the other at about 0.11/y). I thought this made for an interesting case study of which model we should believe.

Scenario 1:

0.07/y dynamic equilibrium.
2008.8 recession center (lagging indicator)
overshooting step response to recession
no indication of next recession (lagging indicator)


Scenario 2:

0.11/y dynamic equilibrium.
2008.9 recession center (lagging indicator)
no overshooting
signs of next recession (leading indicator)


Which is it? Neither dynamic equilibrium slope (nor any other model parameter) seems wrong: both are comparable to the 0.098/y value for JOLTS openings or the -0.096/y value for the unemployment rate. My guess is that Scenario 1 is correct because of its consistency as a lagging indicator, at the cost of positing a completely plausible overshoot in the survey data. It also seems unlikely that this series would go from being a measure with one of the longest lags to one with one of the longest leads (assuming the other JOLTS leading indicators are accurate, there is another recession in the next year or so). It is of course arguable that the upcoming recession (if it is indeed upcoming) might be a different type of recession compared to the Great Recession and accompanying financial crisis (e.g. the financial crisis was a surprise, whereas low future growth due to a labor "shortage" is more slow-rolling). In either case, auxiliary hypotheses are needed to resolve the ambiguity in either direction [1].

Whatever the final resolution, I thought it was fascinating that survey data based on a somewhat vague question (What does "qualified" mean to the survey respondent? [2] Are the firms answering "yes" to a perceived shortage offering below-market wages?) posed to human beings follows a mathematical formula. True, it is probably because it is directly anchored by the unemployment rate. However, using this model we can potentially predict how humans will answer a question in the near future, a question that I thought would be potentially clouded by politics. People report that inflation or the deficit is higher if the President is of the opposite political party, so why wouldn't this affect whether you think it's easy or hard to find "qualified" workers ... and there is of course footnote [2].

...

Footnotes

[1] It should be noted that this isn't an indication of a degenerating research program per Lakatos: eventually more data will resolve the dynamic equilibrium slope.

[2] To a significant fraction of HR managers hiring for particular jobs, "qualified" includes being white and male per numerous studies of e.g. submitting resumes with different genders or names that 'sound black' and 'sound white'.

JOLTS follow-up

I thought I'd also show the plot of the JOLTS quits data against the ensemble of leading indicator forecasts:




Tuesday, January 9, 2018

Happy JOLTS data day

The week after the latest unemployment rate data is released, we get the Job Openings and Labor Turnover Survey (JOLTS) data at FRED. I've been tracking these as potential leading indicators of recessions since last summer. There isn't much change in the results; however, I do want to start posting the job openings counterfactual shock estimate alongside the hires. In the leading indicators post, I noted that the hires series seems to experience its shock earlier than the other indicators. However, I also noted that I have exactly one recession to work with [1], so that should be taken with a grain of salt. With the latest data, the indicator that came second [2] (i.e. openings) seems to be showing a possible shock as well (but the series is much noisier and therefore more uncertain).

Here are the two measures with the latest shock counterfactual (in gray):


And here are animations of the evolution of the shocks counterfactuals:



And finally, here are the latest points on the Beveridge curve (also hinting at a shock which would take it back along the path between the 2001 and 2008 labels on the graph):


Note that my most recent paper, available at SSRN, talks about these models and the theory behind them.

...

Footnotes:

[1] The JOLTS data series on FRED begins in December of 2000, effectively at the start of the 2001 recession, so only one complete recession exists in the data.

[2] The center and width of the shocks to various JOLTS measures: hires, openings, quits, and the unemployment rate:


Monday, January 8, 2018

Qualitative economics done right, part 3

Ed. note: This post is late by almost a year. As mentioned below, part of the reason is that I think Wynne Godley's work has been misrepresented by some of his proponents. I added footnote [1] and the text referencing it, and toned down footnote [3].
This was originally going to be a continuation in a series of posts (part 1, part 2, part 2a) based on an UnlearningEcon tweet:
[Steve] Keen (and Wynne Godley) used their models to make clear predictions about crisis
It was part of a debate about what it means to predict things with a qualitative model. I covered Keen in part 2. This post was going to focus on Wynne Godley. One of Godley's contributions to the subject is his "sectoral balances" approach, which is uncontroversial and not exclusively MMT or Post-Keynesian (for example, here is Brad DeLong using the approach).

Now UnlearningEcon says "predictions about crisis" (i.e. how it would play out), not "predictions of crisis" (i.e. that it would occur), which leaves a large gray area of interpretation. However, many of the references to Godley by the heterodox economics community say that he predicted the global financial crisis. As a side note, I wonder if Martin Wolf's FT piece saying Godley helps understand the crisis lent credence to others saying he predicted the crisis?

However, in my research I found that Godley himself doesn't say many of the things attributed to him. He doesn't predict a global financial crisis. He doesn't tell us that the bursting of a housing bubble will lead to a global financial crisis. In the earliest documented source [pdf], Godley says that falling house prices (as already observed in 2006) would lead to lower growth over the next few years (more on this below). This has little to do with "heterodox economics" and in fact is indistinguishable from the story told by mainstream economists like Paul Krugman. For example, Krugman was warning about the effect of a deflating housing bubble on the broader economy in the summer of 2005:
Meanwhile, the U.S. economy has become deeply dependent on the housing bubble. The economic recovery since 2001 has been disappointing in many ways, but it wouldn't have happened at all without soaring spending on residential construction, plus a surge in consumer spending largely based on mortgage refinancing. ... Now we're starting to hear a hissing sound, as the air begins to leak out of the bubble. And everyone ... should be worried.
Unfortunately, Godley's policy note linked above is completely misrepresented in a paper by Dirk Bezemer that I have been directed to on multiple occasions as "documentation" of how the heterodox community predicted the global financial crisis. It was even cited in the New York Times. The paper is “No One Saw This Coming”: Understanding Financial Crisis Through Accounting Models [pdf], and its introduction claims that it's simply a survey of economic models that anticipated the crisis:
On March 14, 2008, Robert Rubin spoke at a session at the Brookings Institution in Washington, stating that "few, if any people anticipated the sort of meltdown that we are seeing in the credit markets at present”. ... [‘no one saw this coming’] has been a common view from the very beginning of the credit crisis, shared from the upper echelons of the global financial and policy hierarchy and in academia, to the general public. ... The credit crisis and ensuing recession may be viewed as a ‘natural experiment’ in the validity of economic models. Those models that failed to foresee something this momentous may need changing in one way or another. And the change is likely to come from those models (if they exist) which did lead their users to anticipate instability. The plan of this paper, therefore, is to document such anticipations, to identify the underlying models, to compare them to models in use by official forecasters and policy makers, and to draw out the implications
Godley's paper above is cited and purportedly quoted to provide a basis for using Stock Flow Consistent models because of their supposed validity. Bezemer's purported quotes of Godley are:
“The small slowdown in the rate at which US household debt levels are rising resulting form the house price decline, will immediately lead to a …sustained growth recession … before 2010”. (2006). “Unemployment [will] start to rise significantly and does not come down again.” (2007)
These quotes appear in a table at the end of the paper (p. 51) as well as in the text (p. 36), but neither of these quotes appears in the cited references to Godley. The second one doesn't appear in any form in any of the cited papers that could be construed as Godley (2007) — which is great for Godley, as unemployment in the US has since fallen to levels unseen in almost two decades [1]. The first is cobbled together from a few words of a much longer passage in Godley (2006), linked above:
It could easily happen that, if house prices stop rising or if the financial-obligations ratio published by the Fed continues to rise, the debt-to-income ratio will slow down during the next few years, much as it did in the late 1980s and early 1990s. ...
The results are a bit surprising, since the apparently quite small differences between debt levels in the four scenarios generate such huge differences in the lending flows. In particular, Scenario 4, the lowest projection, shows that the debt percentage only has to level off slowly and then fall very slightly for the flow of net lending to fall from 15 percent of income in 2005 to 5 percent in 2010. ...
The average growth rates for 2005–10 come out at 3.3 percent, 2.6 percent, 1.8 percent, and 1.4 percent. The last three projections imply sustained growth recessions—very severe ones in the case of the last two. ...
Is it plausible to suppose that the growth of GDP would slow down so much just because of a fall in lending of this size? Figure 7, which shows past (and projected Scenario 4) figures for net lending combined with successive, overlapping three-year growth rates, suggests that it could. Major slowdowns in past periods have often been accompanied by falls in net lending.
Bezemer also says "This recessionary impact of the bursting of asset bubbles is also a shared view." That is to say, the predictions of Godley and Keen [2] about the negative impact of a fall in housing prices are not unique to their models. A good example is the aforementioned Krugman quote; he probably didn't use an SFC model or some non-linear system of differential equations.

But the original discussion with UnlearningEcon was about the usefulness of qualitative economic models (per the title of this post). The thing is that Godley's models were quantitative and do look a bit like real data:


Of course the debt data does look a bit like the counterfactual path shown (in shape; as usual I have no idea what heterodox economists mean when they say "debt" and therefore what their graphs represent, so I plotted several different data sources). However, the GDP growth rates miss the giant negative shock associated with the global financial crisis. This means the model definitely misses something, because debt did follow the shape of the path Godley used as the worst-case scenario.


I wouldn't call this a prediction about the global financial crisis, but rather just a model of the contribution of housing assets to lower GDP growth. But still, it was a quantitative model (one of Godley's sectoral balance models based on the GDP accounting identity). And this is all Godley says it is [3].

Doing the research for this post has given me a newfound respect for Wynne Godley (and Marc Lavoie), but also a real sense of the sloppiness of heterodox economics more broadly, including MMT and stock flow consistent approaches. Maybe because it is such a tribal community (see [3]) there is little introspection and genuine peer review. I know from my own efforts that I get few critiques of my conclusions from people who agree with those conclusions. This leads me to try and be my own "reviewer #2", even to the point where I have built two independent versions of the models I show on this blog on separate computers.

...

Footnotes:

[1] People will undoubtedly bring up other measures of unemployment. However these do not appear to contain additional information not captured in the traditional "U3" measure — U6 ~ α U3 for some fixed α.

[2] Bezemer also says that Steve Keen predicted the crisis:
“Long before we manage to reverse the current rise in debt, the economy will be in a recession. On current data, we may already be in one.” (2006)
But in the original source, this is in reference to Australia. Australia hasn't had a recession since 1991 (in September of 2016, Australia had managed to rack up 100 quarters without recession and at 25 [now 26!] years is second only to [now tied with] the Netherlands that went for 26 years from 1982 to 2008).

[3] I do want to take a moment to mention that Wynne Godley and Marc Lavoie are far more reasonable than you might be led to believe by their proponents out in the Post-Keynesian and MMT community. They'd probably be fine with what I pointed out about SFC models, since the "fix" is just adding a parameter.

On Twitter (see the whole thread), there was an excellent example of how the supporters of Godley and Lavoie aren't doing them any favors. Simon Wren-Lewis showed how a non-flat Phillips curve implies a Non-Accelerating Inflation Rate of Unemployment [NAIRU]. It's a pretty basic argument ...
If π(t) = E[π(t+1)] − a U + b, there exists a U at which inflation is stable: the NAIRU, U = b/a.
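Writing out the one-line algebra (nothing beyond setting actual and expected inflation equal):

$$
\pi_{t} = E[\pi_{t+1}] - a U + b \;\; \Rightarrow \;\; \pi_{t} = E[\pi_{t+1}] \;\;\text{when}\;\; U = \frac{b}{a}
$$

so as long as the slope $a$ is nonzero (i.e. the Phillips curve isn't flat), there is a single unemployment rate consistent with stable inflation.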
Post-Keynesian blogger and Godley and Lavoie fan Ramanan said that they (G&L) had shown there was an exception, and that therefore Wren-Lewis's argument was not valid.

Wren-Lewis responded "[t]hat is obviously not a NAIRU model, because you are saying the [Phillips curve] is flat", which is what I also said:
But [Ramanan]'s purported exception has a flat piece, so it's not a counterexample to [Simon Wren-Lewis]'s argument.
I added that
Technically, [Ramanan]'s [Phillips curve] has two point NAIRUs plus a continuum (between [two of the] points on his graph).
Which turns out is exactly what Mark Lavoie said to Ramanan (and he quoted it on his blog):
Another way to put it is to say that there is an infinite number of NAIRU or a multiplicity of NAIRU (of rates of employment with steady inflation).

Friday, January 5, 2018

Labor market update: comparing forecasts to data

The latest data for the unemployment (U) rate and the (prime-age) civilian labor force (CLF) participation rate are available, so I get to check whether the models have failed or not. Here's the last unemployment model update [1] (which includes a discussion of "structural unemployment") and here's the post about the novel dynamic equilibrium "Beveridge curve" for CLF/U [2] shown below. Now let's add the newest data points (shown in black in the figures below).

First, the unemployment rate forecast remains valid:


And it's still looking better than the history of forecasts from FRB SF and FOMC:


As discussed in the second link [2] above, here are the two CLF forecasts (with and without a shock in 2016):


The "Beveridge curve" (the theory of these dynamic equilibrium "Beveridge curves" is discussed in my latest paper) relating labor force participation to the unemployment rate (a curve you likely would not have seen unless you use the dynamic information equilibrium model) also discussed in [2] is also on track with the latest data:


The shocks to CLF are in red-orange and the shocks to U are in green. In the absence of recession shocks, the data should continue to follow the dotted blue line upwards from the black point. However, it is likely that we will have a recession in the meantime, and so — like the rest of the curve — we will probably see the data deviate towards another dynamic equilibrium (another gray hyperbola). The only place I have seen so far where these kinds of Beveridge curves are stable enough to be useful is the classic Beveridge curve (data for which will be available next Tuesday). That stability arises because the shocks to the two variables are approximately equal in both size and timing. In the case above, the shocks to CLF are not only much smaller, but also much later (even years later), which causes the Beveridge curve above to become a spaghetti-like mess.

Thursday, January 4, 2018

Structural breaks, volatility regimes, and dynamic equilibrium

In scanning through the posters, papers, and discussions in the preliminary schedule of the upcoming ASSA 2018 meeting in Philadelphia, I found a lot of interesting sessions (e.g. two machine learning sessions). As a side note, those who think economics ignores alternative approaches should note the (surprising number of) sessions on institutionalist, Marxian, Feminist, and other heterodox approaches.

One poster from the student poster session caught my eye — in particular the identification of low volatility and high volatility regimes in the S&P 500:


That's from "Structural Breaks in the Variance Process and the Pricing Kernel Puzzle" by Tobias Sichert [pdf]. It seems these low volatility and high volatility regimes line up with the transition shocks of the dynamic information equilibrium model (green line):


The top picture is the dynamic information equilibrium model with shock widths (full width at half maximum, described here). The bottom graph shows the structural breaks from Sichert's paper (black indicating the start of a low volatility regime, red indicating the start of a high volatility one, per the figure at the top of this post). However, the analysis starts in 1992, so that isn't so much the beginning of a low-volatility regime as the beginning of the data being looked at (therefore I indicated it with a dashed line). I colored in the high volatility regimes with light red, and we can see these regions line up with the shock regions in the dynamic equilibrium model. The late 1990s/early 2000s is seen as a single high volatility regime in Sichert's analysis, and the Great Recession regime seems to continue for a while after the initial shock — possibly due to a step response? However, overall, volatility looks like a good independent metric to identify periods of dynamic equilibrium (low volatility) and shocks (high volatility).
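To illustrate what I mean by volatility as an independent metric, here is a minimal sketch (this is not Sichert's structural break estimator; it's just a naive rolling volatility of daily log returns, with the data loading omitted and the window and threshold chosen arbitrarily):

```python
import numpy as np
import pandas as pd

def rolling_volatility(close: pd.Series, window: int = 63) -> pd.Series:
    """Annualized rolling volatility of daily log returns.

    `close` is a daily closing-price series (e.g. the S&P 500);
    the 63-day window (~ one quarter) is an arbitrary choice.
    """
    log_returns = np.log(close).diff()
    return log_returns.rolling(window).std() * np.sqrt(252)

# Illustrative regime flag: call anything well above the median "high volatility".
# A structural break test would instead estimate the regime boundaries.
# vol = rolling_volatility(sp500_close)    # sp500_close: your price series
# high_vol = vol > 1.5 * vol.median()
```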

Wednesday, January 3, 2018

Canada's below-target inflation


Some years ago I had predicted that Canada would begin to undershoot its 2% inflation target, and then touted the success of the information transfer monetary model when that prediction came true. However, I mostly see the monetary model as at best a local approximation, with the dynamic information equilibrium model being better empirically (discussed in terms of US inflation at this linked post).

To that end, I thought I'd put together how you'd look at Canada's below-target inflation in terms of the dynamic information equilibrium model (of all items CPI). In this case, the dynamic equilibrium is approximately 2%, and the undershooting is due to a long-duration shock possibly triggered by the global financial crisis/Great Recession.



The first graph is the full CPI level dataset from FRED. The second shows more recent CPI level data. The third shows year-over-year inflation. The main shocks are the demographic shock centered at 1978.65 ± 0.04 (width [1] = 3.0 y) and the post-crisis shock centered at 2017.7 ± 4.9 (width = 3.3 y). There are two additional shocks in 1991 and 1993 to deal with the bump in the CPI.

...

Footnotes

[1] I've been a bit sloppy on this blog about what I mean by the "width" of a transition, although I nearly always use the "width" or "inverse steepness" parameter $b_{0}$ of the logistic function

$$
f(t) = \frac{a_{0}}{1+e^{-\frac{t-t_{0}}{b_{0}}}}
$$

Since the derivative is nearly a Gaussian function, we can think of the 1-standard deviation width $\sigma$, which is approximately

$$
\sigma \approx \sqrt{\frac{8}{\pi}} b_{0} \simeq 1.6 b_{0}
$$

based on matching the peak of the logistic derivative to the peak of a Gaussian with the same area $a_{0}$. The other possible measure is the full width at half maximum ($FWHM$), which is

$$
FWHM = 2 b_{0} \log \left(3 + 2\sqrt{2} \right) \simeq 3.5 b_{0}
$$

Therefore $b_{0} \simeq 3.0\;\text{y}$ means $\sigma \simeq 4.8\;\text{y}$ and $FWHM \simeq 10.6 \;\text{y}$. Using the $\sigma$ measure, 95% of the shock occurs within a $4 \sigma$ span (i.e. $2 \sigma$ on either side), or about 19.1 years.
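A quick numerical check of those conversion factors (pure math, nothing model-specific; the choice $b_{0} = 1$ and the grid are arbitrary):

```python
import numpy as np

b0 = 1.0                                    # "inverse steepness" parameter
t = np.linspace(-20, 20, 200001)
# Derivative of the logistic function above with a0 = 1
df = np.exp(-t / b0) / (b0 * (1 + np.exp(-t / b0)) ** 2)

# Full width at half maximum: should be 2 * b0 * log(3 + 2*sqrt(2)) ~ 3.5 * b0
above_half = t[df >= df.max() / 2]
fwhm = above_half[-1] - above_half[0]

# Gaussian (area 1) whose peak matches the logistic derivative's peak 1/(4 b0):
# 1 / (sigma * sqrt(2 pi)) = 1 / (4 b0)  =>  sigma = sqrt(8 / pi) * b0 ~ 1.6 * b0
sigma = np.sqrt(8 / np.pi) * b0

print(f"FWHM  = {fwhm:.2f} * b0")   # ~ 3.53
print(f"sigma = {sigma:.2f} * b0")  # ~ 1.60
```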