Friday, August 26, 2016

Is the information equilibrium interest rate model wrong?

There's really no specific evidence right now. This speculative extrapolation, based on little more than assuming that QE3 would unwind as fast as it started once the Fed raised interest rates, is clearly wrong. However, the information equilibrium (IE) model doesn't tell us exactly how fast the variables in a given IE relationship change, so my speculation was probably unwarranted. If left to market forces, the variables should basically follow a random walk towards the new equilibrium.

The interest rate model (see the paper) is basically

r = c \log \frac{NGDP}{MB} + b

where r is some short term interest rate (we'll use the effective Fed funds rate), MB is the monetary base, and c and b are parameters. This model predicted that a rise in the interest rate r would result in a fall in MB. So far, it is falling ... slowly. David Beckworth noted what he called passive unwinding. Basically, in the IE model version, NGDP continues to grow which causes the (information) equilibrium interest rate to rise. Eventually, the equation above holds.
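As a concrete sketch of the direction this model predicts (with purely illustrative values for the parameters c and b, not the fitted values from the paper):

```python
import numpy as np

# The IE rate model r = c log(NGDP/MB) + b can be inverted for the base:
#   MB = NGDP * exp(-(r - b) / c)
# The parameter values below are illustrative, not the paper's fit.
def equilibrium_base(ngdp, r, c=0.3, b=0.05):
    """Monetary base consistent with short rate r in the IE model."""
    return ngdp * np.exp(-(r - b) / c)

# Holding r fixed while NGDP grows raises the equilibrium base:
mb_now = equilibrium_base(18.0, 0.004)    # NGDP ~ $18tn, r = 0.4%
mb_later = equilibrium_base(18.5, 0.004)  # NGDP grows, r unchanged
```

With r held fixed, the equilibrium base scales with NGDP, which is the "passive unwinding" mechanism: the equation can come to hold without the observed base falling quickly.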

But is that really validating the model? How will we know? This post is an attempt to understand that question. First, let's look at some straightforward extrapolations of the trends in the monetary base -- a kind of baseline model:

The gray line and gray dashed line are the currency component and its log-linear extrapolation. The black line is the base, the black dashed lines are the linear and log-linear extrapolations of the recent trend since 2016 started (they're not very different, so I won't discuss them further). The dotted line is the log-linear extrapolation of the base from 1965 to 1999 (when the base was fairly log-linear).

Ok, those are the baselines -- what does the IE model say? Let's assume the model is perfect, that NGDP is on a log-linear path from 2013, and ask: what MB should we expect given an effective Fed funds rate r? Here's that model (blue, with two standard deviation error bands in yellow):

Now let's zoom in on the interesting part:

The IE model predicts movement towards the straight piece of the blue curve. My speculative extrapolation is shown in red, and the most recent data (actually weekly source base data) is in orange. The new data is outside the error band and appears to be following the linear path (dashed black line). I would say that if the orange data continues to follow the dashed black path until it intersects the 1965-1999 base extrapolation (dotted black), the interest rate model could be considered useless (in this sense).

There are two other possible factors that could ameliorate this data problem for the IE model. The first is that NGDP could suddenly rise. This is almost the neo-Fisher view: a rise in interest rates leads to a rise in nominal output, a combination of real output and inflation (instead of just inflation). The model rejection above assumes a log-linear path for future NGDP.

The second is that our current situation represents a risk of recession. In the past, short term interest rates above the IE model path have been a precursor to recessions (the model above is inverted, so that recession risk is associated with the monetary base being below the equilibrium specified by the interest rate r, but it is mathematically the same thing). This indicator is actually closely related to an inverted yield curve -- a standard indicator of slower growth or recession.

As I said before, the interest rate hike of December 2015 was a good test of the information equilibrium model. We should see a falling monetary base (faster than we've seen so far), a very large uptick in growth/inflation (the "neo-Fisher" outcome -- I seriously doubt this one), or a recession. In fact, Japan has been dealing with a low interest rate/low inflation environment much longer than we have, and it seems to have experienced an uptick in the number of mild recessions. I guess we'll see what happens.


PS The different extrapolations make me think of the GUT scale.

Wednesday, August 24, 2016

Efficient markets and the Challenger disaster

Ever the market boosters, Marginal Revolution has a new video out in its personal finance series that talks about the efficient markets hypothesis (leaving aside that it might be questionable to base financial decisions on a hypothesis). I haven't watched the video, but from the still it appears to reference the Challenger disaster. It's an interesting story propagated by believers in the wisdom of crowds. On the surface, the market appears to have discovered that the problem was with the solid rocket boosters, since Morton Thiokol's stock dropped more than that of the other NASA contractors involved with the shuttle program. Here's a paper [pdf] that investigates it. Of course, the drop could be attributable to larger exposure to NASA (Thiokol had about twice as much revenue from its shuttle program, per the pdf) as well as to Thiokol being a smaller, less diversified company than Lockheed, Marietta, or Rockwell at the time (see John Quiggin here). Here is a graph of stock prices from here:

But what does the information equilibrium model have to say?

The key piece of information comes from the study referenced above. Average daily returns for the previous three months were given in Table 1: Lockheed (0.07%), Marietta (0.14%), Rockwell (0.06%), and Thiokol (0.21%). If we assume all of these companies are in information equilibrium with the same underlying process X, these differential growth rates imply different information transfer (IT) indices. For example, the IT index k -- well, actually k - 1, since log p ~ (k - 1) log X -- is about three times higher for Thiokol than for Lockheed. This means that even given the same source of information, Thiokol will respond quite a bit more than Lockheed to the same shock. Some simulations bear this out; here's a typical example based on the growth and volatility in the paper cited above:

Note that the underlying process X is the same (a Wiener process with constant drift and volatility) in each case; only the realized values differ. Here's a Monte Carlo with 100 throws per company:
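For reference, a minimal sketch of this kind of simulation (the drift, volatility, shock size, and seed are my own illustrative choices, not the values behind the post's figures):

```python
import numpy as np

rng = np.random.default_rng(42)

# Average daily returns from Table 1; take k - 1 proportional to each
# firm's return, so Thiokol's index is ~3x Lockheed's.
returns = {"Lockheed": 0.0007, "Marietta": 0.0014,
           "Rockwell": 0.0006, "Thiokol": 0.0021}
k_minus_1 = {name: r / returns["Lockheed"] for name, r in returns.items()}

# One realization of the common process X: a Wiener process with drift
# (drift and volatility here are illustrative).
steps = 250
d_log_X = 0.0007 + 0.01 * rng.standard_normal(steps)
d_log_X[200] -= 0.05          # a common shock to X (the "disaster")
log_X = np.cumsum(d_log_X)

# log p_i ~ (k_i - 1) log X: same information source, different responses.
prices = {name: np.exp(km1 * log_X) for name, km1 in k_minus_1.items()}

# The log return on the shock day scales exactly with k - 1:
shock_return = {name: np.log(p[200] / p[199]) for name, p in prices.items()}
```

Because every firm sees the same shock to X, Thiokol's larger drop is just its larger IT index at work -- no extra information about the cause is needed.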

In the information equilibrium model, the prices seem perfectly consistent with all four contractors being hit with the same information shock -- and therefore there's no evidence that the market figured out the cause within minutes of the disaster.


PS My grade school mascot was the Challenger shuttle (I grew up in the suburbs of Houston).

PPS I got to take a tour of the orbiter processing facility while Discovery, Atlantis, and Endeavour were being prepared for the museums. Here's Discovery in the OPF with its aerodynamic engine cover, before being flown to Washington, DC:

Tuesday, August 23, 2016

A vector of information equilibrium relationships

This is a mathematical interlude that looks at some geometric interpretations of an ensemble of information equilibrium relationships. It represents some notes for some future work.

Let's start with a vector of information equilibrium relationships $p_{i} : y_{i} \rightleftarrows m$ between output in a given sector $y_{i}$ and the money supply $m$ (with abstract prices $p_{i}$), so that

\frac{dy_{i}}{dm} = A_{ij}(m) y_{j}

The solution to this differential equation is

y_{i}(m) = \left[ \exp \int_{m_{ref}}^{m} dm' A_{ij}(m') \right] y_{j}(m_{ref})

if $A(m) = K/m$ with a constant matrix $K$ (more generally, this plain exponential solution requires $A(m_{1})A(m_{2}) = A(m_{2})A(m_{1})$; otherwise, see the Magnus expansion), so that

y_{i}(m) = \left[ \exp \left( K_{ij} \log \frac{m}{m_{ref}} \right) \right] y_{j}(m_{ref})

The volume spanned by these vectors (spanning the economic output space) is

V = \det \exp \left( K \log \frac{m}{m_{ref}} \right) \approx 1 + \log \frac{m}{m_{ref}} \;\text{tr}\; K

So the volume added to the economy by an infinitesimal change in $m$ is

dV \approx \;\text{tr}\; K \; \frac{dm}{m}
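A quick numerical check of the determinant identity (the 4-sector matrix $K$ below is randomly generated purely for illustration):

```python
import numpy as np

def mat_exp(A, terms=40):
    """Matrix exponential via its power series (fine for small ||A||)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

rng = np.random.default_rng(0)

# An illustrative IT-index matrix K for a 4-sector economy.
K = 0.1 * rng.standard_normal((4, 4))
t = np.log(1.05)  # log(m / m_ref) for 5% money growth

# Jacobi's formula: det exp(K t) = exp(t tr K), exactly ...
V = np.linalg.det(mat_exp(K * t))
V_trace = np.exp(t * np.trace(K))

# ... and to first order, V ~ 1 + t tr K.
V_linear = 1 + t * np.trace(K)
```

The exact equality is Jacobi's formula $\det e^{A} = e^{\text{tr}\, A}$; the first-order form is what appears in the volume expression above.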

Using maximum entropy to select one of multiple equilibria

Some time ago, I mentioned the idea [1] that maximum entropy could select a particular Arrow-Debreu equilibrium when there are many available; I thought I'd work through a specific example using a simple 2D Edgeworth box. Let's assume a utility function for agent 1 (borrowed from here [pdf]):

u_{1}(g_{1}, g_{2}) = g_{1} - 0.125 g_{2}^{-8}

with g1 and g2 exchanged for the other agent. The offer curves (blue, yellow for the two agents) in the 2D Edgeworth box look like this (for an initial endowment given in the pdf link above):

These curves intersect in 4 points, three of which are very close to each other (and hard to see). Which equilibrium relative price (the slope of the line through the initial endowment and the intersection point) does the market select? Traditional economics lacks a solution to this problem -- all three are viable equilibria. However, the point in the center of the triplet has higher entropy (consider the joint entropy of the distributions given by the probability of finding an infinitesimal unit of good 1 with agent 1 versus agent 2, and likewise for good 2). You can see this if you zoom in on those points; I show an information entropy level curve (whose unconstrained maximum is at the center of the Edgeworth box) as a dotted gray line.

The point in the middle is the maximum entropy point, subject to all the constraints in the utility maximization problem.
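Here's a minimal sketch of that entropy comparison (the three candidate allocations below are hypothetical stand-ins for the triplet of intersections, in a unit Edgeworth box -- not the actual values from the pdf's endowment):

```python
import numpy as np

def allocation_entropy(g1, g2, total1=1.0, total2=1.0):
    """Joint entropy of the two distributions: the probability of finding
    an infinitesimal unit of each good with agent 1 versus agent 2."""
    H = 0.0
    for g, total in ((g1, total1), (g2, total2)):
        s = g / total  # share of this good held by agent 1
        H -= s * np.log(s) + (1 - s) * np.log(1 - s)
    return H

# Hypothetical triplet of nearby equilibria (agent 1's allocations):
candidates = [(0.42, 0.61), (0.45, 0.55), (0.48, 0.49)]
best = max(candidates, key=lambda p: allocation_entropy(*p))
# The candidate nearest the center of the box has the highest entropy.
```

The entropy level curves peak at the center of the box, so among otherwise viable equilibria the one closest to the center is selected.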



[1] I also mentioned it here. At that link, I also mentioned a potential solution to the so-called aggregation problem where one looks at traces (differential volume elements) -- those volume elements would be related to the state space volumes I use to look at maximum entropy. This footnote is intended mostly as a note to myself.

Monday, August 22, 2016

IE vs NY Fed DSGE model update

I haven't updated the head-to-head with the NY Fed DSGE model in a while -- by that I mean the 2014 vintage of that model. The model and its forecast have been changed several times since 2014, including in May of this year, a month after my last update. The model now only projects a year ahead (as opposed to the nearly 4 years of the original vintage 2014 model).

And the saddest part? The original 2014 vintage of the model is doing an amazing job! The core PCE inflation data has been heavily revised [1], and what previously looked like a blowout for the IE model has turned into a slight advantage for the vintage 2014 DSGE model.

Still, we'll probably have to wait until the beginning of 2017 to know which model is better. This is consistent with the expected performance of the IE model: observation times shorter than a few years are dominated by irreducible measurement error.


Update 23 August 2016

Is the NY Fed DSGE model following a ringing oscillation from the financial crisis?



[1] These revisions have been almost enough to make me reconsider rejecting this lag model.

A trend towards lower inflation in Australia (IE prediction)

This recent post by John Quiggin reminded me of my prediction of a trend towards undershooting inflation in Australia (here and here). A commenter going by Anti on this post said that unique predictions that fit the empirical evidence are "the real question"; I'd say this is a unique prediction of the information equilibrium model that fits the empirical evidence:

Saturday, August 20, 2016

Did the ACA decrease unemployment?

Scott Sumner repeated his unsupported claim that the expiration of unemployment insurance in 2014 decreased unemployment. It was picked up by John Cochrane and Tyler Cowen. I looked at the data a year ago and showed that a sizable chunk of the decline could be explained by increasing job openings (assuming a matching model) in the health care sector, brought on by the ACA going into effect in 2014:

Additional jobs would be created via a Keynesian multiplier. I tweeted about this and was asked how much higher unemployment would have been without the ACA and estimated 0.5 percentage points higher.

That estimate was loosely based on this model of employment equilibrium; however, I thought I'd look at it a bit more rigorously. I re-ran the model, fitting only to the data before 2014, and found that the impact was even larger, at 1.3 percentage points:

It is true that a lot of things went into effect at the same time, but using a typical Keynesian multiplier of 1.5 accounts for about half of the boom in the total number of jobs, and the biggest increase in openings was in health care. That's a pretty consistent story.

Japan (lack of) inflation update

I haven't updated the price level model for Japan in a while (the last update is here, and here is the link to all the ongoing forecasts for other countries and indicators), so here you go:

Friday, August 19, 2016

DSGE, part 5 (summary)

I've just finished deriving a version of the three-equation New Keynesian DSGE model from a series of information equilibrium relationships and a maximum entropy condition. We have

\Pi & \rightleftarrows N \;\text{ with IT index } \alpha\\
X & \rightleftarrows C \;\text{ with IT index }1/\sigma\\
R & \rightleftarrows \Pi_{t+1} \;\text{ with IT index }\lambda_{\pi}\\
R & \rightleftarrows X_{t+1} \;\text{ with IT index }\lambda_{x}

along with a maximum entropy condition on the intertemporal consumption $\{ C_{t} \}$ subject to a budget constraint:

C_{t+1} = R_{t} C_{t}
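Iterating this constraint chains every period back to the initial consumption, so the maximum entropy distribution is effectively over paths $\{ C_{t} \}$ satisfying

C_{T} = C_{0} \prod_{t=0}^{T-1} R_{t}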

We can represent these graphically

These stand for information equilibrium relationships between the price level $\Pi$ and nominal output $N$, the real output gap $X$ and consumption $C$, the nominal interest rate $R$ and future inflation $\Pi_{t+1}$, and the nominal interest rate and the future output gap $X_{t+1}$. These yield

r_{t} & = \lambda_{\pi} \; E_{I} \pi_{t+1} + \lambda_{x} \; E_{I} x_{t+1} + c\\
x_{t} & = -\frac{1}{\sigma} \left( r_{t} - E_{I} \pi_{t+1}\right) + E_{I} x_{t+1} + \nu_{t}\\
\pi_{t} & = E_{I} \pi_{t+1} + \frac{\alpha}{1-\alpha}x_{t} + \mu_{t}

with information equilibrium rational (i.e. model-consistent) expectations $E_{I}$ and "stochastic innovation" terms $\nu$ and $\mu$ (the latter has a bias towards closing the output gap -- i.e. the IE version has a different distribution for its random variables). With the exception of the lack of a coefficient on the first term on the RHS of the last equation, this is essentially the three-equation New Keynesian DSGE model: Taylor rule, IS curve, and Phillips curve (respectively).

One thing I'd like to emphasize is that although this model exists as a set of information equilibrium relationships, it is not the best set of relationships. For example, the typical model I use here (here are some others), which relates some of the same variables, is

\Pi : N & \rightleftarrows M0 \;\text{ with IT index } k\\
r_{M} & \rightleftarrows p_{M} \;\text{ with IT index } c_{1}\\
p_{M} : N & \rightleftarrows M \;\text{ with IT index } c_{2}\\
\Pi : N & \rightleftarrows L \;\text{ with IT index } c_{3}

where M0 is the monetary base without reserves, $M =$ M0 or MB (the monetary base with reserves), $r_{M0}$ is the long term interest rate (e.g. the 10-year Treasury rate), and $r_{MB}$ is the short term interest rate (e.g. the 3-month Treasury rate). Additionally, the stochastic innovation term in the first relationship is directly related to changes in the employment level $L$. In part 1 of this series, I related this model to the Taylor rule; the last IE relationship is effectively Okun's law (in terms of hours worked here, or added to capital in the Solow model here -- making this model a kind of weird hybrid of an RBC model derived from Solow and a monetary/quantity theory of money model).

Here is the completed series for reference:
DSGE, part 1 [Taylor rule] 
DSGE, part 2 [IS curve] 
DSGE, part 3 (stochastic interlude) [relates $E_{I}$ and stochastic terms] 
DSGE, part 4 [Phillips curve]
DSGE, part 5 [the current post]

DSGE, part 4

In the fourth installment, I am going to build one version of the final piece of the New Keynesian DSGE model in terms of information equilibrium: the NK Phillips curve. In the first three installments I built (1) a Taylor rule, (2) the NK IS curve, and (3) a relationship between expected values, information equilibrium values, and the stochastic piece of DSGE models. I'm not 100% happy with the result -- the stochastic piece has a deterministic component -- but then the NK DSGE model isn't very empirically accurate either.

Let's start with the information equilibrium relationship between nominal output and the price level $\Pi \rightleftarrows N$ so that we can say (with information transfer index $\alpha$, and using the definition of the information equilibrium expectation operators from here)

E_{I} \pi_{t+1}- E_{I} \pi_{t} = \alpha \left( E_{I} n_{t+1}- E_{I} n_{t} \right)

Using the following substitutions (defining the information equilibrium value in terms of an observed value and a stochastic component, defining the output gap $x$, and defining real output)

E_{I} a_{t} & \equiv a_{t} - \nu_{t}^{a}\\
x_{t} & \equiv y_{t} - E_{I} y_{t}\\
n_{t} & \equiv y_{t} + \pi_{t}

and a little bit of algebra, we find

\pi_{t} & = E_{I} \pi_{t+1} + \frac{\alpha}{1-\alpha} x_{t} + \mu_{t}\\
\mu_{t} & \equiv \nu_{t}^{\pi} - \frac{\alpha}{1-\alpha} \nu_{t}^{y} -\frac{\alpha}{1-\alpha} (E_{I} y_{t+1} - E_{I} y_{t})
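Spelling out that algebra (note the steps use the gap sign convention $x_{t} = y_{t} - E_{I} y_{t} = \nu_{t}^{y}$): substituting $n_{t} = y_{t} + \pi_{t}$ into the first equation and collecting the inflation terms gives

(1 - \alpha) \left( E_{I} \pi_{t+1} - E_{I} \pi_{t} \right) = \alpha \left( E_{I} y_{t+1} - E_{I} y_{t} \right)

so that, using $E_{I} \pi_{t} = \pi_{t} - \nu_{t}^{\pi}$,

\pi_{t} = E_{I} \pi_{t+1} - \frac{\alpha}{1-\alpha} \left( E_{I} y_{t+1} - E_{I} y_{t} \right) + \nu_{t}^{\pi}

Adding and subtracting $\frac{\alpha}{1-\alpha} x_{t}$ and using $x_{t} = \nu_{t}^{y}$ then collects the remaining terms into $\mu_{t}$.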

The first equation is essentially the NK Phillips curve; the second is the "stochastic" piece. One difference from the standard result is that there is no discount factor applied to future information equilibrium inflation (the first term of the first equation). A second difference is that the stochastic piece actually contains information equilibrium real growth (the last term); in a sense, it is a random walk biased towards closing the output gap.

Anyway, this is just one way to construct a NK Phillips curve. I'm not 100% satisfied with this derivation because of those two differences; maybe a better one will come along in a later update.