Wednesday, July 19, 2017

What mathematical theory is for

Blackboard photographed by Spanish artist Alejandro Guijarro at the University of California, Berkeley.

In the aftermath of the Great Recession, there has been much discussion about the use of math in economics. Complaints range from "too much math" to "not rigorous enough math" (Paul Romer) to "using math to obscure" (Paul Pfleiderer). There are even complaints that economics has "physics envy". Ricardo Reis [pdf] and John Cochrane have defended the use of math saying it enforces logic and that complaints come from people who don't understand the math in economics.

As a physicist, I've had no trouble understanding the math in economics. I'm also not averse to using math, but I am averse to using it improperly. In my opinion, there seems to be a misunderstanding among both detractors and proponents of what mathematical theory is for. This is most evident in macroeconomics and growth theory, but some of the issues apply to microeconomics as well.

The primary purpose of mathematical theory is to provide equations that illustrate relationships between sets of numerical data. That's what Galileo was doing when he rolled balls down inclined planes (comparing the distance rolled with time measured by the flow of water from a water clock), discovering that distance was proportional to the square of the water volume (i.e. the elapsed time).
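As a minimal illustration of what "filling in the values" looks like in practice, here is a sketch that fits made-up inclined-plane measurements (stand-ins for Galileo's data, not the real thing) and reads off the power law:

```python
import numpy as np

# Hypothetical inclined-plane data: "time" in units of water volume collected,
# distance rolled in arbitrary units. These are stand-ins, not Galileo's numbers.
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
d = np.array([1.1, 3.9, 9.2, 15.8, 25.3])

# Fit log d = n log t + const; an exponent n near 2 supports d ∝ t².
n, log_k = np.polyfit(np.log(t), np.log(d), 1)
print(f"fitted exponent n ≈ {n:.2f}")
```

The point isn't the specific numbers; it's that the equation earns its keep by being checked against measurements.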

Not all fields deal with numerical data, so math isn't always required. Not a single equation appears in Darwin's Origin of Species, for example. And while there exist many cases where economics studies unquantifiable behavior of humans, a large portion of the field is dedicated to understanding numerical quantities like prices, interest rates, and GDP growth.

Once you validate the math with empirical data and observations, you've established "trust" in your equations. Like a scientist's academic credibility letting her make claims about the structure of nature or simplify science to teach it, this trust lets the math itself become a source for new research and pedagogy.

Only after trust is established can you derive new mathematical relationships (using logic, writing proofs of theorems) using those trusted equations as a starting point. This is the forgotten basis of Reis's claim that math enforces logic. Math does help enforce logic, but it's only meaningful if you start from empirically valid relationships.

This should not be construed to require models to start with "realistic assumptions". As Milton Friedman wrote [1], unrealistic assumptions are fine as long as the math leads to models that get the data right. In fact, models with unrealistic assumptions that explain data would make a good scientist question her thinking about what is "realistic". Are we adding assumptions we feel in our gut are "realistic" that don't improve our description of data simply because we are biased towards them?

Additionally, toy models, "quantitative parables", and models that simplify in order to demonstrate principles or teach theory should either come after empirically successful models have established "trust", or they should themselves be subjected to tests against empirical data. Keynes was wrong when, in a letter to Roy Harrod, he said that one shouldn't fill in the values in the equations. Pfleiderer's chameleon models are a symptom of ignoring this principle of mathematical theory. Falling back on the claim that a model is a simplified version of reality when it fails against the data should immediately prompt the question of why we're considering the model at all. Yet Pfleiderer tells us some people consider this argument a valid defense of their models (and therefore their policy recommendations).

I am not saying that all models have to perform perfectly right out of the gate when you fill in the values. Some will only qualitatively describe the data with large errors. Some might only get the direction of effects right. The reason to compare to data is not just to answer the question "How small are the residuals?", but more generally "What does this math have to do with the real world?" Science at its heart is a process for connecting ideas to reality, and math is a tool that helps us do that when that reality is quantified. If math isn't doing that job, we should question what purpose it is serving.  Is it trying to make something look more valid than it is? Is it obscuring political assumptions? Is it just signaling abilities or membership in the "mainstream"? In many cases, it's just tradition. You derive a DSGE model in the theory section of a paper because everyone does.

Beyond just comparing to the data, mathematical models should also be appropriate for the data.

A model's level of complexity and rigor (and use of symbols) should be comparable to the empirical accuracy of the theory and the quantity of data available. The rigor of a DSGE model is comical compared to how poorly the models forecast. Their complexity is equally comical when they are outperformed by simple autoregressive processes. DSGE models frequently have 40 or more parameters. Given only 70 or so years of higher quality quarterly post-war data (and many macroeconomists only deal with data after 1984 due to a change in methodology), 40 parameter models should either perform very well empirically or be considered excessively complex. The poor performance ‒ and excessive complexity given that performance ‒ of DSGE models should make us question the assumptions that went into their derivation. The poor performance should also tell us that we shouldn't use them for policy.
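As a back-of-the-envelope check on those numbers (a sketch of the counting argument, not a formal identification analysis):

```python
# Rough count of quarterly observations versus DSGE parameters.
quarters_postwar = (2017 - 1947) * 4      # ~280 quarterly observations
quarters_post_1984 = (2017 - 1984) * 4    # ~132 if you start at the methodology change
dsge_parameters = 40                      # the order of magnitude cited above

print(quarters_postwar / dsge_parameters)    # ~7 observations per parameter
print(quarters_post_1984 / dsge_parameters)  # ~3 observations per parameter
```

A handful of observations per parameter is not necessarily fatal, but it raises the bar for the empirical performance you should demand before trusting the model.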

A big step in using math to understand the world is when you've collected several different empirically successful models into a single paradigm or framework. That's what Newton did in the seventeenth century. He collected Kepler's, Galileo's, and others' empirical successes into a framework we call Newtonian mechanics.

When you have a mathematical framework built upon empirical successes, deriving theorems starts to become a sensible thing to do (e.g. Noether's theorem in physics). Sure, it's fine as a matter of pure mathematics to derive theorems, but only after you have an empirically successful framework do those theorems have implications for the real world. You can also begin to understand the scope of the theory by noting where your successful framework breaks down (e.g. near the speed of light for Newtonian mechanics).

A good case study for where this has gone wrong in economics is the famous Arrow-Debreu general equilibrium theorem. The "framework" it was derived from is rational utility maximization. This isn't a real framework because it is not based on empirical success but rather philosophy. The consequence of inappropriately deriving theorems in frameworks without empirical (what economists call external) validity is that we have no clue what the scope of general equilibrium is. Rational utility maximization may only be valid near a macroeconomic equilibrium (i.e. away from financial crises or recessions) rendering Arrow-Debreu general equilibrium moot. What good is a theorem telling you about the existence of an equilibrium price vector when it's only valid if you're in equilibrium? That is to say the microeconomic rational utility maximization framework may require "macrofoundations" ‒ empirically successful macroeconomic models that tell us what a macroeconomic equilibrium is.

From my experience making these points on my blog, I know many readers will say that I am trying to tell economists to be more like physics, or that social sciences don't have to play by the same rules as the hard sciences. This is not what I'm saying at all. I'm saying economics has unnecessarily wrapped itself in a straitjacket of its own making. Without an empirically validated framework like the one physics has, economics is actually far more free to explore a variety of mathematical paradigms and empirical regularities. Physics is severely restricted by the successes of Newton, Einstein, and Heisenberg. Coming up with new mathematical models consistent with those successes is hard (or would be if physicists hadn't developed tools that make the job easier like Lagrange multipliers and quantum field theory). Would-be economists are literally free to come up with anything that appears useful [2]. Their only constraint on the math they use is showing that their equations are indeed useful ‒ by filling in the values and comparing to data.

Footnotes:

[1] Friedman also wrote: "Truly important and significant hypotheses will be found to have 'assumptions' that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense) (p. 14)." This part is garbage. Who knows if the correct description of a system will involve realistic or unrealistic assumptions? Do you? Really? Sure, it can be your personal heuristic, much like many physicists look at the "beauty" of theories as a heuristic, but it ends up being just another constraint you've imposed on yourself like a straitjacket.

[2] To answer Chris House's question, I think this freedom is a key factor for many physicists wanting to try their hand at economics. Physicists also generally play by the rules laid out here, so many don't see the point of learning frameworks or models that haven't shown empirical success.

Python!


I have put together the 0.1-beta version of IEtools (Information Equilibrium tools) for Python (along with a demo Jupyter notebook looking at the unemployment rate and NGDP/L). Everything is available in my GitHub repositories. The direct link to the Python repository is:

https://github.com/infotranecon/IEtools

While I still love Mathematica (and will likely continue to use it for most of my work here), Python is free for everybody.

Tuesday, July 18, 2017

UK Unemployment 1860-1915 (dynamic equilibrium model)

In addition to challenging the dynamic equilibrium model with a longer time series of US data [1], John Handley also challenged it with a long time series of UK data (available here from FRED). I focused on the pre-WWI data because I already looked at the post-war data here, and the interwar period is strongly affected by the "conscription" shocks seen in [1]. Anyway, the results are decent:


The centers of the shocks are at 1861.5, 1866.9, 1876.3, 1884.6, 1892.5, 1902.1, and 1908.3. I think I might split the 1902.1 shock into two shocks.
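For readers who haven't seen the model before, here is a minimal sketch of the functional form being fit: a constant dynamic equilibrium slope in log u plus a sum of logistic shock transitions centered at the dates above. The slope, amplitudes, and widths below are placeholders, not the fitted UK values.

```python
import numpy as np

def dynamic_equilibrium(t, alpha, c, centers, amplitudes, widths):
    """Unemployment rate u(t) with log u = c + alpha*t + sum of logistic shocks."""
    log_u = c + alpha * t
    for t0, a, w in zip(centers, amplitudes, widths):
        log_u += a / (1.0 + np.exp(-(t - t0) / w))
    return np.exp(log_u)

# Shock centers from the fit above; amplitudes and widths are placeholders.
centers = [1861.5, 1866.9, 1876.3, 1884.6, 1892.5, 1902.1, 1908.3]
amplitudes = [0.5] * len(centers)   # hypothetical
widths = [0.5] * len(centers)       # hypothetical

t = np.linspace(1860, 1915, 500)
u = dynamic_equilibrium(t, alpha=-0.05, c=np.log(0.05),
                        centers=centers, amplitudes=amplitudes, widths=widths)
```

The actual fits minimize the error between this form and the data, which is where the shock centers quoted above come from.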

These recessions roughly correspond to the Panic of 1857, the post-Civil War UK recession, and the "Long Depression" (the 1876, 1884, and 1892 shocks) beginning with the Panic of 1873. The 1902 and 1908 shocks do not correspond to recessions listed at Wikipedia (which of course is definitive and exhaustive).

Dynamic equilibrium model: CPI (all items)

With the new CPI data out last week, I updated the dynamic equilibrium model for all items that I looked at in this post:



The former graph uses 90% confidence intervals, while the latter uses MAD as the measure of model uncertainty, since the derivative of CPI (all items) is very volatile.
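For concreteness, here's a sketch of the two uncertainty measures applied to a set of model residuals. The residuals are synthetic (heavy-tailed on purpose), and I'm taking MAD to mean the median absolute deviation here; if you prefer the mean absolute deviation, the idea is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
residuals = rng.standard_t(df=3, size=500) * 0.2   # synthetic, heavy-tailed residuals

# 90% band assuming roughly Gaussian errors (sensitive to the volatile outliers)
band_90 = 1.645 * residuals.std()

# MAD-based band (robust to outliers, hence better suited to the noisy CPI derivative)
mad = np.median(np.abs(residuals - np.median(residuals)))

print(f"90% band: ±{band_90:.3f}, MAD: ±{mad:.3f}")
```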

Sunday, July 16, 2017

Presentation: macroeconomics and ensembles

I put together a new presentation that looks at macroeconomics, partially inspired by this blog post. It can be downloaded in pdf format from this link. The rest of the slides are below the fold.




Friday, July 14, 2017

Unemployment 1929-1968 (dynamic equilibrium model)

John Handley challenged me to take on the 1930s, 40s, and 50s with the dynamic equilibrium model. While the Great Depression fit is fairly uncertain (because the model operates on the log of the unemployment rate, so returning to the linear scale makes the error bands exponentially larger at higher unemployment rates), the overall model works well:


In the interest of making the optimization run in a reasonable time, I split the data for the 30s and the 50s, keeping the 40s in common between them. Here's the fit for the 30s and 40s (illustrating that exponential increase in the width of the error bands):


Per the original thread, it is hard to see any effects of the New Deal (possibly it arrested the increase of the unemployment rate, similar to the possible effect of the various actions in 2008-9). Overall, the best thing to say is we don't know.
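As an aside on why those error bands blow up when you return to the linear scale: a roughly constant error δ on log u corresponds to a multiplicative band u·exp(±δ), which is wider in absolute terms the higher the unemployment rate. A quick check with a made-up δ:

```python
import numpy as np

delta = 0.1  # hypothetical, roughly constant error on log u

for u in [0.05, 0.10, 0.25]:   # 5%, 10%, and Depression-era 25% unemployment
    lo, hi = u * np.exp(-delta), u * np.exp(delta)
    print(f"u = {u:.0%}: band {lo:.1%} to {hi:.1%} (width {hi - lo:.1%})")
```

The same ±0.1 on the log scale is about a one percentage point band at 5% unemployment but roughly five percentage points at 25%.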

The remaining time period from the 40s through the 60s (to overlap with the original model) has smaller confidence intervals:


In both of these graphs, however, we do see a potential effect of the draft during WWII and Korea: there are two fairly large positive shocks centered at 1942.6 and 1950.5.

Another feature of the data from the 50s and 60s (also apparent in the 70s and 80s, but gradually disappearing over time) is a reproducible "overshooting" effect (which I highlighted in green, scribbling on the graph):


So maybe Steve Keen's models could be useful ... for second order effects in the dynamic equilibrium framework. Whatever causes it, the effect seems to fade out by the 1990s (which coincidentally is around the time the non-equilibrium effect of women entering the workforce fades out).

Thursday, July 13, 2017

Keynes versus Samuelson


I read through Roger Farmer's book excerpt up at Evonomics (and have subsequently bought the book). I tweeted a bit about it, but I think one point I was trying to make is better made with a blog post. Farmer's juxtaposition of Samuelson's neoclassical synthesis and Hicks's IS-LM model made clear in my mind the way to understand "what went wrong":
The program that Hicks initiated was to understand the connection between Keynesian economics and general equilibrium theory. But, it was not a complete theory of the macroeconomy because the IS-LM model does not explain how the price level is set. The IS-LM model determines the unemployment rate, the interest rate, and the real value of GDP, but it has nothing to say about the general level of prices or the rate of inflation of prices from one week to the next. 
To complete the reconciliation of Keynesian economics with general equilibrium theory, Paul Samuelson introduced the neoclassical synthesis in 1955. According to this theory, if unemployment is too high, the money wage will fall as workers compete with each other for existing jobs. Falling wages will be passed through to falling prices as firms compete with each other to sell the goods they produce. In this view of the world, high unemployment is a temporary phenomenon caused by the slow adjustment of money wages and money prices. In Samuelson’s vision, the economy is Keynesian in the short run, when some wages and prices are sticky. It is classical in the long run when all wages and prices have had time to adjust.
There are two ways to think about what it means for the IS-LM model to fail to explain the price level:

  1. It is a model of the short run -- so short that prices do not change appreciably from inflation during the observation time. Symbolically, t << 1/π such that P ~ exp(π t) ≈ 1 + π t ≈ 1. This is Samuelson's version: prices are "sticky" in the short run, but adjust in the long run.
  2. It is a model of a low inflation economy. Inflation is so low that the price level can be considered approximately constant (and therefore real and nominal just differ by a scale factor). Symbolically, d log P/dt = π ≈ 0 << γ where γ is some other growth scale such as interest rates, NGDP, or population (I'd go with the first [1]).

These two scenarios overlap in some short run (the change in log P is ≈ π t, which is small when either π or t is small); the difference is that inflation can stay low much longer than the price level can stay approximately constant. This distinguishes Keynes' view of e.g. persistent slumps versus Samuelson's view of eventual adjustment. I've made the case that the IS-LM model should be understood as an effective theory derived from the second limit, not the first.

As an aside, it's not that Samuelson's limit doesn't make empirical sense in today's economy. Inflation is on the order of π ~ 2%, which implies a time horizon (1/π) of 50 years (and therefore IS-LM should apply for 5 or so years to an accuracy of about 10%). That's a pretty long short run.
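Filling in the values for that aside (just checking the arithmetic of P ≈ exp(πt)):

```python
import numpy as np

pi = 0.02            # ~2% annual inflation
horizon = 1.0 / pi   # the time scale at which "P ≈ constant" breaks down

t = 5.0                            # a five-year "short run"
deviation = np.exp(pi * t) - 1.0   # drift of the price level away from 1

print(horizon)             # 50.0 years
print(f"{deviation:.1%}")  # ~10.5%, the "accuracy of about 10%" quoted above
```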

Another point I'd like to make is that the second limit is more definitively a "macro" limit in the sense that it is about macro observables (low inflation) rather than micro observables (sticky prices). In fact, we can consider the second limit as "macro stickiness": individual prices fluctuate (i.e. aren't sticky), but the aggregate distribution is relatively constant (statistical equilibrium). We can further connect the second limit (and the potential for a persistent slump) to properties of the effective theory of the ensemble average of that statistical equilibrium (namely that 〈k〉 - 1 ≈ 0). This is all to say that the second limit is a true emergent "macro" theory that can be understood without much knowledge of the underlying agents (or, put another way, with ignorance of how agents behave).
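A toy simulation of what I mean by "macro stickiness" (purely illustrative, not one of the models referenced above): individual prices jump around by several percent each period, but because the changes are drawn from a roughly fixed, zero-mean distribution, the aggregate (average log price) barely moves.

```python
import numpy as np

rng = np.random.default_rng(42)
n_prices, n_periods = 1000, 40

# Individual log-price changes: large (±5%) but drawn from a fixed zero-mean
# distribution each period -- individual prices are not sticky.
changes = rng.normal(loc=0.0, scale=0.05, size=(n_periods, n_prices))
log_prices = np.cumsum(changes, axis=0)

typical_individual_move = np.abs(changes).mean()          # ~4% per period
aggregate_inflation = np.diff(log_prices.mean(axis=1))    # change in the average log price

print(f"typical individual price change: {typical_individual_move:.1%}")
print(f"typical aggregate change per period: {np.abs(aggregate_inflation).mean():.2%}")
```

Individual prices move by about 4% per period while the aggregate moves by well under half a percent: micro fluctuation, macro stickiness.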

...

Footnotes:

[1] This also opens up discussion of the "liquidity trap". If interest rates r are the proper scale to compare to inflation, we have an additional regime we need to understand where both inflation and interest rates are near zero (π, r ≈ 0).

Wednesday, July 12, 2017

JOLTS leading indicators?

In what is becoming a mini-series on the utility of the forecasts made using the information equilibrium/dynamic equilibrium framework (previous installment here), I wanted to assess whether the Job Openings and Labor Turnover Survey (JOLTS) data could be used as leading indicators of a potential recession (I looked at the various measures previously here).

The latest data for job openings seems to be showing the beginnings of a correlated negative deviation that could be the start of a downturn:


The other indicators were more mixed, so I asked myself: does one measure show a recession earlier than the others? A note before we start -- this analysis is based on a sample of one recession (JOLTS only goes back to the early 2000s), so we should take it with a bit more than a grain of salt.

I looked at the estimate of the center of the 2008 recession transition for hires rate (HIR), job openings rate (JOR), quits rate (QUR), and the unemployment rate (UNR):


The errors shown are 2 standard deviations, and the months on the x-axis are the months that the data points are for (the data usually becomes available after another month or so, e.g. May 2017 data was released 11 July 2017). We can see that hires leads the pack -- i.e. the center of the hires transition precedes the other measures.

Note this is not the same thing as figuring out when a transition becomes detectable. I looked at this using unemployment rate data back in April. Two factors enter into the detectability: the width of the transition and the relative measurement noise level. While most of the data has comparable widths:


(the error bars show the parameter that determines the width of the transition), the hires data has more relative noise than the unemployment rate (think signal-to-noise ratio). This could potentially make the hires data less useful as an early detector of a recession despite being the leading observable.
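To make that concrete, here is a rough sketch of how those two detectability factors interact (the numbers are placeholders, not the fitted JOLTS parameters): a logistic shock of amplitude a and width w riding on noise of standard deviation σ only clears an n·σ threshold at some time relative to the transition center, and that time comes later as the relative noise grows.

```python
import numpy as np

def detection_time(t0, width, amplitude, sigma, n_sigma=2.0):
    """Time at which a shock amplitude/(1 + exp(-(t - t0)/width)) first exceeds n_sigma*sigma.

    Returns None if the noise floor is too high for the shock to ever clear it.
    """
    r = n_sigma * sigma / amplitude
    if r >= 1.0:
        return None
    return t0 + width * np.log(r / (1.0 - r))

# Placeholder values: the same shock seen in a low-noise series (like the
# unemployment rate) versus a noisier one (like hires).
print(detection_time(t0=2008.5, width=0.3, amplitude=0.30, sigma=0.02))  # well before the center
print(detection_time(t0=2008.5, width=0.3, amplitude=0.30, sigma=0.10))  # only after the center
```

So even an observable whose transition center leads the others can lose its head start if its noise level is high enough.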

With those caveats out of the way, it is possible the hires data might show the beginnings of a recession several months in advance of the unemployment rate. Like the job openings data, it is also showing a negative deviation:


However, the most recent data lends support to the null hypothesis of no deviation. Regardless, I will continue to monitor this.

Monday, July 10, 2017

Does information equilibrium add information?

I've been asked several times about the utility of the forecasts I make with the information equilibrium (IE) model. I came up with a good way to quantify it for the US 10 year interest rate forecast I made nearly 2 years ago. Here I've compared the IE model (free) to both the Blue Chip Economic Indicators (BCEI, for which the editors charge almost 1500 dollars per year) and a simple ARIMA process:


As you can see, the IE model represents a considerable restriction of the uncertainty relative to the ARIMA model (which is to say it adds bits of information/erases bits of randomness ‒ which explains the bad pun in the title of this post).
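One hedged way to put a number on "adds bits of information" (a back-of-the-envelope sketch using made-up forecast spreads, not the actual ones in the chart): if both forecast distributions are roughly Gaussian, the entropy difference is just log2 of the ratio of their standard deviations.

```python
import numpy as np

# Hypothetical forecast uncertainties (standard deviations, percentage points)
# standing in for the spreads in the chart -- not the actual fitted values.
sigma_arima = 1.2
sigma_ie = 0.4

# Gaussian differential entropy is 0.5*log2(2*pi*e*sigma**2), so the entropy
# difference between the two forecasts is log2(sigma_arima/sigma_ie).
bits_gained = np.log2(sigma_arima / sigma_ie)
print(f"{bits_gained:.2f} bits of uncertainty removed")   # ~1.6 bits for these numbers
```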

Also, I ran the model for the UK (belatedly fulfilling a request from a commenter). I had NGDP and monetary base data up until the start of 2017 so the prediction starts from there. I show both the longer view and the more recent view:



Sunday, July 9, 2017

The labor demand curve

John Cochrane may have written the most concise argument that economic theory should be rejected:
Economic theory also forces logical consistency that would not otherwise be obvious. You can't argue that the labor demand curve is vertical today, for the minimum wage, and horizontal tomorrow, for immigrants. There is one labor demand curve, and it is what it is.
The conclusion should be: Therefore, we must reject the concept of a fixed labor demand curve.

Overall, Cochrane seems to get the concept of science wrong. A few days ago, I noted that Cochrane does not seem to understand what economic theory is for. Now he seems to misunderstand what empirical data is for.

See, the issue is that no one is "arguing" that the labor demand curve is vertical for the minimum wage. No novel theory is being constructed to give us a vertical labor demand curve. The studies are empirical studies. The empirical studies show that if there is such a thing as a labor demand curve, it must be vertical for the minimum wage. But really, all the studies show is that raising the minimum wage does not appear to have a disemployment effect. The "Econ 101" approach pictures this as a vertical labor demand curve. And that would be fine on its own.

Likewise, no one is "arguing" that the labor demand curve is horizontal for an influx of workers. Again, no novel theory is being constructed to give us a horizontal labor demand curve. The empirical studies are only showing that an influx of workers does not lower wages. The "Econ 101" approach pictures this as a horizontal labor demand curve. And that would be fine on its own.

But those two results do not exist in a vacuum. If you try to understand both empirical results with a fixed labor demand curve, then your only choice is to reject the fixed curve. You have two experiments. One shows the labor demand curve is horizontal, the other vertical. Something has to give.

*  *  *

Now there is a way to make sense of these results using information equilibrium. Here are several posts on the minimum wage (here, here, and here). The different effects of immigration versus other supply shocks are described in this post. If you are curious, click the links. But the main point is that when we make "Econ 101" arguments, we are making lots of assumptions and therefore restricting the scope of our theory. 

In order to obtain the Econ 101 result for the minimum wage and immigration, you essentially have to make the same specific assumptions (assume the same scope): 1) demand changes slowly with time, and 2) supply increases rapidly compared to demand changes. The commensurate scope is the reason the Econ 101 diagrams are logically consistent. But they're both inconsistent with the empirical data. Therefore we should question the scope. Under different scope conditions (i.e. both demand and supply change), information equilibrium tells us that increasing the minimum wage or increasing immigration increases output ‒ meaning that you should probably accept that demand is changing in both cases. Which is the point of higher minimum wage and pro-immigration arguments: they create economic growth.
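Here is a minimal numerical sketch of that scope argument, using the information equilibrium condition p = dD/dS = k·D/S from the linked posts (the numbers and the value of k are illustrative). Holding demand fixed (the Econ 101 scope) gives a falling price as supply increases; letting demand adjust along with supply (the general equilibrium solution D ∝ S^k) gives rising output instead.

```python
import numpy as np

k = 1.5                 # hypothetical information transfer index
D0, S0 = 100.0, 100.0   # reference demand and supply (arbitrary units)
S = np.linspace(100.0, 120.0, 5)   # supply increases (e.g. more workers)

# Econ 101 / partial equilibrium scope: demand changes slowly (D ≈ D0),
# so the price p = k*D/S falls as supply increases.
p_partial = k * D0 / S

# General equilibrium scope: demand adjusts too, D = D0*(S/S0)**k,
# so output rises with supply.
D_general = D0 * (S / S0) ** k
p_general = k * D_general / S

print(p_partial)    # falling price/wage as supply increases (Econ 101 scope)
print(p_general)    # price no longer falls once demand adjusts (k > 1 here)
print(D_general)    # rising output as supply increases
```

The diagrams aren't wrong as mathematics; they're answers to a question with a restricted scope, and the empirical results are telling us that scope doesn't apply.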

As an aside, I think a lot of right-leaning economics might stem from assuming demand changes slowly. The cases I just mentioned made me think of a case where Deirdre McCloskey seems to be assuming demand changes slowly to argue against Thomas Piketty.

In the same sense, while you can obtain an isothermal expansion curve [1] from a restricted scope of the ideal gas (that scope being constant temperature, hence isothermal), if your data is inconsistent with the theory you should begin to question the scope (was it really isothermal?). Unfortunately, Econ 101 ‒ and for that matter much of economic theory ‒ does not examine its scope. As Cochrane says: it's about "logic". Logic has no scope. Things are either logical or illogical. That's not how science works. Some descriptions are approximate under particular assumptions (constant temperature, speeds slower than the speed of light) and fail when those assumptions aren't met.

Given empirical data requiring contradictory interpretations of theory (different labor demand curves), a scientific approach would immediately question the scope of the theory being applied. What assumptions did I make to come up with a fixed demand curve? I definitely shouldn't assume studies that contradict the theory are wrong.

...

Footnotes:

[1] Actually, in information equilibrium the Econ 101 demand curve is essentially an isodemand curve (i.e. a curve where demand is held constant/changes slowly) analogous to an isothermal process using the thermodynamic analogies. If I say the minimum wage won't decrease employment because it increases overall demand, the Econ 101 rebuttal is to come back and say "assuming demand doesn't change ...". It'd be kind of funny if it wasn't so perversely in the defense of the powerful.