Believe those who are seeking the truth. Doubt those who find it. -- Andre Gide

Saturday, December 28, 2013

The most obvious source of cyclical asymmetry is not a nominal rigidity

I've long been interested in the apparent cyclical asymmetry in business fluctuations. So it's nice to see Paul Krugman publicize the issue here: On the Asymmetry of Booms and Slumps. His post, in turn, was motivated by this one, by Antonio Fatas: Four Missing Ingredients in Macroeconomic Models. Fatas writes:
1. The business cycle is not symmetric. Most macroeconomic models start with the idea that fluctuations are caused by a succession of events that are both positive and negative (on average they are equal to zero). Not only this is a wrong representation of economic shocks but it also leads to the perception that stabilization policy cannot do much. Interestingly, it was Milton Friedman who put forward the "plucking" model of business cycles as an alternative to the notion that fluctuations are symmetric. In Friedman's model output can only be below potential or maximum. If we were to rely on asymmetric models of the business cycle, our views on potential output and the natural rate of unemployment would be radically different. We would not be rewriting history to claim that in 2007 GDP was above potential in most OECD economies and we would not be arguing that the natural unemployment rate in Souther Europe is very close to its actual.
Let me dissect the passage above.

1. "The business cycle is not symmetric." Agreed.
2. "Most macro models assume a symmetric impulse mechanism." Agreed.
3. "Not only is this a wrong representation of economic shocks..."

Not sure what to make of this claim. I think it's sensible to assume that the shocks are symmetric (unless there is compelling evidence to suggest otherwise). The asymmetry in question is more likely to be the byproduct of human interaction -- the economy's propagation mechanism.

4. "...but it also leads to the perception that stabilization policy cannot do much."

I'm also not sure what to make of this statement. Economists know that we cannot make any inferences about the desirability of policy interventions solely on the basis of the statistical properties of time-series data. And in any case, there are plenty of symmetric models suggesting beneficial policy interventions.

5. "If we were to rely on asymmetric models of the business cycle, our views on potential output and the NRU would be radically different."

I'm afraid that Fatas is putting the cart before the horse here. There is no logical basis for that proposition; see my comment above. (In fact, I provide a counterexample below.)

6. "We would not be rewriting history to claim that in 2007 GDP was above potential..."

I hear people make this claim all the time. Typically, they are the same people who claim that the last recession was caused by a bursting asset-price bubble -- an overheated real estate sector, a booming construction (and related) sector, over-accumulated capital, and over-accumulated debt. But now, apparently, these same people want to interpret the episode leading up to the crash as the economy just humming along at "potential." Strange.

In any case, on to Krugman's pet idea that asymmetry is explained by DNWR (downward nominal wage rigidity). Maybe there's something to this idea, but my own view is that any such effect is not likely to be very important. Why is this?

I've explained why before here, but let me summarize the argument. I claim that economists who rely on sticky wage theories are unwitting slaves of Marshall's scissors -- static supply and demand curves. If unemployment exists, it must be because reality does not correspond with scissor-intersection: markets do not clear.

But Marshall's scissors are meant to describe what happens in an anonymous spot market for goods like wheat or oil. The labor market is a market for relationships. Relationships are durable. Relationships are a form of capital. We have to move away from Marshall's scissors to understand these relationships (search theory is one way to do this). The economic surplus generated by a productive relationship is divided through a bilateral or multilateral bargaining process that specifies (among other things) how wages are to evolve through time over the life of the relationship. The spot wage (the wage that an econometrician might observe in a data set) plays no allocative role in the relationship. Stickiness in the spot wage does not matter.

That's the theory, anyway. But then, there is also some evidence: Evaluating the Economic Significance of Downward Nominal Wage Rigidity (Michael Elsby) and The Effect of Implicit Contracts on the Movement of Wages over the Business Cycle (Beaudry and DiNardo).

Well then, if not a nominal rigidity, what might account for the asymmetry in the unemployment rate?


As it turns out, the sharp rise in unemployment followed by a slow decline follows as a natural property of labor market search models, something that I showed here (the example I alluded to above).

The basic idea is very simple. As I explained above, the labor market is a market for productive relationships. It takes time to build up relationship capital. It takes no time at all to destroy relationship capital. (It takes time to build a nice sandcastle, but an instant for some jerk to kick it down.)

We see the same sort of phenomenon in population dynamics--the so-called "heat wave effect." That is, mortality rates spike up during a spell of bad weather, causing a sudden decline in the population. There is no corresponding spike up in the population during a spell of good weather for obvious reasons (unless you believe in zombies returning suddenly to life).
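The asymmetry is easy to see in a bare-bones flow model of unemployment, u' = u + s*(1-u) - f*u, where s is the separation rate and f the job-finding rate. Here is a minimal sketch (the parameter values and the size of the one-time destruction shock are purely illustrative, not calibrated to anything):

```python
# Minimal sketch of asymmetric unemployment dynamics in a search model.
# f = monthly job-finding rate, s = monthly separation rate (hypothetical values).
f, s = 0.3, 0.02

u = s / (s + f)        # start at the steady-state unemployment rate
path = []
for t in range(48):
    if t == 12:
        # destroying relationship capital takes no time: a one-time shock
        # wipes out 10% of existing matches instantly
        u = u + 0.10 * (1 - u)
    # rebuilding relationship capital takes time: workers flow back into
    # jobs only gradually, at the finding rate f
    u = u + s * (1 - u) - f * u
    path.append(u)
```

The path jumps up in a single period when the shock hits, then decays back to the steady state only geometrically (at rate 1 - s - f per period): a sharp rise followed by a slow decline.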

***

PS. Some related papers in which a shock destroys (reshuffles) match capital that takes time to rebuild: Adaptive Capital, Information Depreciation, and Schumpeterian Growth (Jones and Newman) and Distributional Dynamics Following a Technological Revolution (Andolfatto and Smith).

Tuesday, December 24, 2013

In gold we trust?

I've written before that a desirable property of a monetary instrument is for it to hold its value over short periods of time (See: Why Gold and Bitcoin Make Lousy Money).

In other words, a good monetary instrument should have a stable short-run rate of return. If I earn some money today, I don't want to see its value decline by 50% tomorrow. If I spend a dollar today, I don't want to see its value rise by 50% tomorrow. Even if these fluctuations cancelled out in the long run, it would be terribly inconvenient and annoying. I'd rather live in a world where my money lost value at a slow but steady rate. Of course, I would not want to store my wealth in the form of such an instrument. But that's not how we store wealth anyway. To store wealth, we can always sell the money we do not need for transaction purposes and purchase other securities.

Now let's take a look at some data -- the type of data Ron Paul likes to use. Let p(t) denote the price-level at date t (I will use the consumer price index). Then 1/p(t) measures the purchasing power of money. If p(t) rises over time (inflation), the purchasing power of money falls over time. And so we have this familiar picture:


I've written about this before here: Ron Paul's Money Illusion.

Now, we can perform the same sort of exercise for gold. Let q(t) denote the USD price of gold at date t. Then the purchasing power of gold is measured by q(t)/p(t). So, if the price of gold rises as fast as the price level, the purchasing power of gold remains constant. If the former rises faster than the latter, then the purchasing power of gold is rising; and vice-versa.
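In code, the exercise amounts to a couple of ratios. The numbers below are hypothetical, chosen only to mimic the qualitative pattern in the charts (roughly flat CPI inflation, a large drop in the gold price); they are not actual data:

```python
# Purchasing power of money vs. gold over one year (illustrative numbers).
p0, p1 = 230.0, 233.5    # CPI at start and end of year (hypothetical)
q0, q1 = 1600.0, 1200.0  # USD price of gold at start and end (hypothetical)

# purchasing power of money is 1/p(t); of gold it is q(t)/p(t)
money_pp = (1 / p1) / (1 / p0) - 1      # % change in what a dollar buys
gold_pp  = (q1 / p1) / (q0 / p0) - 1    # % change in what an ounce buys
```

With numbers like these, the dollar loses a percent or two of its purchasing power over the year, while gold loses roughly a quarter of its purchasing power.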

We know that over very long horizons, the rate of return on gold exceeds that of money. But all this says is that gold is a better store of value than cash over long periods of time. (I discuss here whether gold is a good store of value relative to other assets.) How has the purchasing power of gold held up over the last little while?

Here is the purchasing power of gold vs the USD since the beginning of the year:


OK, so this past year was not a good one for gold. If you had earned your wages in gold at the beginning of the year, that gold would now buy you 25% less bread. That's like a tax. And it was not the Fed doing it to you. In fact, if you had instead held on to your USD over the same period of time, you would have experienced a much smaller decline in purchasing power.

What if we look at the past 2 years? Here is the picture:


What we see from the picture above is that the purchasing power of gold held up with that of the USD in 2012, but that its short-run rate of return was more volatile. Its rate of return then fell down a steep hill in 2013.

Let's go back 3 years now:


Gold can't even beat the rate of return on cash over a three-year horizon? That's pretty sad for a store of value.

The main lesson I take away from this is not that people shouldn't invest in gold. By all means, go ahead and invest in all sorts of stuff, including gold. The main lesson is that commodity prices tend to be highly volatile over short periods of time and that this short-run volatility makes them undesirable as payment instruments. There is a better alternative available, and the United States has it in the form of the Federal Reserve.

Happy 100th birthday, Fed!

And a Merry Christmas to all.

PS. My colleague Christian Zimmermann points me to this potentially interesting paper: The Gold Dilemma by Claude Erb and Campbell Harvey.

Saturday, December 14, 2013

Labor Force Participation Gaps (U.S. vs. Canada)

This post is meant as a complement to my earlier posts: [1] Employment Gaps, [2] Employment Slumps in Canada and the U.S., and [3] U.S. Labor Force on Trend?

In what follows, I report the labor force participation rates (LPRs) for Canada and the U.S., for males and females, and across various age groups (1976-2013). Let's take a look first at prime-age males and females.
 

To the extent that one can consider the Canadian LPR a measure of a common trend (the Canadian recession being less severe than in the U.S.), one might be able to support the idea of a 1-2ppt LPR "gap" for the U.S. 


The behavior of prime-age females across the two countries appears quite similar up until the mid-to-late 1990s. The divergence since then has been quite remarkable. (Has anyone heard of any explanation for why this might be the case?)

Here we have teen-aged males and females. In both cases, we see big gaps emerging some time around 2000.
 


Next we have young males and females. 



And finally, older males and females:



Any comments or suggested references that speak to these patterns would be appreciated. 

Thursday, December 12, 2013

U.S. Labor Force Participation Rate on Trend?

The labor force participation rate (LPR) is defined as the share of the civilian noninstitutionalized population that is employed (working) or unemployed (looking for work). In 1970, the U.S. LPR was about 60%. It rose steadily for 30 years, reaching a peak of 67.1% in 2000. It has been declining since that time, dropping sharply in the recent recession, and currently sits at around 63%.
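For concreteness, the definition amounts to a simple ratio. The figures below are illustrative round numbers in the right ballpark for late 2013, not official BLS data:

```python
# LPR = (employed + unemployed) / civilian noninstitutional population.
# All figures in thousands; illustrative round numbers, not BLS data.
employed   = 144_000
unemployed = 11_000
population = 246_000   # civilian noninstitutional population, age 16+

lpr = (employed + unemployed) / population   # roughly 0.63, i.e. ~63%
```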
 
Question: How much of the recent decline in LPR is due to a bad economy (cyclical factors)? And how much of it might be due to long-term trends associated with changing demographics (structural factors)?

The answer to this question is important for policy because a cyclical interpretation suggests the presence of an undesirable "output gap," whereas a structural interpretation does not.

Christopher Erceg and Andrew Levin have a new paper out which suggests that cyclical factors are responsible (Labor Force Participation and Monetary Policy in the Wake of the Great Recession). Much of their estimate of LPR trend, however, seems to be based on a particular BLS projection. On pages 9-10, they state:
In our view, the labor force projections published by the BLS in November 2007 serve as an invaluable resource in assessing the influence of demographic factors on the subsequent decline in the LFPR. In making such projections, BLS staff consider detailed demographic groups using state-of-the-art statistical procedures in conjunction with micro data from the Current Population Survey (CPS) and various other sources, including interim updates from the U.S. Census Bureau.
But as the following figure demonstrates, BLS projections of trend LPR seem to vary quite a bit over time:

The figure above is drawn from
A Closer Look at the Decline in the Labor Force Participation Rate (Maria Canon, Peter Debbaut, and Marianna Kudlyak). The authors state:
It is tempting to interpret the prerecession projections as reflecting the long-term trend in the LFP rate. However, we observed that the BLS's projections did not necessarily capture the long-term trend; rather, to a substantial degree, they were influenced by the most recent data points. Consequently, this cautions against treating the difference between the actual LFP in 2012 and its BLS projection released in 2007 as entirely due to cyclical factors.
[Note: Erceg and Levin do not rely solely on BLS measures of trend LPR. Much of their empirical work is based on state-level differences in labor market variables.]

It is worth noting that this is not the first time policymakers have been interested in the cyclical vs. structural decomposition of the LPR. The same questions were being asked nearly a decade ago following a much milder recession (and jobless recovery).

In 2006, economists Stephanie Aaronson, Bruce Fallick, Andrew Figura, Jonathan Pingle, and William Wascher published this interesting study: The Recent Decline in the Labor Force Participation Rate and Its Implications for Potential Supply.

The authors use a cohort-based model to estimate LPR trend. They state their conclusions as follows:
On balance, the results suggest that most of the decline in the participation rate during and immediately following the 2001 recession was a response to business cycle developments. However, the continued decline in participation in subsequent years and the absence of a significant rebound in 2005 appear to derive from other, more structural factors. Indeed, the participation rate at the end of 2005 was close to our model-based estimate of its longer-run trend level, suggesting that the current state of the labor market is roughly neutral for the participation rate. Finally, projections from the model suggest that many of these structural factors will continue to put downward pressure on the participation rate for some time, so that any future cyclical fluctuations in participation will take place around a declining trend.
The most remarkable picture they produce is, in my view, their Figure 12 (pg. 111):


Their 2006 forecast of the U.S. LPR for 2013 was 63%. Not bad.

Sunday, December 1, 2013

Is QE lowering the rate of inflation?


The answer may be "yes," according to a new paper by Steve Williamson. In examining the effects of a QE experiment in his model economy, he reports the following (p. 16):
Some of the effects here are unconventional. While the decline in nominal bond yields looks like the "monetary easing" associated with an open market purchase, the reduction in real bond yields that comes with this is permanent, and the inflation rate declines permanently. Conventionally-studied channels for monetary easing typically work through temporary declines in real interest rates and increases in the inflation rate. What is going on here? The change in monetary policy that occurs here is a permanent increase in the size of the central bank's holdings of short-maturity government debt - in real terms - which must be balanced by an increase in the real quantity of currency held by the public. To induce people to hold more currency, its return must rise, so the inflation rate must fall. In turn, this produces a negative Fisher effect on nominal bond yields, and real rates fall because of a decline in the quantity of eligible collateral outstanding, i.e. short maturity debt has been transferred from the private sector to the central bank.

Williamson describes these findings on his blog here: Liquidity Premia and the Monetary Policy Trap

Well, it must have been a slow news day on the economics front. The normally mild-mannered Nick Rowe set off a tempest in a teapot when he wondered out loud "what the hell went wrong with the best and brightest in the profession?" Nick's little tirade was then picked up by the charming duo of Brad DeLong and Paul Krugman.

So what happened? It all started with Williamson discussing two "fearsome equations" that emerge as theoretical restrictions in a wide class of macroeconomic models. Some people call these restrictions "Fisher equations." I like to think of them as no-arbitrage conditions.

Let's make some assumptions. There is no uncertainty. We are (the model economy is) in a steady state, so that output grows at the gross rate G. The level of output may be above or below its "natural" level (the level that would prevail if all frictions were absent). Let B denote the discount factor. Absent all frictions, the "natural" rate of interest is given by (G/B).

Let P denote the gross rate of inflation. There are two nominal assets, a bond that yields a gross nominal return R >= 1, and money, which yields a gross nominal return equal to 1. Money is assumed to be more liquid than bonds (bonds cannot be used in a subset of transactions).

The "Fisher equations" that emerge from the model can be written as follows:

[1] R*K = (G/B)*P and [2] 1*L = (G/B)*P

where K and L denote "liquidity premia." In most models, K=1. In this case, the two equations above imply R = L. That is, the liquidity premium on money is equal to the nominal interest rate.

In the "newmonetarist" models that Steve studies, assets apart from money may serve in some manner as exchange media. If financial markets do not work perfectly well (say, because of limited commitment and asymmetric information frictions), then the supply of exchange media may be "scarce." In the present context, this implies K>1, and equations [1] and [2] imply: R*K = L.

The traditional Friedman rule policy implies R = 1, K = L =1, so that P = (B/G). But Steve is assuming the government does not have enough instruments to implement the Friedman rule. In fact, he makes a distinction between the monetary and fiscal authorities. And, as he stresses in his paper (not his blog post), a lot hinges on exactly how one models this relationship.

One scenario that emerges in Steve's model is R = 1 and K = L > 1. In this case, the economy is at the ZLB, but government liabilities (cash and bonds -- they are perfect substitutes in this case) exhibit a liquidity premium. An open market operation of cash for bonds in this case has absolutely no effect -- this is the classic liquidity trap -- something that Krugman stressed long ago. From [1] and [2], the equilibrium inflation rate is given by P = (B*L)/G. The equilibrium real rate of interest on government liabilities is 1/P = G/(B*L), which is less than the "natural" real rate of interest (G/B). This is the sense in which the real rate of interest is "too low." (Of course, if you have a different theory of the way the world works, you may be thinking that the real rate of interest is "too high" -- but I'm not here to talk about that theory.)
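A quick numerical check of this ZLB scenario, using made-up parameter values (any B < 1, G > 1, L > 1 will do):

```python
# Numerical check of the ZLB equilibrium R = 1, K = L > 1.
# Parameter values are purely illustrative.
B, G, L = 0.99, 1.02, 1.03   # discount factor, gross growth rate, liquidity premium

P = B * L / G                # from [2]: 1*L = (G/B)*P with R = 1, K = L
real_rate = 1 / P            # gross real return on government liabilities (R = 1)
natural_rate = G / B         # frictionless "natural" real rate
friedman_P = B / G           # inflation under the Friedman rule (L = 1)

# real_rate < natural_rate: the real rate is "too low" whenever L > 1,
# and P > friedman_P: the liquidity premium props inflation up above
# the Friedman-rule level, despite being at the ZLB.
```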

Suppose that the bond we are talking about above is a short-maturity instrument. Imagine that the fiscal authority also issues a long-maturity instrument. Moreover, assume that this long-bond is less liquid than the short-bond (the short-bond is, in present circumstances, viewed as a perfect substitute for cash). Steve then asks what the model implies when the open market operation consists of a swap of cash for the long-bond. In this case, not surprisingly, QE matters. But how does it matter?

The effect of this policy in Williamson's model is to lower the nominal interest rate at the long end of the term structure. Because the Fed is sucking out relatively less liquid assets and replacing them with relatively liquid assets, liquidity premia decline (as one would expect). So, if we take a look at equation [2], we see that the model implies that inflation must decline: P = (B*L)/G. What is the economic intuition for this? Evidently, one of the effects of QE (in the model) is to increase the real stock of currency held by the private sector, and agents require an increase in currency's rate of return (a fall in the inflation rate) to induce them to hold more currency. (Remember that the results are all contingent on the way monetary and fiscal policy are modeled.)

So this is kind of interesting for a couple of reasons. First, the model offers an explanation for why we do not observe deflation, given that we are at the ZLB (a bit of a puzzle for conventional theory). Second, it offers an explanation for how QE may be putting downward pressure on inflation. How quantitatively important these effects are relative to others remains an open question.
 
Krugman and DeLong seem to want to argue that Williamson's results are "incorrect" because the model equilibrium he is focusing on is "unstable." I'm pretty sure I know where they're coming from, but I'm not sure that the criticism applies here.

First, to demonstrate the "stability properties" of an equilibrium, one actually has to go and work out the math. I think it's fair to say that nobody has done that.

Second, what Krugman writes in his "Little Arrows" post is correct, but it is correct only in the context of a particular theory. As I've mentioned before, Peter Howitt demonstrates here how pegging the nominal interest rate is unstable under a wide class of algorithms that govern the manner in which inflation expectations are formed (essentially, adaptive expectations). This led Howitt to argue that stability required a policy to raise interest rates more than one-for-one with inflation expectations. Hence, Howitt came up with the "Taylor principle" before Taylor did.
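Howitt's instability result is easy to illustrate with a toy reduced form (my own sketch, not Howitt's actual model; all parameter values are hypothetical): peg the nominal rate, let the real-rate gap drive output and inflation, and let expectations chase realized inflation adaptively. Any small deviation of expected inflation from the steady state then feeds on itself:

```python
# Sketch of Howitt-style instability under a pegged nominal interest rate
# with adaptive expectations. Reduced form and parameters are illustrative.
R, r_star = 0.04, 0.02          # pegged nominal rate, natural real rate
sigma, kappa, lam = 1.0, 0.3, 0.5  # IS slope, Phillips slope, expectations gain

def simulate(pe0, T=20):
    pe, path = pe0, []
    for _ in range(T):
        y = -sigma * ((R - pe) - r_star)  # real-rate gap moves output
        pi = pe + kappa * y               # actual inflation
        pe = pe + lam * (pi - pe)         # adaptive expectations chase inflation
        path.append(pi)
    return path

# steady state is pi = R - r_star = 2%; start expectations slightly above it
path = simulate(0.021)
# inflation spirals away from 2% instead of converging back to it
```

Higher expected inflation lowers the real rate under the peg, which raises output and actual inflation, which ratchets expectations up further: the fixed point is unstable, which is why Howitt argued the peg must respond more than one-for-one to expected inflation (the "Taylor principle").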

It is interesting to note, however, that the "stability properties" induced by the Taylor principle in standard New Keynesian models (which embed rational expectations) are something very different. At the opposite extreme, one might take the view that inflation expectations are formed in a manner described here, by Stephanie Schmitt-Grohe and Martin Uribe. Let me reproduce the diagram I used in that blog post here:


As you can see, it is identical to Krugman's "Little Arrows" diagram. The one big difference is that, under this particular theory of expectations formation (rational expectations), A is unstable and B is stable. So the "little arrows" run in the opposite direction here.

The little circle in the picture above demonstrates how the New Keynesians use the "Taylor principle." Essentially, they restrict attention to trajectories around the steady state point A that never leave that circle. If the Fed follows the Taylor principle, then there is only one point that satisfies this property, and it is point A. Voila -- we say that point A is "locally stable." (Yes, I know it sounds weird, but I'm just reporting the facts.) In their Perils of Taylor Rules, Benhabib, Schmitt-Grohe, and Uribe argue that only point B (the liquidity trap) is globally stable. (The fact that Japan has spent decades around a point like B suggests that it may in fact be stable.)

The other thing I'd like to add is that Williamson's results continue to hold even away from the ZLB. So Krugman's post in particular, which focuses on the properties of Taylor rules (absent in Williamson's model), seems a little off target.

So my interpretation of the criticisms I am hearing of Williamson's paper is that his critics are claiming that he is wrong because his results are inconsistent with the type of models these people are used to working with. It seems to me that the critics should have instead attacked his results and interpretations with empirical facts (or am I too old-fashioned in this regard?). After all, Williamson at least motivated his post with some data (the diagram at the top of this post). And he makes what is potentially a testable prediction (notice the if-then structure of the statement):
In general, if we think that inflation is being driven by the liquidity premium on government debt at the zero lower bound, then if the Fed keeps the interest rate on reserves where it is for an extended period of time, we should expect less inflation rather than more.
I have a little more difficulty in understanding Nick Rowe's objection. Certainly, a part of it seems to be what I just described above. Partly, I think that Nick is disagreeing not with Williamson's model, but with the way Williamson seems to run off at the end of his post with his "the Fed is in a trap" ideas.

And so, this now leads me to my own criticism of Williamson's post.

I wish he had spent a little more time elaborating on this statement he makes:
But the power of monetary policy to mitigate the inefficiency is limited. Basically, it's a fiscal problem. The U.S. government could issue more debt, by temporarily running a higher deficit. But that's not happening, so what can the central bank do about it?
It's a fiscal problem (well, the fundamental problem is limited commitment and asymmetric information in these models). The Treasury could alleviate the "asset shortage" by expanding the supply of Treasury debt! I discussed this idea here some time ago: Not Enough Debt? So isn't this nice? Different models, but similar policy conclusions. Implicitly, Williamson is taking the view that political constraints are preventing this from happening, so let's move on to study Fed policy.

The tone of his post near the end strikes me as odd. He seems rather critical of the way Fed economists generally think about the way monetary policy works. Fair enough. On the other hand, if we read his paper we find the following statement:
QE is a good thing, as purchases of long-maturity government debt by the central bank will always increase the value of the stock of collateralizable wealth.
That is, QE is a good thing in his model economy. In fact, I think his model suggests that the Fed should buy up all outstanding treasury debt (but that even that would not be enough because the problem is the limited supply of the stuff).

So what's his problem? It seems that conventional Fed thinking is that QE is inflationary when, as Williamson's paper shows, it may have the opposite effect. O.K., so what?

Then Williamson remarks that if the Fed really wants inflation, it should raise its policy rate (IOER). This, of course, is the statement that drew all sorts of criticism when Narayana Kocherlakota suggested something similar a few years back (thanks to Nick Rowe for once again starting that one). Williamson believes that raising the policy rate would be disruptive in the short-run, but that this is the way to achieve higher inflation in the long-run. I am not sure, however, whether his model suggests that higher inflation is a good thing (I don't think so.) These are all positive (not normative) statements.

So, the Fed is "stuck." That is, the Fed seems compelled to continue QE and keep the IOER at 0.25%. Williamson's model seems to suggest this is a good thing. But his model also suggests that the policy is ultimately deflationary (perceived to be a bad thing). The only way to prevent this trajectory is to raise the IOER (another way would be to expand the supply of treasury debt). But doing so will cause a recession because of monetary non-neutralities.

Not sure what any of this has to do with eating more crow though. What would the Fed be doing differently if they took this view? Not much, as far as I can see.

Monday, November 25, 2013

Connecting the Academic and Policy Worlds: Interview with James Bullard

An interview by Economic Dynamics with James Bullard, president of the Federal Reserve Bank of St. Louis (source).

EconomicDynamics Interviews James Bullard on policy and the academic world

James Bullard is President and CEO of the Federal Reserve Bank of St. Louis. His research focuses on learning in macroeconomics. Bullard's RePEc/IDEAS entry.
EconomicDynamics: You have talked about how you want to connect the academic world with the policy world. The research world is already working on some of these questions. Do you have any comments on that?
James Bullard: I have been dissatisfied with the notion that has evolved over the last 25 or 30 years that it was okay to allow a certain group of economists to work on really rigorous models and do the hard work of publishing in journals and then have a separate group that did policymaking and worried about policymaking issues. These two groups often did not talk to each other, and I think that that is a mistake. It is something you would not allow in other fields. If you are going to land a man on Mars, you are going to want the very best engineering. You would not say that the people who are going to do the engineering are not going to talk to the people who are strategizing about how to do the mission. An important part of my agenda is to force discussion between what we know from the research world and the pressing policy problems that we face and try to get the two to interact more. I understand about the benefits of specialization, which is a critical aspect of the world, but still I think it is important that these two groups talk to each other.

ED: Is there a place in policy for the economic models of the "ivory tower"?
JB: I am not one who thinks that the issues discussed in the academic journals are just navel gazing. Those are our core ideas about how the economy works and how to think about the economy. There are no better ideas. That is why they are published in the leading journals. So I do not think you should ignore those. Those ideas should be an integral part of the thinking of any policymaker. I do not think that you should allow policymaking to be based on a sort of second-tier analysis. I think we are too likely to do that in macroeconomics compared to other fields.
ED: Why do you think that is?
JB: I think people have some preconceptions about what they think the best policy is before they ever get down to any analysis about what it might be. I understand people have different opinions, but I see the intellectual market place as the battleground where you hash that out. I do not think the answers are at all obvious. A cursory reading of the literature shows you that there are many, many smart people involved. They have thought hard about the problems that they work on, and they have spent a lot of time even to eke out a little bit of progress on a particular problem. The notion that all those thousands of pages could be summed up in a tweet or something like that is kind of ridiculous. These are difficult issues, and that is why we have a lot of people working on them under some fair amount of pressure to produce results. Sometimes I hear people talking about macroeconomics, and they think it is simple. It is kind of like non-medical researchers saying, "Oh, if I were involved, I would be able to cure cancer." Well fine, you go do that and tell me all about it. But the intellectual challenge is every bit as great in macroeconomics as it is in other fields where you have unsolved problems. The economy is a gigantic system with billions of decisions made every day. How are all these decisions being made? How are all these people reacting to the market forces around them and to the changes in the environment around them? How is policy interacting with all those decisions? That is a hugely difficult problem, and the notion that you could summarize that with a simple wave of the hand is silly.

ED: Do you remember the controversy, the blogosphere discussion, that macroeconomics has been wrong for two decades and all that criticism? Do you have any comments on that?
JB: I think the crisis emboldened people that have been in the wilderness for quite a while. They used the opportunity to come out and say, "All the stuff that we were saying that was not getting published anywhere is all of the sudden right." My characterization of the last 30 years of macroeconomic research is that the Lucas-Prescott-Sargent agenda completely smoked all rivals. They, their co-authors, friends, and students carried the day by insisting on a greatly increased level of rigor, and there was a tremendous amount of just rolling up their sleeves and getting into the hard work of actually writing down more and more difficult problems, solving them, learning from the solution and moving on to the next one. Their victory remade the field and disenfranchised a bunch of people. When the financial crisis came along, some of those people came back into the fray, and that is perfectly okay. But, there is still no substitute for heavy technical analysis to get to the bottom of these issues. There are no simple solutions. You really have to roll up your sleeves and get to work.

ED: What about the criticism?
JB: I think one thing about macroeconomics is that because everyone lives in the economy and they talk to other people who live in the economy, they think that they have really good ideas about how this thing works and what we need to do. I do not begrudge people their opinions, but when you start thinking about it, it is a really complicated problem. I love that about macroeconomics because it provides for an outstanding intellectual challenge and great opportunities for improvement and success. I do not mind working on something that is hard. But everyone does seem to have an opinion. In medicine you do see some of that: People think they know better than the doctors and they think they are going to self-medicate because their theory is the right one, and the doctors do not know what they are doing. Steve Jobs reportedly thought like this when he was sick. But I think you see less of this type of attitude in the medical arena than you do in economics. That is distressing for us macroeconomists, but maybe we can improve that going forward.

ED: What do you think about the criticism of economists not being able to forecast or to see the financial crisis? Do you have any thoughts on that?
JB: One of the main things about becoming a policymaker is the juxtaposition between the role of forecasting and the role of modeling to try to understand how better policy can be made. In the policy world, there is a very strong notion that if we only knew the state of the economy today, it would be a simple matter to decide what the policy should be. The notion is that we do not know the state of the system today, and it is all very uncertain and very hazy whether the economy is improving or getting worse or what is happening. Because of that, the notion goes, we are not sure what the policy setting should be today. So, the idea is that the state of the system is very hard to discern, but the policy problem itself is often disarmingly simple. What is making the policy problem hard is discerning the state of the system. That kind of thinking is one important focus in the policy world. In the research world, it is just the opposite. The typical presumption is that one knows the state of the system at a point in time. There is nothing hazy or difficult about inferring the state of the system in most models. However, the policy problem itself is often viewed as really difficult. It might be the solution to a fairly sophisticated optimization problem that carefully weighs the effects of the policy choice on the incentives of households and firms in a general equilibrium context. That kind of attitude is just the opposite of the way the policy world approaches problems. I have been impressed by this juxtaposition since I have been in this job. Now, forecasting itself I think is overemphasized in the policy world because there probably is an irreducible amount of ambient noise in macroeconomic systems which means that one cannot really forecast all that well even in the best of circumstances. We could imagine two different economies, the first of which has a very good policy and the second of which has a very poor policy.
In both of these economies it may be equally difficult to forecast. Nevertheless, the first economy by virtue of its much better policy would enjoy much better outcomes for its citizens than the economy that had the worse policy. Ability to forecast does not really have much to do with the process of adopting and maintaining a good policy. The idea that the success of macroeconomics should be based on forecasting is a holdover from an earlier era in macroeconomics, which Lucas crushed. He said the goal of our theorizing about the economy is to understand better what the effects of our policy interventions are, not necessarily to improve our ability to forecast the economy on a quarter-to-quarter or year-to-year basis. What we do want to be able to forecast is the effect of the policy intervention, but in most interesting cases that would be a counterfactual. We cannot just average over past behavior in the economy, which has been based on a previous policy, and then make a coherent prediction about what the new policy is going to bring in terms of consumption and investment and other variables that we care about. It is a different game altogether than the sort of day-to-day forecasting game that goes on in policy circles and financial markets. Of course it is important to try to have as good a forecast as you can have for the economy. It is just that I would not judge success on, say, the mean square error of the forecast. That may be an irreducible number given the ambient noise in the system. One very good reason why we may not be able to reduce the amount of forecast variance is that if we did have a good forecast, that good forecast would itself change the behavior of households, businesses, and investors in the economy. Because of that, we may never see as much improvement as you might hope for on the forecasting side. The bottom line is that better forecasting would be welcome but it is not the ultimate objective. 
We [central banks] do not really forecast anyway. What we do is we track the economy. Most actual forecasting day to day is really just saying: What is the value of GDP last period or last quarter? What is it this quarter? And what is it going to be next quarter? Beyond that we predict that it will go back to some mean level which is tied down by longer-run expectations. There is not really much in the way of meaningful forecasting about where things are going to go. Not that I would cease to track the economy--I think you should track the economy--but it is not really forecasting in the conventional sense. The bottom line is that improved policy could deliver better outcomes and possibly dramatically better outcomes even in a world in which the forecastable component of real activity is small.

ED: Can the current crisis be blamed on economic modeling?
JB: No. I think that this is being said by people who did not spend a lot of time reading the literature. If you were involved in the literature as I was during the 1990s and 2000s, what I saw was lots of papers about financial frictions, about how financial markets work and how financial markets interact with the economy. It is not an easy matter to study, but I think we did learn a lot from that literature. It is true that that literature was probably not the favorite during this era, but there was certainly plenty going on. Plenty of people did important work during this period, which I think helped us and informed us during the financial crisis on how to think about these matters and where the most important effects might come from. I think there was and continues to be a good body of work on this. If it is not as satisfactory as one might like it to be, that is because these are tough problems and you can only make so much progress at one time. Now, we could think about where the tradeoffs might have been. I do think that there was, in the 1990s in particular, a focus on economic growth as maybe the key phenomenon that we wanted to understand in macroeconomics. There was a lot of theorizing about what drives economic growth via the endogenous growth literature. You could argue that something like that stole resources away from people who might have otherwise been studying financial crises or the interaction of financial systems with the real economy, but I would not give up on those researchers who worked on economic growth. I think that was also a great area to work on, and they were right in some sense that in the long run what you really care about is what is driving long-run economic growth in large developed economies and also in developing economies, where tens of millions of people can be pulled out of poverty if the right policies can be put in place. 
So to come back later, after the financial crisis, and say, in effect, "Well those guys should not have been working on long-run growth; they should have been working on models of financial crisis," does not make that much sense to me and I do not think it is a valid or even a coherent criticism of the profession as a whole. In most areas where researchers are working, they have definitely thought it through and they have very good ideas about what they are working on and why it may be important in some big macro sense. They are working on that particular area because they think they can make their best marginal contribution on that particular question. That brings me to another related point about research on the interaction between financial markets and the real economy. One might feel it is a very important problem and something that really needs to be worked on, but you also might feel as a researcher, "I am not sure how I can make a contribution here." Maybe some of this occurred during the two decades prior to the financial crisis. On the whole, at least from my vantage point (monetary theory and related literature) I saw many people working on the intersection between financial markets and the real economy. I thought they did make lots of interesting progress during this period. I do think that the financial crisis itself took people by surprise with its magnitude and ferocity. But I do not think it makes sense to then turn around and say that people were working on the wrong things in the macroeconomic research world.

ED: There is a tension between structural models that are built to understand policy and statistical models that focus on forecasting. Do you see irrevocable differences between these two classes of models?
JB: I do not see irrevocable differences because there is no alternative to structural models. We are trying to get policy advice out of the models; at the end of the day, we are going to have to have a structural model. We have learned a lot about how to handle data and how to use statistical techniques for many purposes in the field, and I think those are great advances. These days you see a lot of estimation of DSGE models, so that is a combination of theorizing with notions of fit to the data. I think those are interesting exercises. I do not really see this as being two branches of the literature. There is just one branch of the literature. There may be some different techniques that are used in different circumstances. Used properly, you can learn a lot from purely empirical studies because you can simply characterize the data in various ways and then think about how that characterization of the data would match up with different types of models. I see that process as being one that is helpful. But it has to be viewed in the context that ultimately we want to have a full model that will give you clear and sharp policy advice about how to handle the key decisions that have to be made.

ED: What are policy makers now looking for from the academic modelers?
JB: I have argued that the research effort in the U.S. and around the world in economics needs to be upgraded and needs to be taken more seriously in the aftermath of the crisis. I think we are beyond the point where you can ask one person or a couple of smart people to collaborate on a paper and write something down in 30 pages and make a lot of progress that way. At some point the profession is going to have to get a lot more serious about what needs to be done. You need to have bigger, more elaborate models that have many important features in them, and you need to see how those features interact and understand how policy would affect the entire picture. A lot of what we do in the published literature and in policy analysis is sketch ingenious but small arguments that might be relevant for the big elephant that we cannot really talk about because we do not have a model of the big elephant. So we only talk about aspects of the situation, one aspect at a time. Certainly, being very familiar with research myself and having done it myself, I think that approach makes a great deal of sense. As researchers, we want to focus our attention on problems that can be handled and that one can say something about. That drives a lot of the research. But in the big picture, that is not going to be enough in the medium run or the long run for the nation to get a really clear understanding of how the economy works and how the various policies are affecting the macroeconomic outcomes. We should think more seriously about building larger, better, more encompassing types of models that put a lot of features together so that we can understand the relative magnitudes of various effects that we might think are going on all at the same time. We should also do this within the DSGE context, in which preferences are well specified and the equilibrium is well defined. 
Therein lies the conflict: to get to big models that are still going to be consistent with micro foundations is a difficult task. In other sciences you would ask for a billion dollars to get something done and to move the needle on a problem like this. We have not done that in economics. We are way too content with our small sketches that we put in our individual research papers. I do not want to denigrate that approach too much because I grew up with that and I love that in some sense, but at some point we should get more serious about this. One reason why this has not happened is that there were attempts in the past (circa 1970) to try to put together big models, and they failed miserably because they did not have the right conceptual foundations about how you would even go about doing this. Because they failed, I think that has made many feel like, "Well, we are not going to try that again." But just because it failed in the past does not mean it is always going to fail. We could do much better than we do in putting larger models together that would be more informative about the effects of various policy actions without compromising on our insistence that our models be consistent with microeconomic behavior and the objects that we study are equilibrium outcomes under the assumptions that we want to make about how the world works.

ED: Can you perhaps talk about some cutting edge research? You have made some points on policy based on cutting edge research.
JB: One of the things that struck me in the research agenda of the last decade or more is the work by Jess Benhabib, Stephanie Schmitt-Grohe and Martin Uribe on what you might think of as a liquidity trap steady state equilibrium which is routinely ignored in most macroeconomic models. But they argue it would be a ubiquitous feature of monetary economies in which policymakers are committed to using Taylor-type rules and in which there is a zero bound on nominal interest rates and a Fisher relation. Those three features are basically in every model. I thought that their analysis could be interpreted as being very general plus you have a really large economy, the Japanese economy, which seems to have been stuck in this steady state for quite a while. That is an example of a piece of research that influenced my thinking about how we should attack policy issues in the aftermath of the crisis. I remain disappointed to this day that we have not seen a larger share of the analysis in monetary policy with this steady state as an integral part of the picture. It seems to me that this steady state is very, very real as far as the industrialized nations are concerned. Much of the thinking in the monetary policy world is that "the U.S. should not become Japan." Yet in actual policy papers it is a rarity to see the steady state included. That brings up another question about policy generally. Benhabib et al. are all about global analysis. A lot of models that we have are essentially localized models that are studying fluctuations in the neighborhood of a particular steady state. There is a fairly rigorous attempt to characterize the particular dynamics around that particular steady state as the economy is hit by shocks and the policymaker reacts in a particular way. There are also discussions of whether the model so constructed provides an appropriate characterization of the data or not, and so on. 
However, whether the local dynamics observed in the data are exactly the way a particular model is describing them or not is probably not such a critical question compared to the possibility that the system may leave the neighborhood altogether. The economy could diverge to some other part of the outcome space which we are not accustomed to exploring because we have not been thinking about it. Departures of this type may be associated with considerably worse outcomes from a welfare perspective. I have come to feel fairly strongly that a lot of policy advice could be designed and should be designed to prevent that type of an outcome. If the economy is going to stay in a small neighborhood of a given steady state forever, do we really care exactly what the dynamics are within that small neighborhood? The possibility of a major departure from the neighborhood of the steady state equilibrium that one is used to observing gives a different perspective on the nature of 'good policy.' We need to know much more about the question: Are we at risk of leaving the neighborhood of the steady state equilibrium that we are familiar with and going to a much worse outcome, and if we are, what can be done to prevent that sort of global dynamic from taking hold in the economy? I know there has been a lot of good work on robustness issues. Tom Sargent and Lars Hansen have a book on it. There are many others who have also worked on these issues. I think, more than anything, we need perspectives on policy other than just what is exactly the right response to a particular small shock on a particular small neighborhood of the outcome space.
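As an aside, the two steady states Bullard refers to are easy to see with a bit of arithmetic: combine a Taylor-type rule (respecting the zero bound) with the Fisher relation and look for the inflation rates at which the two schedules agree. The sketch below is my own illustration, not Bullard's, and the parameter values are assumptions chosen purely for clarity.

```python
# Two steady states from a Taylor-type rule plus the Fisher relation.
# All parameter values are illustrative assumptions.
r_star = 0.02      # steady-state real interest rate
pi_target = 0.02   # the central bank's inflation target
phi = 1.5          # Taylor-rule response coefficient (active: phi > 1)

def taylor_rule(pi):
    """Nominal rate prescribed by the rule, respecting the zero bound."""
    return max(0.0, r_star + pi_target + phi * (pi - pi_target))

def fisher(pi):
    """Steady-state Fisher relation: nominal rate = real rate + inflation."""
    return r_star + pi

# Scan inflation rates from -5% to +5% for points where the two coincide.
steady_states = []
for bp in range(-500, 501):
    pi = bp / 10000.0
    if abs(taylor_rule(pi) - fisher(pi)) < 1e-9:
        steady_states.append(round(pi, 4))

# Two solutions: the targeted steady state (pi = +2%) and the unintended
# liquidity-trap steady state (pi = -2%, nominal rate stuck at zero).
print(steady_states)   # [-0.02, 0.02]
```

With an active rule (phi > 1) and a zero bound on the nominal rate, the second, deflationary steady state cannot be ruled out, which is exactly the Benhabib-Schmitt-Grohé-Uribe point.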

ED: Do you have an example?
JB: I have also been influenced by some recent theoretical studies by Federico Ravenna and Carl Walsh, in part because the New Keynesian literature has had such an important influence on monetary policymakers. A lot of the policy advice has been absorbed from that literature into the policymaking process. I would not say that policymakers follow it exactly, but they certainly are well informed on what the advice would be coming out of that literature. I thought the Ravenna-Walsh study did a good job of trying to get at the question of unemployment and inflation within this framework that so many people like to refer to, including myself on many occasions. They put a rigorous and state-of-the-art version of unemployment search theory into the New Keynesian framework with an eye toward describing optimal policy in terms of both unemployment and inflation. The answer that they got was possibly surprising. The core policy advice that comes out of the model is still price stability--that you really want to maintain inflation close to target, even when you have households in the model that go through spells of unemployment and even though the policymaker is trying to think about how to get the best welfare that you can for the entire population that lives inside the model. The instinct that many might have--that including search-theoretic unemployment in the model explicitly would have to mean that the policymaker would want to "put equal weight" on trying to keep prices stable and trying to mitigate the unemployment friction--turns out to be wrong. Optimal monetary policy is still all about price stability. I think that is important. We are in an era when unemployment has been much higher than what we have been used to in the U.S. It has been coming down, but it is still quite high compared to historical experience in the last few decades. For that reason many are saying that possibly we should put more weight on unemployment when we are thinking about monetary policy. 
But this is an example of a very carefully done and rigorous piece of theoretical research which can inform the debate, and the message that it leaves is that putting too much weight on unemployment might be actually counterproductive from the point of view of those that live inside the economy because they are going to have to suffer with more price variability than they would prefer, unemployment spells notwithstanding. I thought it was an interesting perspective on the unemployment/inflation question, which is kind of a timeless issue in the macro literature.
References:
Jess Benhabib, Stephanie Schmitt-Grohé and Martín Uribe, 2001. "The Perils of Taylor Rules," Journal of Economic Theory, vol. 96(1-2), pages 40-69, January.
James Bullard, 2013. "The Importance of Connecting the Research World with the Policy World," Federal Reserve Bank of St. Louis The Regional Economist, October.
James Bullard, 2013. "Some Unpleasant Implications for Unemployment Targeters," presented at the 22nd Annual Hyman P. Minsky Conference, New York, N.Y., April 17.
James Bullard, 2010. "Seven Faces of 'The Peril,'" Federal Reserve Bank of St. Louis Review, vol. 92(5), pages 339-52, September/October.
James Bullard, 2010. "Panel Discussion: Structural Economic Modeling: Is It Useful in the Policy Process?" presented at the International Research Forum on Monetary Policy, Washington, D.C., March 26.
Lars Peter Hansen and Thomas J. Sargent, 2007. Robustness. Princeton University Press.
Federico Ravenna and Carl Walsh, 2011. "Welfare-Based Optimal Monetary Policy with Unemployment and Sticky Prices: A Linear-Quadratic Framework," American Economic Journal: Macroeconomics, vol. 3(2), pages 130-62, April.

Tuesday, November 19, 2013

Flatlining in the UK

How bad is the UK recovery dynamic?


Bottomed out -- let's hope so! Can things get any worse?

Here's what real (inflation adjusted) GDP per capita looks like for the U.K. (1992:1 - 2013:2)...


Because real GDP is flat, any rise in nominal GDP is attributable entirely to inflation (increases in the general level of prices). From 1992 to 1997, the BoE targeted the RPIX inflation rate at 2% per annum. In 1997, the target was raised to 2.5%.
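The arithmetic behind that claim is just the growth-accounting identity: nominal GDP growth equals real growth plus inflation (to a first-order approximation). A minimal sketch, with illustrative numbers:

```python
# Growth-accounting identity: nominal growth = (1 + real)(1 + inflation) - 1.
# Numbers are illustrative: flat real GDP, inflation at the post-1997 target.
real_growth = 0.0
inflation = 0.025
nominal_growth = (1 + real_growth) * (1 + inflation) - 1
# With real GDP flat, nominal GDP grows at exactly the inflation rate (2.5%).
```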

In 2003, the UK switched to targeting CPI inflation at 2% per annum.


So unlike in many other countries, inflation appears to be running at a robust rate. Is this helping, hurting, or innocuous as far as real economic activity is concerned? (I would like the NGDP targeters to weigh in on this question.)

The following diagram decomposes real GDP (total, not per capita) into consumption (private and public), investment (public and private) and net exports.



So both domestic (real) expenditure components, consumption and investment, took a big hit in the recession. If we take the same data and normalize each series to 100 in 1992, we see that investment grew relatively faster during the boom, and took the bigger hit in the bust.


Now let's break down the (real) expenditure components between the private and public sectors. Again, normalize the levels to 100 in 1992. Here is what consumption looks like:

The big drop seems to be in private consumer spending. Government purchases of consumption goods appear to have held pretty steady through the downturn. What about capital spending? Here, we can only get a breakdown between private and public investment going back to 1997. Government investment is small relative to total investment, but has nevertheless remained elevated relative to private capital spending through most of the sample period:


Note: In April 2005 British Nuclear Fuels plc (BNFL) transferred to the Nuclear Decommissioning Authority (NDA) nuclear reactors that were reaching the ends of their productive lives. BNFL is classified as a public corporation in the National Accounts and the NDA as central government.

In terms of the UK's much publicized austerity measures, the data here suggest that most of any cuts in government spending must have been in the form of reduced transfer payments. Government spending on goods and services seems to have held up relatively well throughout the contraction in economic activity.

Thursday, November 14, 2013

Andrew Huszar: Confessions of a Quantitative Easer

Former Fed employee Andrew Huszar lays into the Fed here: Confessions of a Quantitative Easer. His opening salvo is a doozy:
I can only say: I'm sorry, America. As a former Federal Reserve official, I was responsible for executing the centerpiece program of the Fed's first plunge into the bond-buying experiment known as quantitative easing. The central bank continues to spin QE as a tool for helping Main Street. But I've come to recognize the program for what it really is: the greatest backdoor Wall Street bailout of all time.
What supports his claim that QE is a "bailout" for Wall Street? The fact that stock prices have risen. Goodness. Was he hoping instead that the Fed's QE program might have caused asset prices to plunge?

Perhaps not. But what about "Main Street?"
Despite the Fed's rhetoric, my program wasn't helping to make credit any more accessible for the average American. The banks were only issuing fewer and fewer loans. More insidiously, whatever credit they were extending wasn't getting much cheaper. QE may have been driving down the wholesale cost for banks to make loans, but Wall Street was pocketing most of the extra cash.
What justifies this claim? He doesn't really say. He doesn't really need to. Everyone who wants to believe this already knows it is true. And yet, inconveniently, we have the evidence:


I love the contradictions that emerge from his ill-thought-out diatribe. On the one hand, he claims that QE has had a marginal (but positive) impact on the real economy. But on the other hand, he suggests that QE has averted (postponed) an economic disaster -- a situation that would have forced our policymakers to confront the real structural problems that beset this great nation.

Here is Mr. Huszar on CNBC, where he appears to backtrack a bit. And for good reason: Melissa Lee dismantles him immediately with facts that contradict his argument. Most of his discourse is a babbling brook of incoherence. What is the man saying? What is his point?

At its most basic level, QE is simple to understand in terms of its motivation and its operation. To begin, it's not about printing money and injecting it as "gifts" or "bailouts" to various agents in the economy. The Fed is legally prohibited from such activities (which lie in the domain of fiscal policy).

All the Fed is permitted to do with the new money it creates is to buy securities--mainly government securities, but recently also agency debt (mortgage-backed securities issued by Fannie and Freddie). Agency debt currently yields about 3%. Fed paper yields (1/4)% or less. The Fed makes a profit on the interest rate differential. It remits this profit to the U.S. taxpayer (remittances have hit record levels in recent years).
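To put rough numbers on that differential, here is a back-of-the-envelope sketch. The yields are the ones quoted above; the portfolio size is a hypothetical round number, not an actual Fed balance-sheet figure.

```python
# Carry earned on QE purchases funded by reserve creation.
# Portfolio size is hypothetical; yields are those quoted in the post.
portfolio = 1.0e12        # $1 trillion of agency debt (assumed)
asset_yield = 0.03        # ~3% yield on agency MBS
funding_cost = 0.0025     # ~(1/4)% paid on the reserves that fund the purchase
annual_carry = portfolio * (asset_yield - funding_cost)
# Roughly $27.5 billion per year, remitted to the U.S. Treasury.
```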

The purpose of printing money to buy agency (and other) debt is to drive up the price of these instruments--equivalently, to drive down their yields. Savers who have government bonds and other securities in their wealth portfolios experience capital gains as interest rates fall. Homeowners refinance their mortgages at lower rates, releasing purchasing power for other purposes. Lower interest rates will hopefully stimulate capital (and other forms of) spending. That's the basic idea.

How well has it worked? The effects have likely been modestly positive. But nobody knows for sure. What are the costs? I am hard pressed to identify immediate costs. Huszar suggests that one cost has been to divert attention away from the real structural problems that need to be fixed. I agree with this sentiment, but disagree that it has anything to do with QE per se. It has more to do with the general belief that monetary policy can fix the problems at hand. There may, of course, be future costs to contend with, like future inflation. But inflation and inflation expectations remain low and anchored.

I'm not sure what Mr. Huszar was expecting when he took his "dream job." What did he expect a bond buying program to entail? What would he have done differently and why? And as for his apology, I'll take it more seriously when I see him return his salary to the American people.

Monday, November 11, 2013

QE in Japan: Past and Present

Japanese Prime Minister Shinzo Abe
One of PM Shinzo Abe's "three arrows" of economic stimulus entails a massive "monetary stimulus" designed to slay Japan's persistently moderate deflation.

This is the second time in the last decade that Japan has experimented with QE (quantitative easing). How did the experiment work out in the past? And is there any reason to believe that the outcome will be different this time around?

Let's begin by taking a look at the supply of base money in Japan (Jan 1980 - Oct 2013).


The first QE program started in March 2001 and ended five years later, in March 2006. The second QE program is evident from the chart.

According to this source, the original QE program had four goals: (1) stabilize the banking sector; (2) lower long-term interest rates; (3) increase inflation expectations; and (4) stimulate bank lending. Evidently, the program had some success with (1) and (2), but failed with (3) and (4).

Here is how core inflation behaved in Japan over the period 1992-2012:


So basically just a moderate deflation since 2000. Is this a bad thing? The conventional wisdom seems to think so. For example, here is Barry Eichengreen on the subject:
Recall that deflation wreaks its damage by discouraging spending – investment spending in particular. No one questions, therefore, that putting Japanese prices on a gradual upward trend is needed to encourage growth.
Hmm, I find these to be rather odd statements, especially from an excellent economic historian. Theoretically, it is doubtful that a moderate expected deflation (or inflation) is really that harmful (it's the large unanticipated swings that potentially hurt). Here is some work by another set of fine economic historians on the subject: Good vs Bad Deflation: Lessons from the Gold Standard Era.

But never mind Gold Standard eras. What about Japan? As I've pointed out here, Japan actually experienced a robust boom in private investment spending from 2002-2008 (as part of the so-called Koizumi boom). So I'm not entirely sure what Eichengreen is on about here.

Let me reproduce my chart for real GDP in Japan:


To my eye, it looks like Japan was basically getting back on track after the interruption of the Asian financial crisis in 1997. In fact, there are signs of accelerating growth in the two years leading up to the 2008 financial crisis. Did Japan's QE policy have anything to do with the Koizumi boom? I can hardly see how. The massive injection of cash was removed in 2006 with no noticeable impact on real economic activity (or inflation, for that matter).

Why didn't the original QE have an impact on inflation? We could talk all day about this. Let's start by looking at a broader measure of money: M2 (currency in circulation plus bank deposit liabilities). 


Bank liabilities are created whenever a bank makes a new loan (the liabilities are destroyed whenever a bank loan is repaid). Because bank liabilities are used widely in making payments, they are money. Thus, the red line in the figure above -- the growth rate in M2 -- largely captures the growth rate in bank lending activity. As you can see, the growth rate of M2 is much lower and much more stable than the growth rate in the money base.

To a first approximation, it seems that the effect of QE is on bank reserves and not on currency in circulation/bank lending (sound familiar?). Here is the money multiplier (M2 divided by base money) in Japan:
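The money multiplier calculation is just a ratio, but it's worth seeing why a QE-style expansion of the base mechanically collapses it when bank lending (and hence M2) stays flat. A minimal sketch, using made-up figures rather than actual BOJ data:

```python
# Money multiplier: M2 divided by the monetary base.
# All figures below are hypothetical (think "trillions of yen"),
# chosen only to illustrate the mechanics.

def money_multiplier(m2, base):
    return m2 / base

m2 = 700.0           # broad money: currency + bank deposit liabilities
base_pre_qe = 70.0   # monetary base before the QE injection
base_post_qe = 110.0 # base after QE; the injection sits as bank reserves

# If M2 barely moves while the base balloons, the multiplier falls:
print(money_multiplier(m2, base_pre_qe))   # 10.0
print(money_multiplier(m2, base_post_qe))  # ~6.36
```

This is exactly the pattern in the Japanese chart: the base grows, M2 doesn't, so the multiplier drops, which is another way of saying the QE money sat in reserves rather than circulating through new lending.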


But on the other hand, maybe this time is a bit different; at least, in terms of inflation expectations. Here are some market-based measures of inflation expectations in Japan (based on the expectations implied by comparing the yields on nominal Japanese government bonds and their inflation-protected counterparts at various maturities).


Here, we only have the 10-year inflation expectation going back to 2004 (it ends some time in 2008 and reappears right at the end of the sample, at about 1%). I've plotted all available maturities here to give us the broad picture. As with the U.S., inflation expectations took a dive during the financial crisis (see here). While inflation expectations have been trending upward since before Abe took office, it is notable that they have continued to climb significantly past 1%.

Here is a plot of the expected real interest rate on Japanese government bonds at different maturities:


So it appears that Abenomics has "succeeded" in driving the real interest rate into negative territory. I suppose this is a good thing if for some reason the market "wants" negative real rates, but is prevented from achieving them owing to the zero lower bound on nominal interest rates.
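The arithmetic behind both charts is simple. The breakeven inflation rate is the nominal yield minus the yield on the inflation-protected bond of the same maturity, and the (approximate) Fisher relation gives the expected real rate as the nominal yield minus expected inflation. A sketch with hypothetical yields, not actual JGB data:

```python
# Breakeven inflation and expected real rates from bond yields.
# Yields are in percent and purely illustrative.

def breakeven_inflation(nominal_yield, indexed_yield):
    # Expected inflation implied by comparing a nominal bond with an
    # inflation-protected bond of the same maturity.
    return nominal_yield - indexed_yield

def expected_real_rate(nominal_yield, expected_inflation):
    # Fisher relation (approximate): real = nominal - expected inflation.
    return nominal_yield - expected_inflation

nominal_10y = 0.8   # hypothetical 10-year nominal JGB yield (%)
indexed_10y = -0.4  # hypothetical 10-year inflation-indexed yield (%)

pi_e = breakeven_inflation(nominal_10y, indexed_10y)
print(pi_e)                                     # 1.2
print(expected_real_rate(nominal_10y, pi_e))    # negative real rate
```

Note that by construction the implied real rate here is just the indexed-bond yield; the point is that once breakevens climb above the nominal yield, the expected real rate turns negative.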

But the deeper question is: Why do real rates want to be so low? And why should we expect a resumption of "normal" economic activity once these negative real rates have been achieved?

Data source for Japanese inflation expectations: Bloomberg

Tuesday, October 22, 2013

Employment slumps in Canada and the U.S.

Some time ago I wrote about the prospect of the U.S. economy going through a Canadian-style slump (see here). To summarize: The recession that hit Canada and the U.S. in the early 1990s was much more severe in Canada than in the U.S., and the recovery in Canada took almost a decade to complete. In 2008, the tables appear to have turned. In what follows, I plot the employment-to-population ratio for Canada from 1989:1 to 2003:1 and match it up against the same ratio for the U.S. from 2007:1 to the present. The parallels thus far are striking.

Let's start with the employment ratios (courtesy of my able research assistant, Li Li) for the whole population in both countries: (All starting points normalized to 100 -- the actual employment rates are close in any case.)

This shows that the slump, as measured by the drop in employment, was about the same magnitude for Canada in 1990-91 as for the United States in 2008-09. The recovery dynamic in both cases appears to be painfully slow.
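The normalization mentioned above is straightforward: each series is rescaled so that its starting point equals 100, which lets the Canadian (1989:1-) and U.S. (2007:1-) episodes be overlaid directly. A minimal sketch with made-up employment-ratio numbers:

```python
# Normalize each employment-to-population series to 100 at its
# starting point, so slumps from different eras can be compared.
# The numbers below are hypothetical, for illustration only.

def normalize_to_100(series):
    return [100.0 * x / series[0] for x in series]

canada_1989 = [62.0, 60.5, 59.8, 60.2]  # hypothetical quarterly ratios
us_2007     = [63.0, 61.2, 60.1, 60.4]

print(normalize_to_100(canada_1989))  # starts at 100.0
print(normalize_to_100(us_2007))      # starts at 100.0
```

Because the actual employment rates in the two countries are close, the normalization mostly just aligns the starting points; the subsequent paths are then directly comparable as percentage deviations from the pre-slump level.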

Let's now decompose employment across various age groups.





In terms of young and prime-age workers, the U.S. looks a little more depressed relative to the Canadian experience. The experience of older U.S. workers seems less depressed (but the behavior of older workers since the mid-1990s is influenced by a change in secular dynamics, so it perhaps should not be viewed as a recovery dynamic).

Now let's decompose by age and sex. Here we have the data for adult men:


And here we have the age-sex decomposition for men:






The correspondence between those aged 20-55 (the bulk of the population) is very close. Here is the data for adult females:


And here is the age-sex decomposition for women:






The most recent U.S. recession is sometimes labeled a "mancession" in reference to the fact that men appear to have been particularly hard hit (my colleague Silvio Contessi and my RA Li Li talk a bit about this phenomenon here). It is interesting to note that while this may have been the case, the data here suggest that U.S. females were nevertheless hit harder than their Canadian counterparts were in the 1990s.

Just for fun, I asked Li Li to plot broad stock market indices: the TSX composite index for Canada and the S&P 500 for the U.S. (both series have been adjusted for inflation).
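Adjusting a nominal index for inflation just means deflating it by a price index so that all values are expressed in constant-price terms. A minimal sketch, with hypothetical index and CPI values rather than the actual TSX/S&P 500 data:

```python
# Deflate a nominal stock index by the CPI so all values are
# expressed in the prices of a chosen base period.
# All numbers below are hypothetical.

def real_index(nominal, cpi, base_cpi):
    return [n * base_cpi / p for n, p in zip(nominal, cpi)]

index_nominal = [1400.0, 900.0, 1100.0, 1300.0]  # hypothetical index levels
cpi           = [210.0, 212.0, 218.0, 225.0]     # hypothetical CPI levels

# Express everything in first-period prices:
print(real_index(index_nominal, cpi, cpi[0]))
```

Without this adjustment, a nominal index can appear to "recover" merely because the price level has risen, so deflating matters when comparing episodes decades apart.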


Anyone willing to bet against the EMH?

At this point, I'm not entirely sure how to interpret this data. My feeling is that something useful may come out of studying the Canadian episode in greater detail. Maybe a few Ph.D. students are willing to take up the challenge?