Does Lower-for-Longer Work?
The Fed’s new policy framework, as I outlined in a previous post, seems on the surface to be a quasi-average-inflation-targeting approach. Basically, in its “Statement of Longer-Run Goals…”, the FOMC seems to have committed itself to a policy whereby it will make up for past misses of its inflation target. But, as I pointed out, the Statement is not much of a commitment, as by design it’s vague. Further, the asymmetric specification gives it away. The FOMC says that it will tolerate inflation above 2% for some time after a period of below-target inflation, but says nothing about what happens if inflation were to run persistently above target (not to mention other shortcomings). As such, the policy framework appears to look more like a typical lower-for-longer approach, which has been around in the academic literature for going-on 20 years or so.
How does lower-for-longer work, allegedly? The idea is that, in a Keynesian world, the zero lower bound (or effective lower bound, ELB) on the nominal interest rate can constrain monetary policy, and the constraint binds more frequently given persistently low real rates of interest. But, in states where the ELB binds, the central bank “has other tools,” as central bankers like to say. Those other tools are quantitative easing (QE) and forward guidance. Lower-for-longer is forward guidance, and forward guidance is a particular example of the benefits that can potentially be had from well-understood monetary policy rules.
In an ELB episode, the claim is that lower-for-longer works through a commitment to higher inflation in the future (after the shock that induced the ELB episode dissipates) than would be the case if the central bank arrived at the future date without such a commitment. That higher anticipated future inflation feeds back to the present: with the nominal rate pinned at the ELB, higher expected inflation lowers the real interest rate today, which causes intertemporal substitution, that is, consumers move expenditure from the future to the present, and this increases demand-determined output in this Keynesian world.
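The mechanism runs entirely through the Fisher relation: real rate = nominal rate − expected inflation. A minimal sketch, with the nominal rate pinned at zero and purely illustrative numbers (the 1% and 3% figures are assumptions, not calibrated to anything):

```python
# Fisher relation at the ELB: with the nominal rate stuck at zero,
# raising expected inflation is the only way to push the real rate down.
# All numbers are illustrative.

nominal_rate = 0.0  # policy rate pinned at the ELB

def real_rate(expected_inflation):
    """Ex ante real rate via the Fisher relation."""
    return nominal_rate - expected_inflation

# Without a make-up commitment, suppose expected inflation is 1%:
r_no_commit = real_rate(0.01)   # real rate of -1%

# A credible lower-for-longer commitment raises expected inflation to 3%:
r_commit = real_rate(0.03)      # real rate of -3%

# The lower real rate induces intertemporal substitution toward current
# consumption, which raises demand-determined output in these models.
print(r_no_commit, r_commit)
```

The whole effect of the commitment, in this stripped-down form, is the gap between the two real rates.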
So that seems to be what the FOMC has in mind. But lower-for-longer is a policy intended specifically for ELB episodes, while the FOMC seems particularly focused on the current ELB episode. So why would they write that approach into a statement about the FOMC’s longer-run goals? I can think of reasons, but those reasons have more to do with bureaucracies and how they work, rather than good policy or economic science.
In any case, where did the FOMC get the idea? You could start with a piece by Ben Bernanke, written in 2017. Bernanke argued for a type of temporary price level targeting approach. That is, the FOMC would operate in two modes: ELB policy and normal policy. ELB policy would be lower-for-longer, and normal policy would be standard inflation targeting, with no makeups. For Bernanke, price level targeting at the ELB looks like lower-for-longer. If inflation tends to be below target at the ELB, then a temporary price level target implies that you’ll have to make up the undershoot with an overshoot of the inflation target in the future.
One issue is that lower-for-longer policies need a number of features in place to work. The central bank needs to be capable of committing, it has to have the intention of following through on the commitment, and it has to know how to meet the commitment. In the FOMC’s case, they didn’t get past the commitment part, as the new framework seems to leave the door wide open to discretion, and their intentions are unclear. Further, there’s nothing in the current public statements of Fed officials that suggests their ideas about inflation control are any different from those of other central bankers in the world. And, generally, there’s been a tendency in the world since the global financial crisis for inflation target undershooting. And the central banks that fall short of 2% inflation seem to want 2% inflation, so maybe they don’t understand how inflation control works.
But what does the theory say? Bernanke’s piece from 2017 cites some of the literature on forward guidance, which includes early work by Krugman and Eggertsson and Woodford. Post-financial crisis, the key work is by Werning, in a New Keynesian framework, and the HANK models of Hagedorn and coauthors, McKay and coauthors, and Feiveson and coauthors. The latter is a paper written by Board of Governors economists, and presented at an FOMC meeting, in part to provide support for the Fed’s new policy framework.
Basically, these models work in much the same way, though the HANK models include rich distributional effects. But policy is as usual in New Keynesian models: there are no central bank assets and liabilities, and monetary policy amounts to announcing the short-term nominal interest rate. In Keynesian fashion, there are sticky prices, and maybe sticky wages too.
A typical approach is to specify a temporary “demand shock,” which is a decrease in the natural rate of interest, presumably or explicitly driven by an increase in the subjective discount factor, that implies hitting the inflation target is temporarily infeasible. This contagious increase in patience implies that being at the ELB is optimal, at least for the period when patience is high. Further, everyone in this model world knows that the discount factor drops back to some normal level forever, at a known future date, T.
So, what happens? These are typically environments with perfect certainty where we’re thinking about a single, temporary event, and how to respond to it. A key feature, which all these approaches have in common, is the assumption that “normalization” happens at some future date beyond T. Here’s footnote 5 in Werning’s paper:
“Although this seems like a natural assumption, it presumes that the central bank somehow overcomes the indeterminacy of equilibria that plagues these models. Usually this can be accomplished, for example, by adherence to a Taylor rule, with appropriate coefficients. However, following such a rule requires commitment, off the equilibrium path, which is not possible here…”
If you’re not familiar with the issues, here’s what he’s talking about. Suppose r* is the long-run real interest rate, or natural rate, when time t > T, and let i* denote the inflation target. What Benhabib and coauthors showed is that there is a wide class of models, for which we can work out exactly the global dynamics, that have multiple equilibria, even under aggressive Taylor rules, which are typically claimed to induce determinacy in New Keynesian models. Are the New Keynesians wrong? No, not quite, but they’re being misleading. We don’t know the global dynamics in basic New Keynesian models, and certainly not in HANK models, and New Keynesians are relying on local determinacy results.
But what the work of Benhabib and coauthors suggests is that all these models likely have the same indeterminacy issue. Here’s what it is. Under an aggressive Taylor rule (the kind Taylor recommended), there are two steady states: one where the nominal interest rate is R* = i* + r* and inflation is at target, and one where the nominal interest rate is zero and the inflation rate is i = -r*. As well, there are many non-steady-state equilibria that converge to the zero lower bound (ZLB) steady state. That is, the desired steady state, where the central bank achieves its inflation target, is unstable, and the ZLB steady state is stable. So the aggressive Taylor rule doesn’t solve the indeterminacy problem; it makes it worse.
That’s the Taylor rule perils problem. It happens because, under an aggressive Taylor rule, a central banker who sees inflation below target cuts the nominal interest rate aggressively, without taking Fisher effects into account. The net effect of the interest rate cut is to reduce inflation, inducing further interest rate cuts, until the central banker arrives at the ZLB, and stays there forever, unless the policy rule changes.
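The perils dynamics can be sketched in a few lines. This is a minimal perfect-foresight Fisherian toy in the spirit of Benhabib and coauthors, not any of the specific models cited above, and the parameter values (r* = 2%, a 2% inflation target, a Taylor coefficient of 1.5) are purely illustrative:

```python
# Fisher relation: i_t = r_star + pi_{t+1} (perfect foresight).
# Aggressive Taylor rule (coefficient phi > 1), truncated at the ZLB:
#   i_t = max(0, r_star + pi_target + phi * (pi_t - pi_target))
# Combining the two gives the inflation dynamics: pi_{t+1} = i_t - r_star.

r_star = 0.02      # natural real rate (illustrative)
pi_target = 0.02   # inflation target, so R_star = r_star + pi_target = 4%
phi = 1.5          # Taylor coefficient > 1, i.e. "aggressive"

def step(pi):
    i = max(0.0, r_star + pi_target + phi * (pi - pi_target))
    return i - r_star

def path(pi0, horizon=50):
    pis = [pi0]
    for _ in range(horizon):
        pis.append(step(pis[-1]))
    return pis

# Starting exactly at target is a steady state: i = R_star forever.
print(step(pi_target))   # 0.02

# Starting slightly below target, inflation spirals down and settles at
# the ZLB steady state pi = -r_star, with i = 0 forever.
print(path(0.01)[-1])    # -0.02
```

The aggressive response is exactly what destabilizes the intended steady state: any undershoot of the target gets amplified, period by period, until the rule hits the ZLB.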
But, in the papers I referred to above, on forward guidance, the authors are content to assume that the central banker reverts to R* at some future date after the ELB episode ends. That is, there is normalization, whereby the central bank reverts to the “neutral rate” consistent with the inflation target and the natural rate of interest. That’s important. At some future date, in these exercises, there are interest rate hikes, inflation will be at target, and working backward, that will be key to how the lower-for-longer policy works. But it doesn’t work if everyone believes that the normalization does not happen.
And, in fact, many properties of these forward guidance exercises, which may seem counterintuitive, flow from the assumption that normalization occurs at some future date. Basically, this amounts to assuming a terminal value for inflation. These dynamic systems tend to converge when we solve them forward from some arbitrary initial condition. But a difference equation that converges to a unique steady state when solved forward blows up when solved backward, and solving backward from a terminal condition is basically what’s being done in these exercises. As a result, forward guidance has large effects on current variables. That’s the “forward guidance puzzle.” In part, it’s puzzling because the farther in the future the forward guidance applies, the bigger the effects. If the Fed commits itself to a policy action 100 years in the future, the effect is enormous, but committing to a policy for next quarter has comparatively small effects. You might say the puzzle is why people keep doing the exercise that way. Generally, the reaction of researchers is to tweak the model to try to make the puzzle go away. For example, that’s one of the things that Hagedorn and coauthors show: the size of the forward guidance effect is highly dependent on distributional effects.
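The forward-versus-backward point can be seen with the simplest possible difference equation. This is a toy illustration, not any particular model:

```python
# A difference equation x_{t+1} = a * x_t with |a| < 1 converges to zero
# when solved forward, but blows up when solved backward from a terminal
# condition. Pinning down a terminal value far in the future, as
# forward-guidance exercises do, is a backward solution: the farther
# away the terminal date, the bigger the implied effect today.

a = 0.9  # stable forward: deviations shrink by 10% per period

def solve_forward(x0, T):
    x = x0
    for _ in range(T):
        x = a * x
    return x

def solve_backward(xT, T):
    # Impose a terminal value x_T and iterate back: x_t = x_{t+1} / a.
    x = xT
    for _ in range(T):
        x = x / a
    return x

print(solve_forward(1.0, 100))   # tiny, roughly 2.7e-05
print(solve_backward(1.0, 100))  # huge, roughly 3.8e+04
```

And the effect grows with the horizon: solving backward over 200 periods gives a far larger value today than solving backward over 100, which is exactly the shape of the forward guidance puzzle.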
But what does this have to do with practical policy matters? Taylor rule perils can explain why central banks have tended to undershoot inflation targets since the financial crisis. Why have some central banks, like the Bank of Canada, been more or less successful in achieving their inflation targets? Maybe they don’t adhere to a Taylor rule. For example, the BoC tends to respond more to the unemployment rate than to the inflation rate. Given the sluggishness in inflation, the BoC is able to keep inflation (mostly) in a 1%-3% range, as long as the deviations from the neutral rate of interest are not too persistent, and the average nominal interest rate is roughly R*.
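The Fisherian logic of that last condition can be sketched simply. This is a hypothetical rule, not the BoC’s actual reaction function: if the nominal rate fluctuates around R* with transitory, mean-zero deviations (driven by unemployment, say, rather than inflation), the Fisher relation pins average inflation near target, whatever drives the deviations.

```python
# Illustrative sketch: a nominal rate that averages R* = r_star + pi_target,
# with transitory mean-zero deviations, delivers average inflation of
# roughly pi_target through the Fisher relation i_t = r_star + pi_{t+1}.
# The uniform(-1%, +1%) deviations are an assumption for illustration.

import random

r_star = 0.02
pi_target = 0.02
R_star = r_star + pi_target  # 4% neutral nominal rate

random.seed(0)

# Nominal rate: R_star plus a transitory, mean-zero policy response.
rates = [R_star + random.uniform(-0.01, 0.01) for _ in range(10_000)]

# Perfect-foresight Fisher relation gives next-period inflation directly.
inflation = [i - r_star for i in rates]

avg_pi = sum(inflation) / len(inflation)
print(round(avg_pi, 3))  # close to 0.02, the target
```

The point is not that this is how the BoC thinks about it, but that hitting the target on average requires the nominal rate to be R* on average, which a Taylor rule stuck in the perils trap never delivers.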
And here’s the problem with the Fed’s current policy and forward guidance. From the April FOMC meeting:
“The Committee seeks to achieve maximum employment and inflation at the rate of 2 percent over the longer run. With inflation running persistently below this longer-run goal, the Committee will aim to achieve inflation moderately above 2 percent for some time so that inflation averages 2 percent over time and longer‑term inflation expectations remain well anchored at 2 percent. The Committee expects to maintain an accommodative stance of monetary policy until these outcomes are achieved.”
So, that’s the lower-for-longer strategy. “Accommodative stance” seems to include a fed funds target range of 0%-0.25%. But what happens if the conditions for a path to normalization are never observed? And, of course, whether those conditions are observed or not depends on the policy. But if the FOMC is actually committed to its forward guidance, theory and evidence suggest they’re going to be stuck in a policy trap. That’s what Japan looks like: a central bank nominal interest rate target at essentially zero for about 26 years now, inflation that averages about zero, and people who expect inflation of about zero forever.
The key assumption the FOMC is relying on is that nominal interest rates at zero for a long time will eventually make inflation go up. But, as I’ve argued, that will happen only if people expect normalization at the end of the rainbow. If the Fed has removed all the excuses it has to normalize, that’s not likely to happen. There’s widespread concern about high inflation now, but I think that concern is misplaced. Any big price increases we’re seeing are some combination of base-period effects and temporary supply-chain bottlenecks that are likely to be reversed. No need to worry, I think. The likelihood is that the Fed’s attempt to boost inflation above 2% will just lead to inflation short of 2%, because the FOMC cannot bring itself to normalize. And that’s OK, but not for the Fed people, if they’re honest about it.
But here’s a more serious problem. What the FOMC seems most concerned about is the state of the labor market over the next few years, and the implications that has for economic welfare. But that’s shortsighted. Given their apparent dedication to stabilization policy, you might think they would be concerned with setting themselves up for the next recession. Stabilization policy is about smoothing, and the effects of monetary policy on real economic activity are temporary. The accommodative effects of a given interest rate cut eventually disappear. So, reasonably timely normalization is important. There may be costs to hiking rates, but the costs are higher if another recession comes and normalization hasn’t happened yet. The Fed can’t send rates down without first increasing them. But people may say that central banks have many tools. Actually, I don’t think so. The key things a central bank does are to set a target for a short-term nominal interest rate, and to do crisis intervention. QE, forward guidance? Overrated, I think.