Is Finance Parasitic?

We all know what a parasite is: an organism that attaches itself to another organism, deriving nourishment from its host, and in so doing weakening the host, possibly making the host unviable and thereby undermining its own existence. Ayn Rand and her all too numerous acolytes were, and remain, obsessed with parasitism, considering every form of voluntary charity, and especially government assistance to the poor and needy, a form of parasitism whereby the undeserving weak live off of, and sap the strength and industry of, their betters: the able, the productive, and the creative.

In earlier posts, I have observed that much of what the financial industry does produces no net benefit to society, the gains of some coming at the expense of others. This insight was developed by Jack Hirshleifer in his classic 1971 paper “The Private and Social Value of Information and the Reward to Inventive Activity.” Financial trading to a large extent involves nothing but the exchange of existing assets, real or financial, and the profit made by one trader comes largely at the expense of the other party to the trade. Because the potential gain to one side of the transaction exceeds the net gain to society, there is a substantial incentive to devote resources to gaining any small and transient informational advantage that can help a trader buy or sell at the right time, making a profit at the expense of another. The social benefit from these valuable, but minimal and transitory, informational advantages is far less than the value of the resources devoted to obtaining them. Thus, much of what the financial sector does simply drains resources from the rest of society, resources that could be put to far better and more productive use in other sectors of the economy.
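Hirshleifer’s point is easy to see in a stylized numerical example; the numbers below are my own illustration, not his:

```python
# A stylized sketch (my own numbers, not Hirshleifer's) of the wedge between
# the private and the social value of a transient informational advantage.

private_gain = 1_000_000  # trading profit from learning news an instant early
social_gain = 10_000      # small gain from the asset being re-priced sooner

# Each of several traders will rationally spend up to the expected private
# gain racing to acquire the information first:
traders = 5
spend_per_trader = private_gain / traders  # expected winnings per trader
total_resources_burned = traders * spend_per_trader

print(total_resources_burned - social_gain)  # 990,000: net social waste
```

The race fully dissipates the private gain, while the social gain never comes close to covering the resources burned in the race.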

So I was really interested to see Timothy Taylor’s recent blog post about Luigi Zingales’s Presidential Address to the American Finance Association, in which Zingales, professor of Finance at the University of Chicago Business School, lectured his colleagues about taking a detached and objective view of the financial industry rather than acting as cheerleaders for it, as, he believes, they have been all too inclined to do. Rather than discussing the incentive of the financial industry to over-invest in research in search of transient informational advantages that can be exploited, or to invest billions in high-frequency trading cables to make transient informational advantages more readily exploitable, Zingales mentions a number of other ways in which the finance industry uses informational advantages to profit at the expense of the rest of society.

A couple of examples from Zingales.

Financial innovations. Every new product introduced by the financial industry is better understood by the supplier than the customer or client. How many clients or customers have been warned about the latent defects or risks in the products or instruments that they are buying? The doctrine of caveat emptor almost always applies, especially because the customers and clients are often considered to be informed and sophisticated. Informed and sophisticated? Perhaps, but that still doesn’t mean that there is no information asymmetry between such customers and the financial institution that creates financial innovations with the specific intent of exploiting the resulting informational advantage it gains over its clients.

As Zingales points out, we understand that doctors often exploit the informational asymmetry that they enjoy over their patients by overtreating, overmedicating, and overtesting their patients. They do so, notwithstanding the ethical obligations that they have sworn to observe when they become doctors. Are we to assume that the bankers and investment bankers and their cohorts in the financial industry, who have not sworn to uphold even minimal ethical standards, are any less inclined than doctors to exploit informational asymmetries that are no less extreme than those that exist between doctors and patients?

Another example. Payday loans are a routine part of life for many low-income people who live from paycheck to paycheck and are in constant danger of being drawn into a downward spiral of overindebtedness, rising interest costs, and financial ruin. Zingales points out that the ruinous effects of payday loans might be mitigated if borrowers chose installment loans instead of loans due in full at maturity. Unsophisticated borrowers seem to prefer single-repayment loans, even though such loans are in practice more likely to lead to disaster than installment loans. Because total interest paid is greater under single-payment loans, the payday-loan industry resists legislation requiring that payday loans be installment loans. Such legislation has been enacted in Colorado, with favorable results. Zingales sums up the results of recent research about payday loans:

Given such a drastic reduction in fees paid to lenders, it is entirely relevant to consider what happened to the payday lending supply. In fact, supply of loans increased. The explanation relies upon the elimination of two inefficiencies. First, less bankruptcies. Second, the reduction of excessive entry in the sector. Half of Colorado’s stores closed in the three years following the reform, but each remaining store served 80 percent more customers, with no evidence of a reduced access to funds. This result is consistent with Avery and Samolyk (2010), who find that states with no rate limits tend to have more payday loan stores per capita. In other words, when payday lenders can charge very high rates, too many lenders enter the sector, reducing the profitability of each one of them. Similar to the real estate brokers, in the presence of free entry, the possibility of charging abnormal profit margins lead to too many firms in the industry, each operating below the optimal scale (Flannery and Samolyk, 2007), and thus making only normal profits. Interestingly, the efficient outcome cannot be achieved without mandatory regulation. Customers who are charged the very high rates do not fully appreciate that the cost is higher than if they were in a loan product which does not induce the spiral of unnecessary loan float and thus higher default. In the presence of this distortion, lenders find the opportunity to charge very high fees to be irresistible, a form of catering products to profit from cognitive limitations of the customers (Campbell, 2006). Hence, the payday loan industry has excessive entry and firms operating below the efficient scale. Competition alone will not fix the problem, in fact it might make it worse, because payday lenders will compete in finding more sophisticated ways to charge very high fees to naïve customers, exacerbating both the over-borrowing and the excessive entry. Competition works only if we restrict the dimension in which competition takes place: if unsecured lending to lower income people can take place only in the form of installment loans, competition will lower the cost of these loans.
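The rollover arithmetic behind that conclusion is simple; here is a minimal sketch with hypothetical numbers of my own, not Zingales’s:

```python
# Hypothetical comparison of a single-repayment payday loan that keeps being
# rolled over versus a (still expensive) installment loan. Illustrative only.

principal = 300.0
fee_per_rollover = 45.0  # flat fee per two-week term
rollovers = 8            # borrower can't repay in full for about four months
single_payment_fees = fee_per_rollover * (1 + rollovers)

installment_months = 4
monthly_rate = 0.10      # crude flat-rate approximation, ignoring amortization
installment_cost = principal * monthly_rate * installment_months

print(single_payment_fees)  # 405.0 in fees -- more than the principal itself
print(installment_cost)     # 120.0
```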

One more example of my own. A favorite tactic of the credit-card industry is to offer customers zero-interest loans on transferred balances. Now you might think that banks were competing hard to drive down the excessive cost of borrowing incurred by the many credit-card holders for whom borrowing via their credit card is the best available way of obtaining unsecured credit. But you would be wrong. Credit-card issuers offer the zero-interest loans because a) they typically charge a 3 or 4 percent service charge off the top, b) they impose a $35 penalty for a late payment, and c) under the fine print of the loan agreement, a late payment terminates the promotional rate, raising the interest rate on the transferred balance to some exorbitant level in the range of 20 to 30 percent. Most customers, especially if they haven’t tried a balance transfer before, will not even read the fine print to know that a single late payment will result in a penalty and the loss of the promotional rate. But even if they are aware of the fine print, they will almost certainly underestimate the likelihood that they will sooner or later miss an installment-payment deadline. I don’t know whether any studies have looked into the profitability of promotional rates for credit-card issuers, but I suspect, given how widespread such offers are, that they are very profitable. Information asymmetry strikes again.
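To make the arithmetic concrete, here is a minimal sketch, with hypothetical numbers drawn from the ranges above, of what one late payment does to the cost of a “zero-interest” transfer:

```python
# Hypothetical balance-transfer arithmetic (illustrative numbers only,
# matching the ranges cited in the text).
balance = 5_000.00
transfer_fee = 0.04 * balance        # 3-4% service charge off the top
late_penalty = 35.00                 # one missed payment
penalty_apr = 0.25                   # promotional rate revoked: 20-30% APR
months_at_penalty_rate = 6           # balance carried half a year afterward

interest = balance * penalty_apr / 12 * months_at_penalty_rate
total_cost = transfer_fee + late_penalty + interest
print(f"Cost of the 'zero-interest' loan: ${total_cost:,.2f}")  # $860.00
```

On these assumptions, the “free” loan costs about $860 on a $5,000 balance over six months, roughly a 34 percent annualized rate.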

A New Paper on the Short, But Sweet, 1933 Recovery Confirms that Hawtrey and Cassel Got it Right

In a recent post, the indispensable Marcus Nunes drew my attention to a working paper by Andrew Jalil of Occidental College and Gisela Rua of the Federal Reserve Board, “Inflation Expectations and Recovery from the Depression in 1933: Evidence from the Narrative Record.” Subsequently, I noticed that Mark Thoma had also posted the abstract on his blog.

 Here’s the abstract:

This paper uses the historical narrative record to determine whether inflation expectations shifted during the second quarter of 1933, precisely as the recovery from the Great Depression took hold. First, by examining the historical news record and the forecasts of contemporary business analysts, we show that inflation expectations increased dramatically. Second, using an event-studies approach, we identify the impact on financial markets of the key events that shifted inflation expectations. Third, we gather new evidence—both quantitative and narrative—that indicates that the shift in inflation expectations played a causal role in stimulating the recovery.

There’s a lot of new and interesting stuff in this paper, even though the basic narrative framework goes back almost 80 years to the discussion of the 1933 recovery in Hawtrey’s Trade Depression and the Way Out. The paper highlights the importance of rising inflation (or price-level) expectations in generating the recovery, which started within a few weeks of FDR’s inauguration in March 1933. In the absence of the direct measures of inflation expectations now available, such as breakeven TIPS spreads or surveys of consumer and business expectations, Jalil and Rua document the sudden and sharp shift in expectations in three different ways.

First, they document a sharp spike in news coverage of inflation in April 1933. Second, they show an expectational shift toward inflation through a close analysis of the economic reporting and commentary in the Economist and in Business Week, providing a fascinating account of the evolution of FDR’s thinking and of how his economic policy was assessed in the period between the election in November 1932 and April 1933, when the gold standard was suspended. Just before the election, the Economist observed:

No well-informed man in Wall Street expects the outcome of the election to make much real difference in business prospects, the argument being that while politicians may do something to bring on a trade slump, they can do nothing to change a depression into prosperity (October 29, 1932)

On April 22, 1933, just after FDR took the US off the gold standard, the Economist commented:

As usual, Wall Street has interpreted the policy of the Washington Administration with uncanny accuracy. For a week or so before President Roosevelt announced his abandonment of the gold standard, Wall Street was “talking inflation.”

A third indication of increasing inflation expectations is drawn from five independent economic forecasters, all of whom began predicting inflation — some sooner than others — during the April-May time frame.

Jalil and Rua extend the important work of Daniel Nelson whose 1991 paper “Was the Deflation of 1929-30 Anticipated? The Monetary Regime as Viewed by the Business Press” showed that the 1929-30 downturn coincided with a sharp drop in price level expectations, providing powerful support for the Hawtrey-Cassel interpretation of the onset of the Great Depression.

Besides marshaling persuasive evidence from multiple sources that inflation expectations shifted in the spring of 1933, Jalil and Rua identify five key events or news shocks that focused attention on a changing policy environment that would lead to rising prices:

1. Abandonment of the Gold Standard and a Pledge by FDR to Raise Prices (April 19)

2. Passage of the Thomas Inflation Amendment to the Farm Relief Bill by the Senate (April 28)

3. Announcement of Open Market Operations (May 24)

4. Announcement that the Gold Clause Would Be Repealed and a Reduction in the New York Fed’s Rediscount Rate (May 26)

5. FDR’s Message to the World Economic Conference Calling for Restoration of the 1926 Price Level (June 19)

Jalil and Rua perform an event study and find that stock prices rose significantly and the dollar depreciated against gold and the pound sterling after each of these news shocks. They also discuss the macroeconomic effects of the shift in inflation expectations, showing that a standard macro model cannot account for the rapid 1933 recovery. Further, they scrutinize the claim by Friedman and Schwartz in their Monetary History of the United States that, based on the lack of evidence of any substantial increase in the quantity of money, “the economic recovery in the half-year after the panic owed nothing to monetary expansion.” Friedman and Schwartz note that, given the increase in prices and the more rapid increase in output, the velocity of circulation must have increased, without mentioning the role of rising inflation expectations in reducing the amount of cash (relative to income) that people wanted to hold.
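For readers unfamiliar with the method, an event study in its simplest form just compares returns in a window around each announcement date; here is a minimal sketch with made-up data (Jalil and Rua’s actual procedure is more elaborate):

```python
import pandas as pd

# Hypothetical daily index levels around one news shock (made-up numbers).
prices = pd.Series(
    [85.0, 86.1, 85.7, 89.9, 91.2, 92.0],
    index=pd.to_datetime(["1933-05-22", "1933-05-23", "1933-05-24",
                          "1933-05-25", "1933-05-26", "1933-05-27"]),
)
event_date = pd.Timestamp("1933-05-24")  # announcement of open market operations

returns = prices.pct_change()
window = returns[event_date : event_date + pd.Timedelta(days=2)]
print(window.sum())  # cumulative return in the post-event window: ~5.9%
```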

Jalil and Rua also offer a very insightful explanation for the remarkably rapid recovery in the April-July period, suggesting that the commitment to raise prices back to their 1926 levels encouraged businesses to hasten their responses to the prospect of rising prices, because prices would stop rising after they reached their target level.

The literature on price-level targeting has shown that, relative to inflation targeting, this policy choice has the advantage of removing more uncertainty in terms of the future level of prices. Under price-level targeting, inflation depends on the relationship between the current price level and its target. Inflation expectations will be higher the lower is the current price level. Thus, Roosevelt’s commitment to a price-level target caused market participants to expect inflation until prices were back at that higher set target.
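The logic of the quoted passage is easy to put in numbers; a minimal sketch, with illustrative price levels of my own choosing:

```python
# Under a price-level target, expected inflation is pinned down by the gap
# between the current price level and the target level.
def expected_inflation(p_now, p_target, years):
    """Average annual inflation needed to reach the target in `years` years."""
    return (p_target / p_now) ** (1 / years) - 1

# Illustrative numbers: prices 25% vs. 10% below the 1926 target level.
print(expected_inflation(p_now=75, p_target=100, years=3))  # ~10.1% per year
print(expected_inflation(p_now=90, p_target=100, years=3))  # ~3.6% per year
```

The lower the current price level, the higher expected inflation, and expectations fall back toward zero as prices approach the target, which is exactly why the commitment encouraged businesses to act quickly.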

A few further comments before closing. Jalil and Rua briefly discuss whether factors other than increasing inflation expectations could account for the rapid recovery. The only alternative factor that they mention is exit from the gold standard. This discussion is somewhat puzzling inasmuch as they had already noted that exit from the gold standard was one of the five news shocks (and by all odds the most important one) causing the increase in inflation expectations. They go on to point out that no other country that left the gold standard during the Great Depression experienced anywhere near as rapid a recovery as did the US. Because international trade accounted for a relatively small share of the US economy, they argue that the stimulus to production by US producers of tradable goods from a depreciating dollar would not have been all that great. But that just shows that the macroeconomic significance of abandoning the gold standard lay not in shifting the real exchange rate, but in raising the price level. The US recovery after leaving the gold standard was so much more powerful than recoveries elsewhere because, at least for a short time, the US sought to use monetary policy aggressively to raise prices, while other countries were content merely to stop the deflation that the gold standard had inflicted on them, making no attempt to reverse the deflation that had already occurred.

Jalil and Rua conclude with a discussion of possible explanations for why the April-July recovery seemed to peter out suddenly at the end of July. They offer two possible explanations. First, passage of the National Industrial Recovery Act in July was a negative supply shock; second, the rapid recovery between April and July persuaded FDR that further inflation was no longer necessary, with actual inflation and expected inflation both subsiding as a result. These are obviously not competing explanations. Indeed, the NIRA may itself have been another reason why FDR no longer felt inflation was necessary, as indicated by this news story in the New York Times:

The government does not contemplate entering upon inflation of the currency at present and will issue cheaper money only as a last resort to stimulate trade, according to a close adviser of the President who discussed financial policies with him this week. This official asserted today that the President was well satisfied with the business improvement and the government’s ability to borrow money at cheap rates. These are interpreted as good signs, and if the conditions continue as the recovery program broadened, it was believed no real inflation of the currency would be necessary. (“Inflation Put Off, Officials Suggest,” New York Times, August 4, 1933)

If only . . .

Cluelessness about Strategy, Tactics and Discretion

In his op-ed in the weekend Wall Street Journal, John Taylor restates his confused opposition to what Ben Bernanke calls the policy of constrained discretion followed by the Federal Reserve during Bernanke’s tenure at the Fed, as a member of the Board of Governors under Alan Greenspan from 2002 to 2005 and as Chairman from 2006 to 2014. Taylor has been arguing for the Fed to adopt what he calls the “rules-based monetary policy” supposedly practiced by the Fed while Paul Volcker was chairman (at least from 1981 onwards) and for most of Alan Greenspan’s tenure, until 2003, when, according to Taylor, the Fed abandoned the “rules-based monetary policy” that it had followed since 1981. In a recent post, I explained why Taylor’s description of Fed policy under Volcker was historically inaccurate and why his critique of recent Fed policy is both historically inaccurate and conceptually incoherent.

Taylor denies that his steady refrain calling for a “rules-based policy” (i.e., the implementation of some version of his beloved Taylor Rule) is intended “to chain the Fed to an algebraic formula;” he just thinks that the Fed needs “an explicit strategy for setting the instruments” of monetary policy. Now I agree that one ought not to set a policy goal without a strategy for achieving the goal, but Taylor is saying that he wants to go far beyond a strategy for achieving a policy goal; he wants a strategy for setting instruments of monetary policy, which seems like an obvious confusion between strategy and tactics, ends and means.

Instruments are the means by which a policy is implemented. Setting a policy goal can be considered a strategic decision; setting a policy instrument a tactical decision. But Taylor is saying that the Fed should have a strategy for setting the instruments with which it implements its strategic policy.  (OED, “instrument – 1. A thing used in or for performing an action: a means. . . . 5. A tool, an implement, esp. one used for delicate or scientific work.”) This is very confused.

Let’s be very specific. The Fed, for better or for worse – I think for worse — has made a strategic decision to set a 2% inflation target. Taylor does not say whether he supports the 2% target; his criticism is that the Fed is not setting the instrument – the Fed Funds rate – that it uses to hit the 2% target in accordance with the Taylor rule. He regards the failure to set the Fed Funds rate in accordance with the Taylor rule as a departure from a rules-based policy. But the Fed has continually undershot its 2% inflation target for the past three years. So the question naturally arises: if the Fed had raised the Fed Funds rate to the level prescribed by the Taylor rule, would the Fed have succeeded in hitting its inflation target? If Taylor thinks that a higher Fed Funds rate than has prevailed since 2012 would have led to higher inflation than we experienced, then there is something very wrong with the Taylor rule, because, under the Taylor rule, the Fed Funds rate is positively related to the difference between the actual inflation rate and the target rate. If a Fed Funds rate higher than the rate set for the past three years would have led, as the Taylor rule implies, to lower inflation than we experienced, following the Taylor rule would have meant disregarding the Fed’s own inflation target. How is that consistent with a rules-based policy?
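For reference, here is the canonical Taylor (1993) rule, which makes the positive relation between the prescribed rate and the inflation gap explicit; a minimal sketch:

```python
def taylor_rule(inflation, output_gap, r_star=2.0, target=2.0):
    """Canonical Taylor (1993) rule: the prescribed funds rate rises half a
    point for every point by which inflation exceeds its target."""
    return inflation + r_star + 0.5 * (inflation - target) + 0.5 * output_gap

# With a zero output gap, below-target inflation lowers the prescribed rate:
print(taylor_rule(inflation=1.5, output_gap=0.0))  # 3.25
print(taylor_rule(inflation=2.5, output_gap=0.0))  # 4.75
```

Under the rule, a higher prescribed rate goes hand in hand with above-target inflation, which is exactly the tension described above when inflation is persistently below target.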

It is worth noting that the practice of defining a rule in terms of a policy instrument rather than in terms of a policy goal did not originate with John Taylor; it goes back to Milton Friedman who somehow convinced a generation of monetary economists that the optimal policy for the Fed would be to target the rate of growth of the money supply at a k-percent annual rate. I have devoted other posts to explaining the absurdity of Friedman’s rule, but the point that I want to emphasize now is that Friedman, for complicated reasons which I think (but am not sure) that I understand, convinced himself that (classical) liberal principles require that governments and government agencies exercise their powers only in accordance with explicit and general rules that preclude or minimize the exercise of discretion by the relevant authorities.

Friedman’s confusions about his k-percent rule were deep and comprehensive, as a quick perusal of Friedman’s chapter 3 in Capitalism and Freedom, “The Control of Money,” amply demonstrates. In practice, the historical gold standard was a mixture of gold coins and privately issued banknotes and deposits, as well as government banknotes, that did not function particularly well, requiring frequent and significant government intervention. Unlike a pure gold currency, in which, given the high cost of extracting gold from the ground, the quantity of gold money would change only gradually, a mixed system of gold coin, banknotes, and deposits was subject to large and destabilizing fluctuations in quantity. So, in Friedman’s estimation, the liberal solution was to design a monetary system such that the quantity of money would expand at a slow and steady rate, providing the best of all possible worlds: the stability of a pure gold standard and the minimal resource cost of a paper currency. In making this argument, as I have shown in an earlier post, Friedman displayed a basic misunderstanding of what constituted the gold standard as it was historically practiced, especially during its heyday from about 1880 to the outbreak of World War I, believing that the crucial characteristic of the gold standard was the limitation that it imposed on the quantity of money, when in fact the key characteristic of the gold standard is that it forces the value of money – regardless of its material content – to be equal to the value of a specified quantity of gold. (This misunderstanding – the focus on control of the quantity of money as the key task of monetary policy – led to Friedman’s policy instrumentalism, i.e., setting a policy rule in terms of the quantity of money.)

Because Friedman wanted to convince his friends in the Mont Pelerin Society (his egregious paper “Real and Pseudo Gold Standards” was originally presented at a meeting of the Mont Pelerin Society), who largely favored the gold standard, that (classical) liberal principles did not necessarily entail restoration of the gold standard, he emphasized a distinction between what he called the objectives of monetary policy and the instruments of monetary policy. In the classic discussion of the issue, an essay called “Rules versus Authorities in Monetary Policy,” Friedman’s teacher at Chicago, Henry Simons, had also tried to formulate a rule that would be entirely automatic, operating insofar as possible in a mechanical fashion, even considering the option of stabilizing the quantity of money. But Simons correctly understood that any operational definition of money is necessarily arbitrary, meaning that a bright line must be drawn between what counts as money under the definition and what does not, even though the practical difference between what lies on one side of the line and what lies on the other will be slight. Thus, the existence of near-moneys would make control of any monetary aggregate a futile exercise. Simons therefore defined a monetary rule in terms of an objective of monetary policy: stabilizing the price level. Friedman did not want to settle for such a rule, because he understood that stabilizing the price level has its own ambiguities, there being many ways to measure the price level, as well as theoretical problems in constructing index numbers (the composition of, and weights assigned to, the components of an index being subject to constant change) that make any price index inexact. Given Friedman’s objective — demonstrating that there is a preferable alternative to the gold standard evaluated in terms of (classical) liberal principles — a price-level rule lacked the automatism that Friedman felt was necessary to trump the gold standard as a monetary rule.

Friedman therefore made his case for a monetary rule in terms of the quantity of money, ignoring Simons’s powerful arguments against trying to control the quantity of money, stating the rule in general terms and treating the selection of an operational definition of money as a mere detail. Here is how Friedman put it:

If a rule is to be legislated, what rule should it be? The rule that has most frequently been suggested by people of a generally liberal persuasion is a price level rule; namely, a legislative directive to the monetary authorities that they maintain a stable price level. I think this is the wrong kind of a rule [my emphasis]. It is the wrong kind of a rule because it is in terms of objectives that the monetary authorities do not have the clear and direct power to achieve by their own actions. It consequently raises the problem of dispersing responsibilities and leaving the authorities too much leeway.

As an aside, I note that Friedman provided no explanation of why such a rule would disperse responsibilities. Who besides the monetary authority did Friedman think would have responsibility for controlling the price level under such a rule? Whether such a rule would give the monetary authorities “too much leeway” is of course an entirely different question.

There is unquestionably a close connection between monetary actions and the price level. But the connection is not so close, so invariable, or so direct that the objective of achieving a stable price level is an appropriate guide to the day-to-day activities of the authorities. (p. 53)

Friedman continues:

In the present state of our knowledge, it seems to me desirable to state the rule in terms of the behavior of the stock of money. My choice at the moment would be a legislated rule instructing the monetary authority to achieve a specified rate of growth in the stock of money. For this purpose, I would define the stock of money as including currency outside commercial banks plus all deposits of commercial banks. I would specify that the Reserve System shall see to it [Friedman’s being really specific there, isn’t he?] that the total stock of money so defined rises month by month, and indeed, so far as possible day by day, at an annual rate of X per cent, where X is some number between 3 and 5. (p. 54)

Friedman, of course, deliberately ignored, or, more likely, simply did not understand, that the quantity of deposits created by the banking system, under whatever definition, is no more under the control of the Fed than the price level. So the whole premise of Friedman’s money-supply rule – that it was formulated in terms of an instrument under the immediate control of the monetary authority – was based on the fallacy that the quantity of money is an instrument that the monetary authority can control at will.
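The point can be put in terms of the textbook money-multiplier identity, under which the money stock depends on the public’s currency preferences and the banks’ reserve behavior, neither of which the Fed sets; a minimal sketch:

```python
def money_stock(base, c, r):
    """Textbook identity M = (1 + c) / (c + r) * B, where c is the public's
    currency/deposit ratio and r is the banks' reserve/deposit ratio."""
    return (1 + c) / (c + r) * base

B = 100.0
print(money_stock(B, c=0.10, r=0.10))  # 550.0
print(money_stock(B, c=0.25, r=0.15))  # 312.5 -- same base, very different M
```

Even holding the base fixed, shifts in c and r — private portfolio decisions — move the money stock around, so a month-by-month, let alone day-by-day, growth target is not within the authority’s direct power.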

I therefore note, as a further aside, that in his latest Wall Street Journal op-ed, Taylor responded to Bernanke’s observation that the Taylor rule becomes inoperative when the rule implies an interest-rate target below zero. Taylor disagrees:

The zero bound is not a new problem. Policy rule design research took that into account decades ago. The default was to move to a stable money growth regime not to massive asset purchases.

Taylor may regard the stable money growth regime as an acceptable default rule when the Taylor rule is sidelined at the zero lower bound. But if so, he is caught in a trap of his own making, because, whether he admits it or not, the quantity of money, unlike the Fed Funds rate, is not an instrument under the direct control of the Fed. If Taylor rejects an inflation target as a monetary rule, because it grants too much discretion to the monetary authority, then he must also reject a stable money growth rule, because it allows at least as much discretion as does an inflation target. Indeed, if the past 35 years have shown us anything it is that the Fed has much more control over the price level and the rate of inflation than it has over the quantity of money, however defined.

This post is already too long, but I think it’s important to say something about discretion, which was such a bugaboo for Friedman, and remains one for Taylor. But the concept of discretion is not as simple as it is often made out to be, especially by Friedman and Taylor, and if you pay careful attention to what the word means in ordinary usage, you will see that discretion does not necessarily, or even usually, refer to an unchecked authority to act as one pleases. Rather, it suggests that a certain authority to make a decision is being granted to a person or an official, but that the decision is to be made in light of certain criteria or principles that, while not fully explicit, still inform and constrain the decision.

The best analysis of what is meant by discretion that I know of is by Ronald Dworkin in his classic essay “Is Law a System of Rules?” Dworkin discusses the meaning of discretion in the context of a judge deciding a “hard case,” a case in which conflicting rules of law seem to be applicable, or a case in which none of the relevant rules seems to fit the facts of the case. Such a judge is said to exercise discretion, because his decision is not straightforwardly determined by the existing set of legal rules. Legal positivists, against whom Dworkin was arguing, would say that the judge is able, and called upon, to exercise his discretion in deciding the case, meaning that, by deciding the case, the judge is simply imposing his will. It is something like this positivist view that underlies Friedman’s intolerance for discretion.

Countering the positivist view, Dworkin considers the example of a sergeant ordered by his lieutenant to take his five most experienced soldiers on patrol, and reflects on how to interpret an observer’s statement about the orders: “the orders left the sergeant a great deal of discretion.” It is clear that, in carrying out his orders, the sergeant is called upon to exercise his judgment, because he is not given a metric for measuring the experience of his soldiers. But that does not mean that when he chooses five soldiers to go on patrol, he is engaging in an exercise of will. The decision can be carried out with good judgment or with bad judgment, but it is an exercise of judgment, not will, just as a judge, in deciding a hard case, is exercising his judgment, on a more sophisticated level to be sure than the sergeant choosing soldiers, not just indulging his preferences.

If the Fed is committed to an inflation target, then, by choosing a setting for its instrumental target, the Fed Funds rate, the Fed is exercising judgment in light of its policy goals. That exercise of judgment in pursuit of a policy goal is very different from the arbitrary behavior of the Fed in the 1970s when its decisions were taken with no clear price-level or inflation target and with no clear responsibility for hitting the target.

Ben Bernanke has described the monetary regime in which the Fed’s decisions are governed by an explicit inflation target and a subordinate commitment to full employment as one of “constrained discretion.” When using this term, Taylor always encloses it in quotation marks, apparently to suggest that the term is an oxymoron. But that is yet another mistake; “constrained discretion” is no oxymoron. Indeed, it is a pleonasm, the exercise of discretion usually being understood to mean not an unconstrained exercise of will, but an exercise of judgment in the light of relevant goals, policies, and principles.

PS I apologize for not having responded to comments recently. I will try to catch up later this week.

Roger and Me

Last week Roger Farmer wrote a post elaborating on a comment that he had left to my post on Price Stickiness and Macroeconomics. Roger’s comment is aimed at this passage from my post:

[A]lthough price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

Here’s Roger’s comment:

I have a somewhat different take. I like Lucas’ insistence on equilibrium at every point in time as long as we recognize two facts. 1. There is a continuum of equilibria, both dynamic and steady state and 2. Almost all of them are Pareto suboptimal.

I made the following reply to Roger’s comment:

Roger, I think equilibrium at every point in time is ok if we distinguish between temporary and full equilibrium, but I don’t see how there can be a continuum of full equilibria when agents are making all kinds of long-term commitments by investing in specific capital. Having said that, I certainly agree with you that expectational shifts are very important in determining which equilibrium the economy winds up at.

To which Roger responded:

I am comfortable with temporary equilibrium as the guiding principle, as long as the equilibrium in each period is well defined. By that, I mean that, taking expectations as given in each period, each market clears according to some well defined principle. In classical models, that principle is the equality of demand and supply in a Walrasian auction. I do not think that is the right equilibrium concept.

Roger didn’t explain – at least not here; he probably has elsewhere – exactly why he thinks equality of demand and supply in a Walrasian auction is not the right equilibrium concept. I would be interested in hearing his reasons. Perhaps he will clarify his thinking for me.

Hicks wanted to separate ‘fix price markets’ from ‘flex price markets’. I don’t think that is the right equilibrium concept either. I prefer to use competitive search equilibrium for the labor market. Search equilibrium leads to indeterminacy because there are not enough prices for the inputs to the search process. Classical search theory closes that gap with an arbitrary Nash bargaining weight. I prefer to close it by making expectations fundamental [a proposition I have advanced on this blog].

I agree that the Hicksian distinction between fix-price markets and flex-price markets doesn’t cut it. Nevertheless, it’s not clear to me that a Thompsonian temporary-equilibrium model, in which expectations determine the reservation wage at which workers will accept employment (i.e., the labor-supply curve conditional on the expected wage), doesn’t work as well as a competitive search equilibrium in this context.

Once one treats expectations as fundamental, there is no longer a multiplicity of equilibria. People act in a well defined way and prices clear markets. Of course ‘market clearing’ in a search market may involve unemployment that is considerably higher than the unemployment rate that would be chosen by a social planner. And when there is steady state indeterminacy, as there is in my work, shocks to beliefs may lead the economy to one of a continuum of steady state equilibria.

There is an equilibrium for each set of expectations (with the understanding, I presume, that expectations are always uniform across agents). The problem that I see with this is that there doesn’t seem to be any interaction between outcomes and expectations. Expectations are always self-fulfilling, and changes in expectations are purely exogenous. But in a classic downturn, the process seems to be cumulative, the contraction seemingly feeding on itself, causing a spiral of falling prices, declining output, rising unemployment, and increasing pessimism.

That brings me to the second part of an equilibrium concept. Are expectations rational in the sense that subjective probability measures over future outcomes coincide with realized probability measures? That is not a property of the real world. It is a consistency property for a model.

Yes; I agree totally. Rational expectations is best understood as a property of a model: if agents expect an equilibrium price vector, the solution of the model is that same equilibrium price vector. It is not a substantive theory of expectation formation; the model does not posit that agents actually foresee the equilibrium price vector correctly – that would be an extreme and unrealistic assumption about how the world actually works, IMHO. The distinction is crucial, but it seems to me that it is largely ignored in practice.

And yes: if we plop our agents down into a stationary environment, their beliefs should eventually coincide with reality.

This seems to me a plausible-sounding assumption for which there is no theoretical proof and, in view of Roger’s recent discussion of unit roots, dubious empirical support.

If the environment changes in an unpredictable way, it is the belief function, a primitive of the model, that guides the economy to a new steady state. And I can envision models where expectations on the transition path are systematically wrong.

I need to read Roger’s papers about this, but I am left wondering by what mechanism the belief function guides the economy to a new steady state. It seems to me that the result requires some pretty strong assumptions.

The recent ‘nonlinearity debate’ on the blogs confuses the existence of multiple steady states in a dynamic model with the existence of multiple rational expectations equilibria. Nonlinearity is neither necessary nor sufficient for the existence of multiplicity. A linear model can have a unique indeterminate steady state associated with an infinite dimensional continuum of locally stable rational expectations equilibria. A linear model can also have a continuum of attracting points, each of which is an equilibrium. These are not just curiosities. Both of these properties characterize modern dynamic equilibrium models of the real economy.

I’m afraid that I don’t quite get the distinction that is being made here. Does “multiple steady states in a dynamic model” mean multiple equilibria of the full Arrow-Debreu general equilibrium model? And does “multiple rational-expectations equilibria” mean multiple equilibria conditional on the expectations of the agents? And I also am not sure what the import of this distinction is supposed to be.

My further question is: how does all of this relate to Leijonhufvud’s idea of the corridor, which Roger has endorsed? My own understanding of what Axel means by the corridor is that the corridor has certain stability properties that keep the economy from careening out of control, i.e., becoming subject to a cumulative dynamic process that does not lead the economy back to the neighborhood of a stable equilibrium. But if there is a continuum of attracting points, each of which is an equilibrium, how could any of those points be understood to be outside the corridor?

Anyway, those are my questions. I am hoping that Roger can enlighten me.

What Is the Historically Challenged, Rule-Worshipping John Taylor Talking About?

A couple of weeks ago, I wrote a post chiding John Taylor for his habitual verbal carelessness. As if that were not enough, Taylor, in a recent talk at the IMF, appearing on a panel on monetary policy with former Fed Chairman Ben Bernanke and the former head of the South African central bank, Gill Marcus, extends his trail of errors into new terrain: historical misstatement. Tony Yates and Paul Krugman have already subjected Taylor’s talk to well-deserved criticism for its conceptual confusion, but I want to focus on the outright historical errors Taylor blithely makes in his talk – a talk noteworthy, apart from its conceptual confusion and historical misstatements, for the incessant repetition of the meaningless epithet “rules-based,” as if he were a latter-day Homeric rhapsodist incanting a sacred text.

Taylor starts by offering his own “mini history of monetary policy in the United States” since the late 1960s.

When I first started doing monetary economics . . ., monetary policy was highly discretionary and interventionist. It went from boom to bust and back again, repeatedly falling behind the curve, and then over-reacting. The Fed had lofty goals but no consistent strategy. If you measure macroeconomic performance as I do by both price stability and output stability, the results were terrible. Unemployment and inflation both rose.

What Taylor means by “interventionist,” other than establishing that he is against it, is not clear. Nor is the meaning of “bust” in this context. The recession of 1970 was perhaps the mildest of the entire post-World War II era, and the 1974-75 recession, though certainly severe, was largely the result of a supply shock and politically imposed wage and price controls, exacerbated by monetary tightening. (See my post about 1970s stagflation.) Taylor talks about the Fed’s lofty goals, but doesn’t say what they were. In fact, in the 1970s, the Fed was disclaiming responsibility for inflation, and Arthur Burns, a supposedly conservative Republican economist appointed by Nixon to be Fed Chairman, actually promoted what was then called an “incomes policy,” thereby enabling and facilitating Nixon’s infamous wage-and-price controls. The Fed’s job was to keep aggregate demand high, and, in the widely held view at the time, it was up to the politicians to keep business and labor from getting too greedy and causing inflation.

Then in the early 1980s policy changed. It became more focused, more systematic, more rules-based, and it stayed that way through the 1990s and into the start of this century.

Yes, in the early 1980s, policy did change, and it did become more focused, and for a short time – about a year and a half – it did become more rules-based. (I have no idea what “systematic” means in this context.) And the result was the sharpest and longest post-World War II downturn until the Little Depression. Policy changed, because, under Volcker, the Fed took ownership of inflation. It became more rules-based, because, under Volcker, the Fed attempted to follow a modified sort of Monetarist rule, seeking to keep the growth of the monetary aggregates within a pre-determined target range. I have explained in my book and in previous posts (e.g., here and here) why the attempt to follow a Monetarist rule was bound to fail and why the attempt would have perverse feedback effects, but others, notably Charles Goodhart (discoverer of Goodhart’s Law), had identified the problem even before the Fed adopted its misguided policy. The recovery did not begin until the summer of 1982 after the Fed announced that it would allow the monetary aggregates to grow faster than the Fed’s targets.

So the success of Fed monetary policy under Volcker can properly be attributed a) to the Fed’s taking ownership of inflation and b) to its decision to abandon the rules-based policy urged on it by Milton Friedman and his Monetarist acolytes like Allan Meltzer, whom Taylor now cites approvingly for supporting rules-based policies. The only monetary-policy rule that the Fed ever adopted under Volcker having been scrapped before the recovery from the 1981-82 recession began, the notion that the Great Moderation was ushered in by the Fed’s adoption of a “rules-based” policy is a total misrepresentation.

But Taylor is not done.

Few complained about spillovers or beggar-thy-neighbor policies during the Great Moderation.  The developed economies were effectively operating in what I call a nearly international cooperative equilibrium.

Really! Has Professor Taylor, who served as Under Secretary of the Treasury for International Affairs ever heard of the Plaza and the Louvre Accords?

The Plaza Accord or Plaza Agreement was an agreement between the governments of France, West Germany, Japan, the United States, and the United Kingdom, to depreciate the U.S. dollar in relation to the Japanese yen and German Deutsche Mark by intervening in currency markets. The five governments signed the accord on September 22, 1985 at the Plaza Hotel in New York City. (“Plaza Accord” Wikipedia)

The Louvre Accord was an agreement, signed on February 22, 1987 in Paris, that aimed to stabilize the international currency markets and halt the continued decline of the US Dollar caused by the Plaza Accord. The agreement was signed by France, West Germany, Japan, Canada, the United States and the United Kingdom. (“Louvre Accord” Wikipedia)

The chart below shows the fluctuation in the trade weighted value of the US dollar against the other major trading currencies since 1980. Does it look like there was a nearly international cooperative equilibrium in the 1980s?

[Chart: trade-weighted value of the US dollar against the major trading currencies, 1980 onward]

But then there was a setback. The Fed decided to hold the interest rate very low during 2003-2005, thereby deviating from the rules-based policy that worked well during the Great Moderation.  You do not need policy rules to see the change: With the inflation rate around 2%, the federal funds rate was only 1% in 2003, compared with 5.5% in 1997 when the inflation rate was also about 2%.

Well, in 1997 the expansion was six years old and the unemployment rate was under 5% and falling. In 2003, the expansion was barely under way and unemployment was rising above 6%.

I could provide other dubious historical characterizations that Taylor makes in his talk, but I will just mention a few others relating to the Volcker episode.

Some argue that the historical evidence in favor of rules is simply correlation not causation.  But this ignores the crucial timing of events:  in each case, the changes in policy occurred before the changes in performance, clear evidence for causality.  The decisions taken by Paul Volcker came before the Great Moderation.

Yes, and as I pointed out above, inflation came down when Volcker and the Fed took ownership of inflation and were willing to tolerate, or inflict, sufficient pain on the real economy to convince the public that the Fed was serious about bringing the rate of inflation down to roughly 4%. But the recovery and the Great Moderation did not begin until the Fed renounced the only rule that it had ever adopted, namely targeting the rate of growth of the monetary aggregates. The Fed, under Volcker, never even adopted an explicit inflation target, much less a specific rule for setting the Federal Funds rate. The Taylor rule was just an ex post rationalization of what the Fed had done by instinct.

Another point relates to the zero bound. Wasn’t that the reason that the central banks had to deviate from rules in recent years? Well it was certainly not a reason in 2003-2005 and it is not a reason now, because the zero bound is not binding. It appears that there was a short period in 2009 when zero was clearly binding. But the zero bound is not a new thing in economics research. Policy rule design research took that into account long ago. The default was to move to a stable money growth regime not to massive asset purchases.

OMG! Is Taylor’s preferred rule at the zero lower bound the stable money growth rule that Volcker tried, but failed, to implement in 1981-82? Is that the lesson that Taylor wants us to learn from the Volcker era?

Some argue that rules based policy for the instruments is not needed if you have goals for the inflation rate or other variables. They say that all you really need for effective policy making is a goal, such as an inflation target and an employment target. The rest of policymaking is doing whatever the policymakers think needs to be done with the policy instruments. You do not need to articulate or describe a strategy, a decision rule, or a contingency plan for the instruments. If you want to hold the interest rate well below the rule-based strategy that worked well during the Great Moderation, as the Fed did in 2003-2005, then it’s ok as long as you can justify it at the moment in terms of the goal.

This approach has been called “constrained discretion” by Ben Bernanke, and it may be constraining discretion in some sense, but it is not inducing or encouraging a rule as a “rules versus discretion” dichotomy might suggest.  Simply having a specific numerical goal or objective is not a rule for the instruments of policy; it is not a strategy; it ends up being all tactics.  I think the evidence shows that relying solely on constrained discretion has not worked for monetary policy.

Taylor wants a rule for the instruments of policy. Well, although Taylor will not admit it, a rule for the instruments of policy is precisely what Volcker tried to implement in 1981-82 when he was trying — and failing — to target the monetary aggregates, thereby driving the economy into a rapidly deepening recession, before escaping, by scrapping his monetary growth targets, from the positive-feedback loop in which he and the economy were trapped. Since 2009, Taylor has been calling for the Fed to raise the currently targeted instrument, the Fed Funds rate, even though inflation has been below the Fed’s 2% target almost continuously for the past three years. Not only does Taylor want to target the instrument of policy, he wants the instrument target to preempt the policy target. If that is not all tactics and no strategy, I don’t know what is.

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when it was published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now, to be fair, Lucas’s outrage, though misplaced, was understandable, at least if one understands that Lucas was so offended by the ironic tone in which Ball and Mankiw cast themselves as defenders of traditional macroeconomics – including both Keynesians and Monetarists – against the onslaught of “heretics” like Lucas, Sargent, Kydland, and Prescott, that he just stopped reading after the first few pages and then, in a fit of righteous indignation, wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science – as if that were the real message of the paper. Not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many explanations, including those offered by Ball and Mankiw, for price stickiness. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets — well, not quite all other markets, but I will ignore that nuance – are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may not, even under a tatonnement process in which no trading is allowed at disequilibrium prices, lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so that the “mechanical” forces tending an economy toward equilibrium are even weaker than the standard analysis of price adjustment would suggest.
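Here is a minimal sketch of the single-market case, with made-up linear curves of my own; the rule converges here, but, as just noted, nothing guarantees convergence once markets interact – Scarf’s well-known example has tatonnement cycling forever in an exchange economy with only three goods:

```python
# Single-market tatonnement: raise price on excess demand, cut it on excess
# supply. Made-up linear curves; the equilibrium price is 4.

def excess_demand(p):
    demand = 10 - p  # downward-sloping demand
    supply = 2 + p   # upward-sloping supply
    return demand - supply

p = 1.0
for _ in range(100):
    p += 0.1 * excess_demand(p)  # classic price-adjustment step

print(round(p, 3))  # 4.0 -- converges in one market, but this result
                    # does not generalize to many interdependent markets
```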

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations for future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.

Of course, for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

JKH on the Keynesian Cross and Accounting Identities

Since beginning this series of posts about accounting identities and their role in the simple Keynesian model, I have received a lot of comments from various commenters, but none has been more persistent, penetrating, and patient in his criticisms than JKH. He has forced me to think more carefully than I had ever done before about my objections to forcing the basic Keynesian model to conform to the standard national income accounting identities. So, although we have not (yet?) reached common ground about how to understand the simple Keynesian model, my own understanding of how the model works (or doesn’t) is clearer than it was when the series started, and I am grateful to JKH for engaging me in this discussion, even though it has gone on a lot longer than I expected, or really wanted, it to.

In response to my previous post in the series, JKH offered a lengthy critical response. Finding his response difficult to follow, I wrote a rejoinder, which prompted JKH to write a series of further comments. Being preoccupied with a couple of other posts and with life in general, I was unable to respond to JKH until now, and given the delay, I decided to respond in a separate post. I start with JKH’s explanation of how an increase in investment spending is accounted for.

First, the investment injection creates income that accrues to the factors of production – labor and capital. This works through cost accounting. The price at which the investment good is sold covers all costs – including the cost of capital. That said, the price may not cover the theoretical “hurdle rate” for the cost of capital. But that is a technical detail. The equity holders earn some sort of actual residual return, positive or negative. So in the more general sense, the actual cost of capital is accounted for.

So the investment injection creates an equivalent amount of income.

No one denies that investment expenditure will generate an equivalent amount of income; what is questionable is whether the income accrues to factors of production instantaneously. JKH maintains that cost accounting ensures that the accrual is instantaneous, but the recording of a bookkeeping entry is not the same as the receipt of income by households, whose consumption and saving decisions are the key determinant of income adjustments in the Keynesian model. Note the tacit assumption in the sentence immediately following.

Consider the effect at the moment the income is fully accrued to the factors of production – before anything else happens.

I understand this to mean that income accrues to factors of production the instant expenditure is booked by the manufacturer of the investment goods; otherwise, I don’t understand why this occurs “before anything else happens.” In a numerical example, JKH posits an increase in investment spending of 100, which triggers added production of 100. For purposes of this discussion, I stipulate that there is no lag between expenditure and output, but I don’t accept that income must accrue to workers and owners of the firm instantaneously as output occurs. Most workers are paid per unit of time, wages being an hourly rate based on the number of hours credited per pay period, and salaries being a fixed amount per pay period. So there is no immediate and direct relationship between worker input into the production process and the remuneration received. The additional production associated with the added investment expenditure may or may not be associated with additional payments to labor, depending on how much slack capacity is available to firms and on how the remuneration of workers employed in producing the investment goods is determined.

That amount of income must be saved by the macroeconomy – other things equal. We know this because no new consumer goods or services are produced in this initial standalone scenario of a new investment injection. Therefore, given that saving in the generic sense is income not used to purchase consumer goods and services, this new income created by an assumed investment injection must be saved in the first instance.

It is quite conceivable (especially if there is unused capacity available to the firm) that producing new investment goods will not change the total remuneration received by (or owed to) workers in the current period, all additional revenue collected by the firm accruing entirely to its owners, revenue that might not even be included in the next scheduled dividend payment to shareholders. So I am not persuaded that it is unreasonable to assume a lag between expenditure on goods and services and the accrual of income to factors of production. At any rate, whether the firm’s revenue is instantaneously transmuted into household income does not seem to be a question that can be answered in only one way.

What the macroeconomy “must” do is an interesting question, but in the basic Keynesian model, income is earned by households, and it is households, not an abstraction called the macroeconomy, that decide how much to consume and how much to save out of their income. So, in the Keynesian model, regardless of the accounting identities, the relevant saving activity — the saving activity specified by the marginal propensity to save — is the saving of households. That doesn’t mean that the model cannot be extended or reconstructed to allow for saving to be carried out by business firms or by other entities, but that is not how the model, at its most basic level, is set up.

So at this incipient stage before the multiplier process starts, S equals I. That’s before the marginal propensity to consume or save is in motion.

One’s eyes may roll at this point, since the operation of the MPC includes the complementary MPS, and the MPS is a saving function that also operates as the multiplier iterates with successive waves of income creation and consumption.

I understand these two sentences to be an implicit concession that the basic Keynesian model is not being presented in the way it is normally presented in textbooks, a concession that accords with my view that the basic Keynesian model does not always dovetail with the national income identities. Lipsey and I say: don’t impose the accounting identities on the Keynesian model when they are at odds; JKH says reconfigure the basic Keynesian model so that it is consistent with the accounting identities. Where JKH and I may perhaps agree is that the standard textbook story about the adjustment process following a change in spending parameters, in which unintended inventory accumulation corresponding to the frustration of individual plans plays a central role, does not follow from the basic Keynesian model.

So one may ask – how can these apparently opposing ideas be reconciled – the contention that S equals I at a point when the multiplier saving dynamic hasn’t even started?

The investment injection results in an equivalent quantity of income and saving as described earlier. I think you question this off the top while I have claimed it must be the case. But please suspend disbelief for purposes of what I want to describe next, because given that assumed starting point, this should at least reinforce the idea that S = I at all times following that same assumption for the investment injection.

It must be the case, if you define income and expenditure to be identical. If you define them so that they are not identical, which seems both possible and reasonable, then savings and investment are also not identical.

So now assume that the first round of the multiplier math works and there is an initial consumption burst of quantity 66, representing the MPC effect on the income of 100 that was just newly created.

And correspondingly there is new saving of 33.

A pertinent question then is how this gets reflected in income accounting.

As a simplification, assume that the factors of the investment good production who received the new income of 100 are the ones who spend the 66.

So the economy has earned 100 in its factors of investment good production capacity and has now spent 66 in its MPC capacity.

Recall that at the investment injection stage considered on its own, before the multiplier starts to work, the economy saved 100.

Yes, that’s fine if income does accrue simultaneously with expenditure, but that depends on how one chooses to define and measure income, and I don’t feel obligated to adopt the standard accounting definition under all circumstances. (And is it really the case that only one way of defining income is countenanced by accountants?) At any rate, in my first iteration of the lagged model, I specified the lag so that income was earned by households at the end of the period, with consumption becoming a function of income in the preceding period. In that setup, the accounting identities were indeed satisfied. However, even with the lag specified that way, the main features of the adjustment process stressed in textbook treatments — frustrated plans and involuntary inventory accumulation or decumulation — were absent.

Then, in the first stage of the multiplier, the economy spent 66 on consumption. For simplicity of exposition, I’ve assumed those who initially saved were the ones who then spent (i.e., the factors of investment production). But no more income has been assumed to be earned by them. So they have dissaved 66 in the second stage. At the same time, those who produced the 66 of consumer goods have earned 66 as factors of production for those consumer goods. But the consumer goods they produced have been purchased. So there are no remaining consumer goods for them to purchase with their income of 66. And that means they have saved 66.

Therefore, the net saving result of the first round of the multiplier effect is 0.

Thus an MPS of 1/3 has resulted in 0 incremental saving for the macroeconomy. That is because the opening saving of 100 by the factors of production for the investment good has only been redistributed as cumulative saving as between 33 for the investment good production factors and 66 for the consumer good production factors. So the amount of cumulative S still equals the amount of original S, which equals I. And the important observation is that the entire quantity of saving was created originally and at the outset as equivalent to the income earned by the factors of the investment good production.

There is no logical problem here given the definitional imputation of income to households in the initial period before any payments to households have actually been made. However, the model has to be reinterpreted so that household consumption and savings decisions are a function of income earned in the previous period.

Each successive round of the multiplier features a similar combination of equal dissaving and saving.

The result is that cumulative saving remains constant at 100 from the outset and I = S remains intact always.

The important point is that an original investment injection associated with a Keynesian multiplier process accounts for all the macroeconomic saving to come out of that process, and the MPS fallout of the MPC sequence accounts for none of it.

That is fine, but to get that result, you have to amend the basic Keynesian model to make consumption a function of the previous period’s income, which is consistent with what I showed in my first iteration of the lagged model. But that iteration also showed that saving takes on a somewhat different meaning from the one usually attached to the term, saving or dissaving corresponding to a passive accumulation or decumulation of funds as income exceeds or falls short of what it was expected to be in a given period.
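To see how the two descriptions can coexist, here is a minimal sketch of the lagged model just described, in Python, using JKH’s numbers (a permanent investment injection of 100 and an MPC of 2/3); the variable names are mine, introduced only for illustration:

    # Lagged Keynesian model: consumption in period t is a function of
    # income earned in period t-1.  dI is a permanent investment injection.
    MPC = 2.0 / 3.0
    dI = 100.0
    dY_prev = 0.0                    # no incremental income before the injection
    for t in range(1, 11):
        dC = MPC * dY_prev           # consumption out of last period's income
        dY = dI + dC                 # this period's incremental income
        S_identity = dY - dC         # saving defined as Y - C: 100 every period
        S_household = (1.0 - MPC) * dY_prev    # saving out of income received
        print(t, round(dY, 1), round(S_identity, 1), round(S_household, 1))
        dY_prev = dY

The identity measure equals the injection of 100 in every period, which is JKH’s claim that S = I at all times; the household measure starts at zero and converges to 100 only as the multiplier process plays out, which is the textbook saving function. The disagreement is, in effect, over which of these two magnitudes deserves to be called saving.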

JKH followed up this comment with another one explaining how, within the basic Keynesian model, a change in investment (or in some other expenditure parameter) causes a sequence of adjustments from the old equilibrium to a new equilibrium.

Assume the economy is at an alleged equilibrium point – at the intersection of a planned expenditure line with the 45 degree line.

Suppose planned investment falls by 100. Again, assume MPC = 2/3.

The scenario is one in which investment will be 100 lower than its previous level (bearing in mind we are referring to the level of investment flows here).

Using comparable logic as in my previous comment, that means that both I and S drop by 100 at the outset. There is that much less investment injected and saving created as a result of the economy not operating at a counterfactual level of activity equal to its previous pace.

So expenditure drops by 100 – and that considered just on its own can be represented by a direct vertical drop from the previous equilibrium point down to the planning line.

But as I have said before, such a point is unrealizable in fact, because it lies off the 45 degree line. And that corresponds to the fact that I of 100 generates S of 100 (or in this case a decline in I from previous levels means a decline in S from previous levels). So what happens is that instead of landing on that 100 vertical drop down point, the economy combines (in measured effect) that move with a second move horizontally to the left, where it lands on the 45 degree line at a point where both E and Y have declined by 100. This simply reflects the fact that I = S at all times as described in my previous comment (which again I realize is a contentious supposition for purposes of the broader discussion).

Actually, it is clear that whether the economy can be off the 45-degree line is not a matter of causal or behavioral possibility, but simply a matter of how income and expenditure are defined. With income and expenditure suitably defined, income need not equal expenditure. As just shown, if one wants to define income and expenditure so that they are equal at all times, a temporal adjustment process can be derived if current consumption is made a function of income in the previous period (presumably with an implicit behavioral assumption that households expect to earn the same income in the current period that they earned in the previous period). The adjustment can easily be portrayed in the familiar Keynesian cross, provided that the lag is incorporated into the diagram by measuring E(t) on the vertical axis and Y(t-1) on the horizontal axis. The 45-degree line then represents the equilibrium condition that E(t) = Y(t-1), which implies (given the implicit behavioral assumption) that actual income equals expected income, or that income is unchanged from period to period. Obviously, in this setup, the economy can be off the 45-degree line. Following a change in investment, the adjustment process moves from the old expenditure line to the new one, continuing in stepwise fashion between the new expenditure line and the 45-degree line in successive periods, converging on the point of intersection between the new expenditure line and the 45-degree line.
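For concreteness, here is a minimal sketch of that stepwise adjustment in Python; the starting equilibrium of 900 and the initial investment level of 300 are illustrative assumptions of mine, chosen only to make the arithmetic transparent:

    # Adjustment in the lagged Keynesian cross after planned investment
    # falls by 100, with MPC = 2/3.  Old equilibrium: 900 = (2/3)*900 + 300.
    MPC = 2.0 / 3.0
    I = 200.0                  # investment after the fall of 100
    Y_prev = 900.0             # income in the period before the change
    for t in range(1, 16):
        E = MPC * Y_prev + I   # expenditure planned on last period's income
        Y = E                  # firms produce exactly what is demanded
        print(t, round(Y, 1))  # 800.0, 733.3, ... converging on 600.0
        Y_prev = Y

Income falls by 100 in the first period and by 300 in all, the multiplier being 1/(1 – MPC) = 3; at every step the economy is on the new expenditure line, and the condition E(t) = Y(t-1) is restored only at the new point of intersection.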

This happens in steps representable by discrete accounting. Common sense suggests that a “plan” can consist of a series of such discrete steps – in which case there is a ratcheting of reduced investment injections down the 45 degree line – or a plan can consist of a single discrete step depending on the scale or on the preference for stepwise analysis. The single discrete step is the clearest way to analyse the accounting record for the economics.

There is no such “plan” in the model, because no one foresees where the adjustment is leading; households assume in each period that their income will be what it was in the previous period, and firms produce exactly what consumers demand without any change in inventories. However, all expenditure planned at the beginning of each period is executed (every household remaining on its planned expenditure curve), but households wind up earning less than expected in each period. I consider this statement, suitably amended, to be consistent with Lipsey’s critique of standard textbook expositions of the Keynesian-cross adjustment process, in which the adjustment to a new equilibrium is driven by the frustration of plans.

Finally, some brief responses to JKH’s comments on handling lags.

I’m going to refer to standard accounting for Y as Y and the methodology used in the post as LGY (i.e., “Lipsey-Glasner income”).

Then:

E(t) = Y(t)

E(t) = LGY(t+1)

Standard accounting recognizes income in the time period in which it is earned.

LGY accounting recognizes income in the time period in which it is paid in cash.

Consider the point in table 1 where the MPC propensity factor drops from .9 to .8. . . .

In the first iteration, E is 900 (100 I + 800 C) but LGY is 1000.

Household saving is shown to be 200.

Here is how standard accounting handles that:

First, a real world example. Suppose a US corporation listed on a stock exchange reports its financial results at the end of each calendar quarter. And suppose it pays its employees once a month. But for each month’s work it pays them at the start of the next month.

Then there is no way that this corporation would report its December 31 financial results without showing a liability on its balance sheet for the employee compensation earned in December but not yet paid by December 31. . . .

In effect, the employees have loaned the corporation one month’s salary until that loan is repaid in the next accounting period.

The corporation will properly list a liability on its balance sheet for wages not yet paid. This may be a “loan in effect,” but employees don’t receive an IOU for the unpaid wages, because the wages are not yet due. I am no tax expert, but I am guessing that no liability to pay taxes on wages owed to, but not yet received by, employees is incurred until the wages are paid, notwithstanding whatever liability is recorded on the books of the corporation. A worker employed in 2014, but not paid until 2015, will owe taxes on his 2015, not his 2014, tax return. A “loan in effect” is not the same as an actual payment.

This is precisely what is happening at the macro level in the LGY lag example.

So the standard national income accounting would show E = Y = 900, with a business liability of 900 at the end of the period. Households would have a corresponding financial asset of 900.

The “financial asset” in question is a fiction. There is a claim, but the claim at the end of the period has not fallen due, so it represents a claim to an expected future payment. I expect to get a royalty check next month, for copies of my book sold last year. I don’t consider that I have received income until the check arrives from my publisher, regardless of how the publisher chooses to record its liability to me on its books. And I will not pay any tax on books sold in 2014 until 2016 when I file my 2015 tax return. And I certainly did not consider the expected royalties as income last year when the books were sold. In fact, I don’t know — and never will — when in 2014 the books were sold.

Back at the beginning of that same period, business repaid the prior period liability of 1000 to households. But they received cash revenue of 900 during the period. So as the post says, business cash would have declined by 100 during the period.

This component of 100, when received by households, is part of a loan repayment in effect. This does not constitute a component of standard income accounting Y or S for households. This sort of thing is captured in flow of funds accounting.

Just as LGY is the delayed payment of Y earned in the previous period, LGS overstates S by the difference between LGY and Y.

For example, when E is 900, LGY is 1000 and Y is 900. LGS is 200 while S is 100.

So under regular accounting, this systematic LG overstatement reflects the cash repayment of a loan – not the differential receipt of income and saving.

That is certainly a possible interpretation of the assumptions being made, but obviously there are alternative interpretations that are completely consistent with the workings of the basic Keynesian model.
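To make the comparison of the two conventions concrete, here is a minimal sketch in Python, using the numbers from the example above; LGY and LGS are JKH’s labels for the Lipsey-Glasner measures, and the variable names are mine:

    # Standard (accrual) accounting vs. cash-receipt (LGY) accounting for
    # the period in which E falls from 1000 to 900 (100 I + 800 C).
    E_prev = 1000.0      # last period's expenditure, paid to households now
    E = 900.0            # this period's expenditure
    C = 800.0            # this period's consumption
    Y = E                # standard accounting: income recognized when earned
    LGY = E_prev         # LGY accounting: income recognized when cash arrives
    S = Y - C            # 100 under standard accounting
    LGS = LGY - C        # 200 under LGY accounting
    print(Y, LGY, S, LGS)   # LGS exceeds S by exactly LGY - Y = 100

On the standard convention the extra 100 received by households is the repayment of the previous period’s “loan in effect” rather than income; on the cash-receipt convention it is income of the current period. Which convention is adopted determines which identities hold, and that is the point at issue.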

And another way of describing this is that households earn Y of 900 and get paid in the same period in the form of a non-cash financial asset of 900, which is in effect a loan to business for the amount of cash that business owes to households for the income the latter have already earned. That loan is repaid in the next period.

Again, I observe that “payments in effect” are being created to avoid working with and measuring actual payments as they take place. I have no problem with such “payments in effect,” but that does not mean that the magnitudes of interest can be measured in only one way.

There are several ironies in the comparison of LG accounting with standard accounting.

First, using standard accounting in no way impedes the analysis of cash flow lags. In fact, this is the reason for separate balance sheet and flow of funds accounting – so as not to conflate cash flow analysis with the earning of income when there are clear separations between the earning of income and the cash payments to the recipients of that income. The 3 part framework is precise in its treatment of such situations.

Not sure where the irony is. In any event, I don’t see how the 3 part framework adds anything to our understanding of the Keynesian model.

Second, in the scenario constructed for the post, there is no logical connection between a delayed income payment of 1000 and a decision to ramp down consumption propensity. Why would one choose to consume less because an income payment is systematically late? If that was the case, one would ramp down consumption every time a payment was delayed. But every such payment is delayed in this model. Changes in consumption propensity cannot logically be a systematic function of a systematic lag – or consumption propensity would systematically approach 0, which is obviously nonsensical.

This seems to be a misunderstanding of what I wrote. I never suggested that the lag between expenditure and income is connected (logically or otherwise) to the reduction in the marginal propensity to consume. A lag is necessary for there to be a sequential rather than an instantaneous adjustment process to a parameter change in the model, such as a reduced marginal propensity to consume. There is no other connection.

Third, my earlier example of a corporation that delayed an income payment from December until January is a stretch on reality. Corporations have no valid reason to play such cash management games that span accounting periods. They must account for legitimate liabilities that are outstanding when proceeding to the next accounting period.

I never suggested that corporations are playing a game. Wage, royalty, and dividend payments are made according to fixed schedules, which may not coincide with the relevant time period for measuring economic activity. Fiscal years and calendar years do not always coincide.

Shorter term intra period lags may still exist – as within a one month income payment cycle. But again, so what? There cannot be systemic behavior to reduce consumption propensity due to systematic lags. Moreover, a lot of people get paid every 2 weeks. But that is not even the relevant point. Standard accounting handles any of these issues even at the level of internal management accounting accruals between external financial reporting dates.

I never suggested that the propensity to consume is related to the lag structure in the model. The propensity to consume determines the equilibrium; the lag structure determines the sequence of adjustments, following a change in a spending parameter, from one equilibrium to another.

PS I apologize for this excessively long — even by my long-winded and verbose standards — post.


