
Making Sense of Rational Expectations

Almost two months ago I wrote a provocatively titled post about rational expectations, in which I argued against the idea that it is useful to make the rational-expectations assumption in developing a theory of business cycles. The title of the post was probably what led to the start of a thread about my post on the econjobrumors blog, the tenor of which can be divined from the contribution of one commenter: “Who on earth is Glasner?” But, aside from the attention I received on econjobrumors, I also elicited a response from Scott Sumner:

David Glasner has a post criticizing the rational expectations modeling assumption in economics:

What this means is that expectations can be rational only when everyone has identical expectations. If people have divergent expectations, then the expectations of at least some people will necessarily be disappointed — the expectations of both people with differing expectations cannot be simultaneously realized — and those individuals whose expectations have been disappointed will have to revise their plans. But that means that the expectations of those people who were correct were also not rational, because the prices that they expected were not equilibrium prices. So unless all agents have the same expectations about the future, the expectations of no one are rational. Rational expectations are a fixed point, and that fixed point cannot be attained unless everyone shares those expectations.

Beyond that little problem, Mason raises the further problem that, in a rational-expectations equilibrium, it makes no sense to speak of a shock, because the only possible meaning of “shock” in the context of a full intertemporal (aka rational-expectations) equilibrium is a failure of expectations to be realized. But if expectations are not realized, expectations were not rational.

I see two mistakes here. Not everyone must have identical expectations in a world of rational expectations. Now it’s true that there are ratex models where people are simply assumed to have identical expectations, such as representative agent models, but that modeling assumption has nothing to do with rational expectations, per se.

In fact, the rational expectations hypothesis suggests that people form optimal forecasts based on all publicly available information. One of the most famous rational expectations models was Robert Lucas’s model of monetary misperceptions, where people observed local conditions before national data was available. In that model, each agent sees different local prices, and thus forms different expectations about aggregate demand at the national level.

It is true that not all expectations must be identical in a world of rational expectations. The question is whether those expectations are compatible with the equilibrium of the model in which those expectations are embedded. If any of those expectations are incompatible with the equilibrium of the model, then, insofar as agents’ decisions are based on their expectations, the model will not arrive at an equilibrium solution. Lucas’s monetary-misperceptions model was a clever effort to tweak the rational-expectations assumption just enough to allow for a temporary disequilibrium. But the attempt was a failure, because Lucas could only generate a one-period deviation from equilibrium, which was too brief for the model to serve as a plausible account of a business cycle. That gave Kydland and Prescott the idea of discarding Lucas’s monetary-misperceptions mechanism and writing their paper on real business cycles without adulterating the rational-expectations assumption.

Here’s what Muth said about the rational expectations assumption in the paper in which he introduced “rational expectations” as a modeling strategy.

In order to explain these phenomena, I should like to suggest that expectations, since they are informed predictions of future events, are essentially the same as the predictions of the relevant economic theory. At the risk of confusing this purely descriptive hypothesis with a pronouncement as to what firms ought to do, we call such expectations “rational.”

The hypothesis can be rephrased a little more precisely as follows: that expectations of firms (or, more generally, the subjective probability distribution of outcomes) tend to be distributed, for the same information set, about the prediction of the theory (or the “objective” probability distributions of outcomes).

The hypothesis asserts three things: (1) Information is scarce, and the economic system generally does not waste it. (2) The way expectations are formed depends specifically on the structure of the relevant system describing the economy. (3) A “public prediction,” in the sense of Grunberg and Modigliani, will have no substantial effect on the operation of the economic system (unless it is based on inside information).

It does not assert that the scratch work of entrepreneurs resembles the system of equations in any way; nor does it state that predictions of entrepreneurs are perfect or that their expectations are all the same. For purposes of analysis, we shall use a specialized form of the hypothesis. In particular, we assume: 1. The random disturbances are normally distributed. 2. Certainty equivalents exist for the variables to be predicted. 3. The equations of the system, including the expectations formulas, are linear. These assumptions are not quite so strong as may appear at first because any one of them virtually implies the other two.

It seems to me that Muth was confused about what the rational-expectations assumption entails. He asserts that the expectations of entrepreneurs – and presumably the same applies to other economic agents insofar as their decisions are influenced by their expectations of the future – should be assumed to be exactly what the relevant economic model predicts the expected outcomes to be. If so, I don’t see how it can be maintained that expectations could diverge from each other. If what entrepreneurs produce next period depends on the price they expect next period, how can the total supply produced next period be independent of the distribution of expectations merely because the errors are normally distributed and the mean of the distribution corresponds to the equilibrium of the model? That could be true only if the output produced by each entrepreneur were a linear function of the expected price and either all entrepreneurs had identical marginal costs or the distribution of marginal costs was uncorrelated with the distribution of expectations. The linearity assumption is hardly compelling unless you assume that the system is in equilibrium and all changes are small. But making that assumption is just another form of question begging.
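The point in the preceding paragraph is essentially Jensen’s inequality: unless supply is linear in the expected price, aggregate output depends on the whole distribution of expectations, not just on its mean. A minimal simulation sketch, with an assumed convex supply function and assumed noise parameters (none of this is Muth’s actual model, just an illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Equilibrium expected price, with individual expectations normally
# distributed around it (mean correct, errors normal), as in Muth.
p_eq = 10.0
expectations = rng.normal(loc=p_eq, scale=2.0, size=100_000)

# A hypothetical convex supply schedule: output rises more than
# proportionally with the expected price (assumption for illustration).
def supply(p_expected):
    return p_expected ** 2

# Mean output when expectations are dispersed around the equilibrium...
output_dispersed = supply(expectations).mean()
# ...versus output when every producer expects the equilibrium price.
output_identical = supply(p_eq)

# With nonlinear supply the two differ (Jensen's inequality):
# E[f(p)] != f(E[p]) unless f is linear. Here E[p^2] = 10^2 + 2^2 = 104,
# while f(E[p]) = 100.
print(output_dispersed, output_identical)
```

Only when `supply` is linear do the dispersed and identical cases coincide, which is exactly why the linearity assumption carries so much weight.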

It’s also wrong to say:

But if expectations are not realized, expectations were not rational.

Scott is right. What I said was wrong. What I ought to have said is: “But if expectations (being divergent) could not have been realized, those expectations were not rational.”

Suppose I am watching the game of roulette. I form the expectation that the ball will not land on one of the two green squares. Now suppose it does. Was my expectation rational? I’d say yes—there was only a 2/38 chance of the ball landing on a green square. It’s true that I lacked perfect foresight, but my expectation was rational, given what I knew at the time.
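The arithmetic behind Scott’s judgment is simple enough to verify (assuming a standard American double-zero wheel, as the 2/38 figure implies):

```python
# American roulette: 38 pockets, 2 of them green (0 and 00).
pockets = 38
green = 2

p_green = green / pockets      # probability the ball lands on green
p_not_green = 1 - p_green      # probability of the expected outcome

print(round(p_green, 4))       # 0.0526
print(round(p_not_green, 4))   # 0.9474
```

So the expectation “not green” is right about 94.7% of the time, even though on any given spin it may be disappointed.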

I don’t think that Scott’s response is compelling, because you can’t judge the rationality of an expectation in isolation; it has to be judged in a broader context. If you are forming your expectation about where the ball will fall in a game of roulette, the rationality of that expectation can be evaluated only in the context of how much you should be willing to bet that the ball will fall on one of the two green squares, and that requires knowledge of what the payoff would be if the ball did fall on one of those two squares. And that would mean that someone else is involved in the game and would be taking an opposite position. The rationality of expectations could be judged only in the context of what everyone participating in the game was expecting and what the payoffs and penalties were for each participant.

In 2006, it might have been rational to forecast that housing prices would not crash. If you lived in many countries, your forecast would have been correct. If you happened to live in Ireland or the US, your forecast would have been incorrect. But it might well have been a rational forecast in all countries.

The rationality of a forecast can’t be assessed in isolation. A forecast is rational if it is consistent with other forecasts, so that it, along with the other forecasts, could potentially be realized. As a commenter on Scott’s blog observed, a rational expectation is an expectation that, at the time the forecast is made, is consistent with the relevant model. The forecast of housing prices may turn out to be incorrect, but the forecast might still have been rational when it was made if the forecast of prices was consistent with what the relevant model would have predicted. The failure of the forecast to be realized could mean either that the forecast was not consistent with the model, or that between the time of the forecast and the time of its realization, new information, not available at the time of the forecast, came to light and changed the prediction of the relevant model.

The need for context in assessing the rationality of expectations was wonderfully described by Thomas Schelling in his classic analysis of cooperative games.

One may or may not agree with any particular hypothesis as to how a bargainer’s expectations are formed either in the bargaining process or before it and either by the bargaining itself or by other forces. But it does seem clear that the outcome of a bargaining process is to be described most immediately, most straightforwardly, and most empirically, in terms of some phenomenon of stable and convergent expectations. Whether one agrees explicitly to a bargain, or agrees tacitly, or accepts by default, he must if he has his wits about him, expect that he could do no better and recognize that the other party must reciprocate the feeling. Thus, the fact of an outcome, which is simply a coordinated choice, should be analytically characterized by the notion of convergent expectations.

The intuitive formulation, or even a careful formulation in psychological terms, of what it is that a rational player expects in relation to another rational player in the “pure” bargaining game, poses a problem in sheer scientific description. Both players, being rational, must recognize that the only kind of “rational” expectation they can have is a fully shared expectation of an outcome. It is not quite accurate – as a description of a psychological phenomenon – to say that one expects the second to concede something; the second’s readiness to concede or to accept is only an expression of what he expects the first to accept or to concede, which in turn is what he expects the first to expect the second to expect the first to expect, and so on. To avoid an “ad infinitum” in the description process, we have to say that both sense a shared expectation of an outcome; one’s expectation is a belief that both identify the outcome as being indicated by the situation, hence as virtually inevitable. Both players, in effect, accept a common authority – the power of the game to dictate its own solution through their intellectual capacity to perceive it – and what they “expect” is that they both perceive the same solution.

Viewed in this way, the intellectual process of arriving at “rational expectations” in the full-communication “pure” bargaining game is virtually identical with the intellectual process of arriving at a coordinated choice in the tacit game. The actual solutions might be different because the game contexts might be different, with different suggestive details; but the intellectual nature of the two solutions seems virtually identical since both depend on an agreement that is reached by tacit consent. This is true because the explicit agreement that is reached in the full communication game corresponds to the a priori expectations that were reached (or in theory could have been reached) jointly but independently by the two players before the bargaining started. And it is a tacit “agreement” in the sense that both can hold confident rational expectation only if both are aware that both accept the indicated solution in advance as the outcome that they both know they both expect.

So I agree that rational expectations can simply mean that agents are forming expectations about the future incorporating as best as they can all the knowledge available to them. This is a weak common sense interpretation of rational expectations that I think is what Scott Sumner has in mind when he uses the term “rational expectations.” But in the context of formal modelling, rational expectations has a more restrictive meaning, which is that given all the information available, the expectations of all agents in the model must correspond to what the model itself predicts given that information. Even though Muth himself and others have tried to avoid the inference that all agents must have expectations that match the solution of the model, given the information underlying the model, the assumptions under which agents could hold divergent expectations are, in their own way, just as restrictive as the assumption that agents hold convergent expectations.

In a way, the disconnect between a common-sense understanding of what “rational expectations” means and what “rational expectations” means in the context of formal macroeconomic models is analogous to the disconnect between what “competition” means in normal discourse and what “competition” (and especially “perfect competition”) means in the context of formal microeconomic models. Much of the rivalrous behavior between competitors that we think of as being essential aspects of competition and the competitive process is simply ruled out by the formal assumption of perfect competition.

OMG! The Age of Trump Is upon Us

UPDATE (11/11, 10:47 am EST): Clinton’s lead in the popular vote is now about 400,000 and according to David Leonhardt of the New York Times, the lead is likely to increase to as much as 2 million votes by the time all the votes are counted.

Here’s a little thought experiment for you to ponder. Suppose that the outcome of yesterday’s election had been reversed and Hillary Clinton emerged with 270+ electoral votes but trailed Donald Trump by 200,000 popular votes. What would the world be like today? What would we be hearing from Trump and his entourage about the outcome of the election? I daresay we would be hearing about “second amendment remedies” from many of the Trumpsters. I wonder how that would have played out.

(As I write this, I am hearing news reports about rowdy demonstrations in a number of locations against Trump’s election. Insofar as these demonstrations become violent, they are certainly deplorable, but nothing we have heard from Clinton and her campaign or from leaders of the Democratic Party would provide any encouragement for violent protests against the outcome of a free election.)

But enough of fantasies about an alternative universe; in the one that we happen to inhabit, the one in which Donald Trump is going to be sworn in as President of the United States in about ten weeks, we are faced with this stark reality. The American voters, in their wisdom, have elected a mountebank (OED: “A false pretender to skill or knowledge, a charlatan: a person incurring contempt or ridicule through efforts to acquire something, esp. social distinction or glamour.”), a narcissistic sociopath, as their chief executive and head of state. The success of Trump’s demagogic campaign – a campaign repackaging the repugnant themes of such successful 20th century American demagogues as Huey Long, Father Coughlin and George Wallace (not to mention not so successful ones like the deplorable Pat Buchanan) – is now being celebrated by Trump apologists and Banana Republican sycophants as evidence of his political genius in sensing and tapping into the anger and frustrations of the forgotten white working class, as if the anger and frustration of the white working class has not been the trump card that every two-bit demagogue and would-be despot of the last 150 years has tried to play. Some genius.

I recently overheard a conversation between a close friend of mine who is a Trump supporter and a non-Trump supporter. My friend is white, but is not one of the poorly educated of whom Trump is so fond, holding a Ph.D. in physics, and being well read and knowledgeable about many subjects. Although he doesn’t like Trump, he is very conservative and can’t stand Clinton, so he decided to vote for Trump without any apparent internal struggle or second thoughts. One of his reasons for favoring Trump is his opposition to Obamacare, which he blames for the very large increase in premiums he has to pay for the medical insurance he gets through his employer. When it was pointed out to him that it is unlikely that the increase in his insurance premiums was caused by Obamacare, his response was that Obamacare has added to the regulations that insurance companies must comply with, so that the cost of those regulations is ultimately borne by those buying insurance, which means that his insurance premiums must have gone up because of Obamacare.

Since I wasn’t part of the conversation, I didn’t interrupt to point out that the standard arguments about the costs of regulation being ultimately borne by consumers of the regulated product don’t necessarily apply to markets like health care in which customers don’t have good information about whether suppliers are providing them with the services that they need or are instead providing unnecessary services to enrich themselves. In such markets, third-parties (i.e., insurance companies) supposedly better informed than patients about whether the services provided to patients by their doctors are really serving the patients’ interests, and are really worth the cost of providing those services, can help protect the interests of patients. Of course, the interests of insurance companies aren’t necessarily aligned very well with the interests of their policyholders either, because insurance companies may prefer not to pay for treatments that it would be in the interests of patients to receive.

So in health markets there are doctors treating ill-informed patients whose bills are being paid by insurance companies that try to monitor doctors to make sure that doctors do not provide unnecessary services and treatments to patients. But since the interests of insurance companies may be not to pay doctors to provide services that would be beneficial to patients, who is going to protect policyholders from the insurance companies? Well, um, maybe the government should be involved. Yes, but how do we know if the government is doing a good job or bad job of looking out for the interests of patients? I don’t think that we know the answer to that question. But Obamacare, aside from making medical insurance more widely available to people who need it, is an attempt to try to make insurance companies more responsive to the interests of their policyholders. Perhaps not the smartest attempt, by any means, but given the system of health care delivery that has evolved in the United States over the past three quarters of a century, it is not obviously a step in the wrong direction.

But even if Obamacare is not working well, and I have no well thought out opinion about whether it is or isn’t, the kind of simple-minded critique that my friend was making seemed to me to be genuinely cringe-worthy. Here is a Ph.D. in physics making an argument that sounded as if it were coming straight out of the mouth of Sean Hannity. OMG! The dumbing down of America is being expertly engineered by Fox News, and, boy, are they succeeding. Geniuses, that’s what they are. Geniuses!

When I took my first economics course almost a half century ago and read the greatest economics textbook ever written, University Economics by Armen Alchian and William Allen, I was blown away by their ability to show how much sloppy and muddled thinking there was about how markets work and how controls that prevent prices from allocating resources don’t eliminate destructive or wasteful competition, but rather shift competition from relatively cheap modes like offering to pay a higher price or to accept a lower price to relatively costly forms like waiting in line or lobbying a regulator to gain access to a politically determined allocation system.

I have been a fan of free markets ever since. I oppose government intervention in the economy as a default position. But the lazy thinking that once led people to assume that government regulation is the cure for all problems now leads people to assume that government regulation is the cause of all problems. What a difference half a century makes.

So You Don’t Think the Stock Market Cares Who Wins the Election — Think Again UPDATE

UPDATE (October 30, 9:22pm EDT): Commenter BJH correctly finds a basic flaw in my little attempt to infer some causation between Trump’s effect on the peso, the peso’s correlation with the S&P 500 and Trump’s effect on the stock market. The correlation cannot bear the weight I put on it. See my reply to BJH below.

The little swoon in the stock markets on Friday afternoon after FBI Director James Comey announced that the FBI was again investigating Hillary Clinton’s emails coincided with a sharp drop in the Mexican peso, whose value is widely assumed to be a market barometer of the likelihood of Trump’s victory. A lot of people have wondered why the stock market has not evidenced much concern about the prospect of a Trump presidency, notwithstanding his surprising success at, and in, the polls. After all, the market recovered from a rough start at the beginning of 2016 even as Trump was racking up victory after victory over his competitors for the Republican presidential nomination. And even after Trump’s capture of the Republican nomination was seen as inevitable, even though many people did start to panic, the stock markets have been behaving as if they were under heavy sedation.

So I thought that I would do a little checking on how the market has been behaving since April, when it had become clear that, barring a miracle, Trump was going to be the Republican nominee for President. Here is a chart showing the movements in the S&P 500 and in the dollar value of the Mexican peso since April 1 (normalized at their April 1 values). The stability in the two indexes is evident. The difference between the high and low values of the S&P 500 has been less than 7 percent; the peso has fluctuated more than the S&P 500, presumably because of Mexico’s extreme vulnerability to Trumpian policies, but the difference between the high and low values of the peso has been only about 12 percent.

[Figure trump_1: S&P 500 and the dollar value of the Mexican peso since April 1, normalized to their April 1 values]

But what happens when you look at the daily changes in the S&P 500 and in the peso? Looking at the changes, rather than the levels, can help identify what is actually moving the markets. Taking the logarithms of the S&P 500 and of the peso (measured in cents) and calculating the daily changes in the logarithms gives the daily percentage change in the two series. The next chart plots the daily percentage changes in the S&P 500 and the peso since April 4. The chart looks pretty remarkable to me; the correlation between changes in the peso and change in the S&P 500 is striking.

[Figure trump_2: daily percentage changes in the S&P 500 and the peso since April 4]

A quick regression analysis in Excel produces the following result:

∆S&P = 0.0002 + 0.5 ∆peso,   R² = 0.557,

where ∆S&P is the daily percentage change in the S&P 500 and ∆peso is the daily percentage change in the dollar value of the peso. The t-value on the peso coefficient is 13.5, which, in a regression with only 147 observations, is an unusually high level of statistical significance.

This result says that almost 56% of the observed daily variation in the S&P 500 between April 4 and October 28 of 2016 is accounted for by the observed daily variation in the peso. To be precise, the result doesn’t mean that there is any causal relationship between changes in the value of the peso and changes in the S&P 500. Correlation does not establish causation. It could be the case that the regression is simply reflecting the existence of causal factors that are common to both the Mexican peso and to the S&P 500 and affect both of them at the same time. Now it seems pretty obvious who or what has been the main causal factor affecting the value of the peso, so I leave it as an exercise for readers to identify what factor has been affecting the S&P 500 these past few months, and in which direction.
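One internal-consistency check on the numbers reported above: in a simple regression with a single regressor, the t-statistic on the slope is tied to R² by t = √(R²(n − 2)/(1 − R²)). A minimal sketch using only the figures quoted in the post (the helper function shows the log-difference construction described above; the actual S&P 500 and peso series are not reproduced here):

```python
import numpy as np

def daily_pct_changes(levels):
    """Daily percentage changes as first differences of the log series."""
    return np.diff(np.log(np.asarray(levels, dtype=float)))

# Consistency check on the reported regression: with one regressor,
# t = sqrt(r^2 * (n - 2) / (1 - r^2)).
r_squared = 0.557  # reported R-squared
n = 147            # reported number of observations

t_stat = np.sqrt(r_squared * (n - 2) / (1 - r_squared))
print(round(t_stat, 1))  # 13.5
```

The implied t-statistic of about 13.5 matches the value reported in the post, so the quoted R², sample size, and t-value hang together.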

Rational Expectations, or, The Road to Incoherence

J. W. Mason left a very nice comment on my recent post about Paul Romer’s now-famous essay on macroeconomics, a comment now embedded in his interesting and insightful blog post on the Romer essay. As I wrote in my reply to Mason’s comment, I really liked the way he framed his point about rational expectations and intertemporal equilibrium. Sometimes when you see a familiar idea expressed in a particular way, the novelty of the expression, even though it’s not substantively different from other ways of expressing the idea, triggers a new insight. And that’s what I think happened in my own mind as I read Mason’s comment. Here’s what he wrote:

David Glasner’s interesting comment on Romer makes in passing a point that’s bugged me for years — that you can’t talk about transitions from one intertemporal equilibrium to another, there’s only the one. Or equivalently, you can’t have a model with rational expectations and then talk about what happens if there’s a “shock.” To say there is a shock in one period, is just to say that expectations in the previous period were wrong. Glasner:

the Lucas Critique applies even to micro-founded models, those models being strictly valid only in equilibrium settings and being unable to predict the adjustment of economies in the transition between equilibrium states. All models are subject to the Lucas Critique.

So the further point that I would make, after reading Mason’s comment, is just this. For an intertemporal equilibrium to exist, there must be a complete set of markets for all future periods and contingent states of the world, or, alternatively, there must be correct expectations shared by all agents about all future prices and the probability that each contingent future state of the world will be realized. By the way, if you think about it for a moment, the notion that probabilities can be assigned to every contingent future state of the world is mind-bogglingly unrealistic, because the number of contingent states must rapidly become uncountable, because every single contingency itself gives rise to further potential contingencies, and so on and on and on. But forget about that little complication. What intertemporal equilibrium requires is that all expectations of all individuals be in agreement – or at least not be inconsistent, some agents possibly having an incomplete set of expectations about future prices and future states of the world. If individuals differ in their expectations, so that their planned future purchases and sales are based on what they expect future prices to be when the time comes for those transactions to be carried out, then individuals will not be able to execute their plans as intended when at least one of them finds that actual prices are different from what they had been expected to be.
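The explosion of contingent states can be made concrete: with even a small number b of distinguishable contingencies per period, the number of possible histories after T periods is b^T. The branching factor and horizon below are arbitrary illustrative assumptions, not anything from the post:

```python
# Even a very coarse description of uncertainty explodes quickly:
# b contingencies per period over T periods gives b**T possible histories.
branching = 3   # hypothetical: "up", "flat", "down" each period
periods = 20    # a 20-period horizon

n_histories = branching ** periods
print(n_histories)  # 3486784401, roughly 3.5 billion paths
```

And this still understates the problem, since each contingency would itself have to be subdivided into finer contingencies for a genuinely complete set of state-contingent markets.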

What this means is that expectations can be rational only when everyone has identical expectations. If people have divergent expectations, then the expectations of at least some people will necessarily be disappointed — the expectations of both people with differing expectations cannot be simultaneously realized — and those individuals whose expectations have been disappointed will have to revise their plans. But that means that the expectations of those people who were correct were also not rational, because the prices that they expected were not equilibrium prices. So unless all agents have the same expectations about the future, the expectations of no one are rational. Rational expectations are a fixed point, and that fixed point cannot be attained unless everyone shares those expectations.

Beyond that little problem, Mason raises the further problem that, in a rational-expectations equilibrium, it makes no sense to speak of a shock, because the only possible meaning of “shock” in the context of a full intertemporal (aka rational-expectations) equilibrium is a failure of expectations to be realized. But if expectations are not realized, expectations were not rational. So the whole New Classical modeling strategy of identifying shocks to a system in rational-expectations equilibrium, and “predicting” the responses to these shocks as if they had been anticipated, is self-contradictory and incoherent.

Price Stickiness Is a Symptom not a Cause

In my recent post about Nick Rowe and the law of reflux, I mentioned in passing that I might write a post soon about price stickiness. The reason that I thought it would be worthwhile writing again about price stickiness (which I have written about before here and here) is that Nick, following a broad consensus among economists, identifies price stickiness as a critical cause of fluctuations in employment and income. Here’s how Nick phrased it:

An excess demand for land is observed in the land market. An excess demand for bonds is observed in the bond market. An excess demand for equities is observed in the equity market. An excess demand for money is observed in any market. If some prices adjust quickly enough to clear their market, but other prices are sticky so their markets don’t always clear, we may observe an excess demand for money as an excess supply of goods in those sticky-price markets, but the prices in flexible-price markets will still be affected by the excess demand for money.

Then a bit later, Nick continues:

If individuals want to save in the form of money, they won’t collectively be able to if the stock of money does not increase. There will be an excess demand for money in all the money markets, except those where the price of the non-money thing in that market is flexible and adjusts to clear that market. In the sticky-price markets there will be nothing an individual can do if he wants to buy more money but nobody else wants to sell more. But in those same sticky-price markets any individual can always sell less money, regardless of what any other individual wants to do. Nobody can stop you selling less money, if that’s what you want to do.

Unable to increase the flow of money into their portfolios, each individual reduces the flow of money out of his portfolio. Demand falls in sticky-price markets, quantity traded is determined by the short side of the market (Q=min{Qd,Qs}), so trade falls, and some trades that would be mutually advantageous in a barter or Walrasian economy even at those sticky prices don’t get made, and there’s a recession. Since money is used for trade, the demand for money depends on the volume of trade. When trade falls the flow of money falls too, and the stock demand for money falls, until the representative individual chooses a flow of money out of his portfolio equal to the flow in. He wants to increase the flow in, but cannot, since other individuals don’t want to increase their flows out.

The role of price stickiness or price rigidity in accounting for involuntary unemployment is an old and complicated story. If you go back and read what economists before Keynes had to say about the Great Depression, you will find that there was considerable agreement that, in principle, if workers were willing to accept a large enough cut in their wages, they could all get reemployed. That was a proposition accepted by Hawtrey and by Keynes. However, they did not believe that wage cutting was a good way of restoring full employment, because the process of wage cutting would be brutal economically and divisive – even self-destructive – politically. So they favored a policy of reflation that would facilitate and hasten the process of recovery. However, there were also those economists, e.g., Ludwig von Mises and the young Lionel Robbins in his book The Great Depression (which he had the good sense to disavow later in life), who attributed high unemployment to an unwillingness of workers and labor unions to accept wage cuts and to various other legal barriers preventing the price mechanism from operating to restore equilibrium in the normal way that prices adjust to equate the amount demanded with the amount supplied in each and every single market.

But in the General Theory, Keynes argued that if you believed in the standard story told by microeconomics about how prices constantly adjust to equate demand and supply and maintain equilibrium, then maybe you should be consistent and follow the Mises/Robbins story and just wait for the price mechanism to perform its magic, rather than support counter-cyclical monetary and fiscal policies. So Keynes then argued that there is actually something wrong with the standard microeconomic story; price adjustments can’t ensure that overall economic equilibrium is restored, because the level of employment depends on aggregate demand, and if aggregate demand is insufficient, wage cutting won’t increase – and, more likely, would reduce — aggregate demand, so that no amount of wage-cutting would succeed in reducing unemployment.

To those upholding the idea that the price system is a stable self-regulating system or process for coordinating a decentralized market economy, in other words to those upholding microeconomic orthodoxy as developed in any of the various strands of the neoclassical paradigm, Keynes’s argument was deeply disturbing and subversive.

In one of the first of his many important publications, “Liquidity Preference and the Theory of Money and Interest,” Franco Modigliani argued that, despite Keynes’s attempt to prove that unemployment could persist even if prices and wages were perfectly flexible, the assumption of wage rigidity was in fact essential to arrive at Keynes’s result that there could be an equilibrium with involuntary unemployment. Modigliani did so by positing a model in which the supply of labor is a function of real wages. It was not hard for Modigliani to show that in such a model an equilibrium with unemployment required a rigid real wage.

Modigliani was not in favor of relying on price flexibility instead of counter-cyclical policy to solve the problem of involuntary unemployment; he just argued that the rationale for such policies had to be that prices and wages were not adjusting immediately to clear markets. But the inference that Modigliani drew from that analysis — that price flexibility would lead to an equilibrium with full employment — was not valid, there being no guarantee that price adjustments would necessarily lead to equilibrium, unless all prices and wages instantaneously adjusted to their new equilibrium in response to any deviation from a pre-existing equilibrium.

All the theory of general equilibrium tells us is that if all trading takes place at the equilibrium set of prices, the economy will be in equilibrium as long as the underlying “fundamentals” of the economy do not change. But in a decentralized economy, no one knows what the equilibrium prices are, and the equilibrium price in each market depends in principle on what the equilibrium prices are in every other market. So unless the price in every market is an equilibrium price, none of the markets is necessarily in equilibrium.

Now it may well be that if all prices are close to equilibrium, the small changes will keep moving the economy closer and closer to equilibrium, so that the adjustment process will converge. But that is just conjecture; there is no proof showing the conditions under which a simple rule that says raise the price in any market with an excess demand and decrease the price in any market with an excess supply will in fact lead to the convergence of the whole system to equilibrium. Even in a Walrasian tatonnement system, in which no trading at disequilibrium prices is allowed, there is no proof that the adjustment process will eventually lead to the discovery of the equilibrium price vector. If trading at disequilibrium prices is allowed, tatonnement is hopeless.
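The "simple rule" can be made concrete in a toy sketch. The parameters below are my own choosing: a two-good exchange economy with two Cobb-Douglas agents, where the rule happens to converge; nothing about the sketch generalizes, and Scarf's well-known examples show the very same rule cycling forever:

```python
# Naive price adjustment ("tatonnement"): raise the price of a good in
# excess demand, lower it under excess supply. Good 2 is the numeraire
# (its price fixed at 1); only the price of good 1 adjusts.
# Two Cobb-Douglas agents with illustrative parameters of my own.

a = [0.3, 0.7]                   # each agent's expenditure share on good 1
w = [(1.0, 2.0), (2.0, 1.0)]     # endowments of (good 1, good 2)

def excess_demand_good1(p1):
    # Cobb-Douglas demand for good 1 is share * wealth / price
    demand = sum(ai * (p1 * w1 + w2) / p1 for ai, (w1, w2) in zip(a, w))
    supply = sum(w1 for w1, _ in w)
    return demand - supply

p1, step = 0.5, 0.2
for _ in range(200):
    p1 += step * excess_demand_good1(p1)   # the "simple rule"

print(round(p1, 3))   # 1.0 -- converges to the equilibrium price here
```

In this particular economy the excess-demand function is well behaved (it is 1.3/p1 − 1.3, so the iteration is a contraction near the equilibrium at p1 = 1), which is exactly the kind of special structure a general convergence proof would have to guarantee and cannot.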

So the real problem is not that prices are sticky but that trading takes place at disequilibrium prices and there is no mechanism by which to discover what the equilibrium prices are. Modern macroeconomics solves this problem, in its characteristic fashion, by assuming it away by insisting that expectations are “rational.”

Economists have allowed themselves to make this absurd assumption because they are in the habit of thinking that the simple rule of raising price when there is an excess demand and reducing the price when there is an excess supply inevitably causes convergence to equilibrium. This habitual way of thinking has been inculcated in economists by the intense, and largely beneficial, training they have been subjected to in Marshallian partial-equilibrium analysis, which is built on the assumption that every market can be analyzed in isolation from every other market. But that analytic approach can only be justified under a very restrictive set of assumptions. In particular it is assumed that any single market under consideration is small relative to the whole economy, so that its repercussions on other markets can be ignored, and that every other market is in equilibrium, so that there are no changes from other markets that are impinging on the equilibrium in the market under consideration.

Neither of these assumptions is strictly true in theory, so all partial equilibrium analysis involves a certain amount of hand-waving. Nor, even if we wanted to be careful and precise, could we actually dispense with the hand-waving; the hand-waving is built into the analysis, and can’t be avoided. I have often referred to these assumptions required for the partial-equilibrium analysis — the bread and butter microeconomic analysis of Econ 101 — to be valid as the macroeconomic foundations of microeconomics, by which I mean that the casual assumption that microeconomics somehow has a privileged and secure theoretical position compared to macroeconomics and that macroeconomic propositions are only valid insofar as they can be reduced to more basic microeconomic principles is entirely unjustified. That doesn’t mean that we shouldn’t care about reconciling macroeconomics with microeconomics; it just means that the validity of a proposition in macroeconomics is not necessarily contingent on its being derived from microeconomics. Reducing macroeconomics to microeconomics should be an analytical challenge, not a methodological imperative.

So the assumption, derived from Modigliani’s 1944 paper, that “price stickiness” is what prevents an economic system from moving automatically to a new equilibrium after being subjected to some shock or disturbance, reflects either a misunderstanding or a semantic confusion. It is not price stickiness that prevents the system from moving toward equilibrium; it is the fact that individuals are engaging in transactions at disequilibrium prices. We simply do not know how to compare different sets of non-equilibrium prices to determine which set of non-equilibrium prices will move the economy further from or closer to equilibrium. Our experience and our intuition suggest that in some neighborhood of equilibrium, an economy can absorb moderate shocks without going into a cumulative contraction. But all we really know from theory is that any trading at any set of non-equilibrium prices can trigger an economic contraction, and once it starts to occur, a contraction may become cumulative.

It is also a mistake to assume that in a world of incomplete markets, the missing markets being markets for the delivery of goods and the provision of services in the future, any set of price adjustments, however large, could by themselves ensure that equilibrium is restored. With an incomplete set of markets, economic agents base their decisions not just on actual prices in the existing markets; they base their decisions on prices for future goods and services which can only be guessed at. And it is only when individual expectations of those future prices are mutually consistent that equilibrium obtains. With inconsistent expectations of future prices, the adjustments in current prices in the existing markets for currently supplied goods and services, even adjustments that in some sense equate amounts demanded and supplied, lead to a (temporary) equilibrium that is not efficient, one that could be associated with high unemployment and unused capacity even though technically the existing markets are clearing.

So that’s why I regard the term “sticky prices” and other similar terms as very unhelpful and misleading; they are a kind of mental crutch that economists are too ready to rely on as a substitute for thinking about what are the actual causes of economic breakdowns, crises, recessions, and depressions. Most of all, they represent an uncritical transfer of partial-equilibrium microeconomic thinking to a problem that requires a system-wide macroeconomic approach. That approach should not ignore microeconomic reasoning, but it has to transcend both partial-equilibrium supply-demand analysis and the mathematics of intertemporal optimization.

Paul Romer on Modern Macroeconomics, Or, the “All Models Are False” Dodge

Paul Romer has been engaged for some time in a worthy campaign against the travesty of modern macroeconomics. A little over a year ago I commented favorably about Romer’s takedown of Robert Lucas, but I also defended George Stigler against what I thought was an unfair attempt by Romer to identify George Stigler as an inspiration and role model for Lucas’s transgressions. Now just a week ago, a paper based on Romer’s Commons Memorial Lecture to the Omicron Delta Epsilon Society has become just about the hottest item in the econ-blogosphere, even drawing the attention of Daniel Drezner in the Washington Post.

I have already written critically about modern macroeconomics in my five years of blogging, and here are some links to previous posts (link, link, link, link). It’s good to see that Romer is continuing to voice his criticisms, and that they are gaining a lot of attention. But the macroeconomic hierarchy is used to criticism, and has its standard responses to criticism, which are being dutifully deployed by defenders of the powers that be.

Romer’s most effective rhetorical strategy is to point out that the RBC core of modern DSGE models posits unobservable taste and technology shocks to account for fluctuations in the economic time series, but that these taste and technology shocks are themselves simply inferred from the fluctuations in the time-series data, so that the entire structure of modern macroeconometrics is little more than an elaborate and sophisticated exercise in question-begging.

In this post, I just want to highlight one of the favorite catch-phrases of modern macroeconomics which serves as a kind of default excuse and self-justification for the rampant empirical failures of modern macroeconomics (documented by Lipsey and Carlaw as I showed in this post). When confronted by evidence that the predictions of their models are wrong, the standard and almost comically self-confident response of the modern macroeconomists is: All models are false. By which the modern macroeconomists apparently mean something like: “And if they are all false anyway, you can’t hold us accountable, because any model can be proven wrong. What really matters is that our models, being microfounded, are not subject to the Lucas Critique, and since all other models than ours are not micro-founded, and, therefore, being subject to the Lucas Critique, they are simply unworthy of consideration.” This is what I have called methodological arrogance. That response is simply not true, because the Lucas Critique applies even to micro-founded models, those models being strictly valid only in equilibrium settings and being unable to predict the adjustment of economies in the transition between equilibrium states. All models are subject to the Lucas Critique.

Here is Romer’s take:

In response to the observation that the shocks are imaginary, a standard defense invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions (p.14).” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favorite.

Friedman’s methodological assertion would have been correct had Friedman substituted “simple” for “unrealistic.” Sometimes simplifications are unrealistic, but they don’t have to be. A simplification is a generalization of something complicated. By simplifying, we can transform a problem that had been too complex to handle into a problem more easily analyzed. But such simplifications aren’t necessarily unrealistic. To say that all models are false is simply a dodge to avoid having to account for failure. The excuse of course is that all those other models are subject to the Lucas Critique, so my model wins. But your model is subject to the Lucas Critique even though you claim it’s not, so even according to the rules you have arbitrarily laid down, you don’t win.

So I was just curious about where the little phrase “all models are false” came from. I was expecting that Karl Popper might have said it, in which case to use the phrase as a defense mechanism against empirical refutation would have been a particularly fraudulent tactic, because it would have been a perversion of Popper’s methodological stance, which was to force our theoretical constructs to face up to, not to insulate them from, empirical testing. But when I googled “all theories are false” what I found was not Popper, but the British statistician G. E. P. Box, who wrote in his paper “Science and Statistics,” based on his R. A. Fisher Memorial Lecture to the American Statistical Association: “All models are wrong.” Here’s the exact quote:

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad. Pure mathematics is concerned with propositions like “given that A is true, does B necessarily follow?” Since the statement is a conditional one, it has nothing whatsoever to do with the truth of A nor of the consequences B in relation to real life. The pure mathematician, acting in that capacity, need not, and perhaps should not, have any contact with practical matters at all.

In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless. The physicist knows that particles have mass and yet certain results, approximating what really happens, may be derived from the assumption that they do not. Equally, the statistician knows, for example, that in nature there never was a normal distribution, there never was a straight line, yet with normal and linear assumptions, known to be false, he can often derive results which match, to a useful approximation, those found in the real world. It follows that, although rigorous derivation of logical consequences is of great importance to statistics, such derivations are necessarily encapsulated in the knowledge that premise, and hence consequence, do not describe natural truth.

It follows that we cannot know that any statistical technique we develop is useful unless we use it. Major advances in science and in the science of statistics in particular, usually occur, therefore, as the result of the theory-practice iteration.

One of the most annoying conceits of modern macroeconomists is the constant self-congratulatory references to themselves as scientists because of their ostentatious use of axiomatic reasoning, formal proofs, and higher mathematical techniques. The tiresome self-congratulation might get toned down ever so slightly if they bothered to read and take to heart Box’s lecture.

Putin v. Obama: It’s the Economy, Stupid

A couple of days ago, Daniel Drezner wrote an op-ed for the Washington Post commenting on a statement made by one of the candidates in the recent televised forum on national security.

Last week at a televised presidential forum on national security, Donald Trump continued his pattern of praising Russian President Vladimir Putin. In particular, Trump said the following:

I mean, the man has very strong control over a country. And that’s a very different system and I don’t happen to like the system. But certainly in that system he’s been a leader, far more than our president has been a leader. We have a divided country.

As my Post colleague David Weigel notes, this is simply Trump’s latest slathering of praise onto the Russian strongman:

Trump goes further than many Republicans. In his telling, Putin — a “strong leader” — epitomizes how any serious president should position his country in the world. Knowingly or not, Trump builds on years of wistful, sometimes ironic praise of Putin as a swaggering, bare-chested autocrat.

After the forum, his running mate, Mike Pence, who used to be more critical of Putin, doubled down on Trump’s claim:

Pence walked that line back a little Sunday, suggesting that he was trying to indict the “weak and feckless leadership” of President Obama — but you get the point.

Well, if we are going to compare the leadership of Putin and Obama, why not compare them by measuring what people really care about? After all, don’t we all know that “it’s the economy, stupid.”

So let’s see how what Putin’s leadership has done for Russia compares with what Obama’s leadership has done for the US. We all know that the last eight years under Obama have not been the greatest, but if it’s Putin that Obama is being compared to, we ought to check out how Putin’s “very strong” leadership has worked out for the Russian economy as opposed to how Obama’s “weak and feckless leadership” has worked out for the US economy.

Here’s a little graph comparing US and Russian GDP between 2008 and 2015. To make the comparison on an even playing field, I have normalized GDP for both countries at 1.0 in 2008.

Looks to me like Obama wins that leadership contest pretty handily. And it’s not getting any better for Putin in 2016, as the Russian economy continues to contract while the US economy continues to expand, albeit slowly.

So chalk one up for the home team.

USA! USA!

Nick Rowe Ignores, But Does Not Refute, the Law of Reflux

In yet another splendid post, Nick Rowe once again explains what makes money – the medium of exchange – so special. Money – the medium of exchange – is the only commodity that is traded in every market. Unlike every other commodity, each of which has a market of its very own, in which it – and only it – is traded (for money!), money has no market of its own, because money — the medium of exchange — is traded in every other market.

This distinction is valid and very important, and Nick is right to emphasize it, even obsess about it. Here’s how Nick described it in his post:

1. If you want to increase the stock of land in your portfolio, there’s only one way to do it. You must increase the flow of land into your portfolio, by buying more land.

If you want to increase the stock of bonds in your portfolio, there’s only one way to do it. You must increase the flow of bonds into your portfolio, by buying more bonds.

If you want to increase the stock of equities in your portfolio, there’s only one way to do it. You must increase the flow of equities into your portfolio, by buying more equities.

But if you want to increase the stock of money in your portfolio, there are two ways to do it. You can increase the flow of money into your portfolio, by buying more money (selling more other things for money). Or you can decrease the flow of money out of your portfolio, by selling less money (buying less other things for money).

An individual who wants to increase his stock of money will still have a flow of money out of his portfolio. But he will plan to have a bigger flow in than flow out.

OK, let’s think about this for a second. Again, I totally agree with Nick that money is traded in every market. But is it really the case that there is no market in which only money is traded? If there is no market in which only money is traded, how do we explain the quantity of money in existence at any moment of time as the result of an economic process? Is it – I mean the quantity of money — just like an external fact of nature that is inexplicable in terms of economic theory?

Well, actually, the answer is: maybe it is, and maybe it’s not. Sometimes, we do just take the quantity of money to be an exogenous variable determined by some outside – noneconomic – force, aka the Monetary Authority, which, exercising its discretion, determines – judiciously or arbitrarily, take your pick – The Quantity of Money. But sometimes we acknowledge that the quantity of money is actually determined by economic forces, and is not a purely exogenous variable; we say that money is endogenous. And sometimes we do both; we distinguish between outside (exogenous) money and inside (endogenous) money.

But if we do acknowledge that there is – or that there might be – an economic process that determines what the quantity of money is, how can we not also acknowledge that there is – or might be — some market – a market dedicated to money, and nothing but money – in which the quantity of money is determined? Let’s now pick up where I left off in Nick’s post:

2. There is a market where land is exchanged for money; a market where bonds are exchanged for money; a market where equities are exchanged for money; and markets where all other goods and services are exchanged for money. “The money market” (singular) is an oxymoron. The money markets (plural) are all those markets. A monetary exchange economy is not an economy with one central Walrasian market where anything can be exchanged for anything else. Every market is a money market, in a monetary exchange economy.

An excess demand for land is observed in the land market. An excess demand for bonds is observed in the bond market. An excess demand for equities is observed in the equity market. An excess demand for money might be observed in any market.

Yes, an excess demand for money might be observed in any market, as people try to shed, or to accumulate, money by altering their spending on other commodities. But is there no other way in which people wishing to hold more or less money than they now hold could obtain, or dispose of, money as desired?

Well, to answer that question, it helps to ask another question: what is the economic process that brings (inside) money – i.e., the money created by a presumably explicable process of economic activity — into existence? And the answer is that ordinary people exchange their liabilities with banks (or similar entities) and in return they receive the special liabilities of the banks. The difference between the liabilities of ordinary individuals and the special liabilities of banks is that the liabilities of ordinary individuals are not acceptable as payment for stuff, but the special liabilities of banks are acceptable as payment for stuff. In other words, special bank liabilities are a medium of exchange; they are (inside) money. So if I am holding less (more) money than I would like to hold, I can adjust the amount I am holding by altering my spending patterns in the ways that Nick lists in his post, or I can enter into a transaction with a bank to increase (decrease) the amount of money that I am holding. This is a perfectly well-defined market in which the public exchanges “money-backing” instruments (their IOUs) with which the banks create the monetary instruments that the banks give the public in return.

Whenever the total amount of (inside) money held by the non-bank public does not equal the total amount of (inside) money in existence, there are market forces operating by which the non-bank public and the banks can enter into transactions whereby the amount of (inside) money is adjusted to eliminate the excess demand for (supply of) (inside) money. This adjustment process does not operate instantaneously, and sometimes it may even operate dysfunctionally, but, whether it operates well or not so well, the process does operate, and we ignore it at our peril.
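The balance-sheet swap described above can be sketched in a toy example. The figures and the single-bank setup are my own illustrative assumptions, not anything in Nick's post; the point is only the mechanics: a loan creates a matching deposit, and repayment extinguishes it, which is the reflux process in miniature:

```python
# Toy sketch of inside-money creation (illustrative assumptions only):
# the bank acquires the customer's IOU (a loan asset) and creates a
# matching deposit -- a "monetary" liability acceptable as payment.

bank = {"assets": {"customer_IOUs": 0.0}, "liabilities": {"deposits": 0.0}}

def borrow(amount):
    """Customer swaps his money-backing IOU for newly created deposit money."""
    bank["assets"]["customer_IOUs"] += amount
    bank["liabilities"]["deposits"] += amount   # stock of inside money rises

def repay(amount):
    """Reflux: repaying the loan retires the deposit money from circulation."""
    bank["assets"]["customer_IOUs"] -= amount
    bank["liabilities"]["deposits"] -= amount   # stock of inside money falls

borrow(100.0)
print(bank["liabilities"]["deposits"])   # 100.0: inside money created
repay(40.0)
print(bank["liabilities"]["deposits"])   # 60.0: excess money refluxes to the bank
```

The balance sheet stays balanced at every step, which is the sense in which the quantity of inside money is determined by a market process between the public and the banks rather than fixed from outside.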

The rest of Nick’s post dwells on the problems caused by “price stickiness.” I may try to write another post soon about “price stickiness,” so I will just make a brief further comment about one further statement made by Nick:

Unable to increase the flow of money into their portfolios, each individual reduces the flow of money out of his portfolio.

And my comment is simply that Nick is begging the question here. He is assuming that there is no market mechanism by which individuals can increase the flow of money into their portfolios. But that is clearly not true, because most of the money in the hands of the public now was created by a process in which individuals increased the flow of money into their portfolios by exchanging their own “money-backing” IOUs with banks in return for the “monetary” IOUs created by banks.

The endogenous process by which the quantity of monetary IOUs created by the banking system corresponds to the amount of monetary IOUs that the public wants to hold at any moment of time is what is known as the Law of Reflux. Nick may believe — and may even be right — that the Law of Reflux is invalid, but if that is what Nick believes, he needs to make an argument, not assume a conclusion.

Where Do Monetary Rules Come From and How Do They Work?

In my talk last week at the Mercatus Conference on Monetary Rules for a Post-Crisis World, I discussed how monetary rules and the thinking about monetary rules have developed over time. The point that I started with was that monetary rules become necessary only when the medium of exchange has a value that exceeds the cost of producing the medium of exchange. You don’t need a monetary rule if money is a commodity; people just trade stuff for stuff; it’s not barter, because everyone accepts one real commodity, making that commodity the medium of exchange. But there’s no special rule governing the monetary system beyond the rules that govern all forms of exchange. The first monetary rule came along only when something worth more than its cost of production was used as money. This might have happened when people accepted minted coins at face value, even though the coins were not full-bodied. But that situation was not a stable equilibrium, because eventually Gresham’s Law kicks in, and the bad money drives out the good, so that the value of coins drops to their metallic value rather than their face value. So no real monetary rule was operating to control the value of coinage in situations where the coinage was debased.

So the idea of an actual monetary rule to govern the operation of a monetary system only emerged when banks started to issue banknotes. Banknotes having a negligible cost of production, a value in excess of that negligible cost could be imparted to those essentially worthless banknotes only by banks undertaking a commitment — a legally binding obligation — to make those banknotes redeemable (convertible) for a fixed weight of gold or silver or some other valuable material whose supply was not under the control of the bank itself. This convertibility commitment can be thought of as a kind of rule, but convertibility was not originally undertaken as a policy rule; it was undertaken simply as a business expedient; it was the means by which banks could create a demand for the banknotes that they wanted to issue to borrowers so that they could engage in the profitable business of financial intermediation.

It was in 1797, during the early stages of the British-French wars after the French Revolution, when, the rumor of a French invasion having led to a run on Bank of England notes, the British government prohibited the Bank of England from redeeming its banknotes for gold, and made banknotes issued by the Bank of England legal tender. The subsequent premium on gold in Continental commodity markets in terms of sterling – what was called the high price of bullion – led to a series of debates which engaged some of the finest economic minds in Great Britain – notably David Ricardo and Henry Thornton – over the causes and consequences of the high price of bullion and, if a remedy was in fact required, the appropriate policy steps to be taken to administer that remedy.

There is a vast literature on the many-sided Bullionist debates as they are now called, but my only concern here is with the final outcome of the debates, which was the appointment of a Parliamentary Commission, which included none other than the great Henry Thornton himself, and two less renowned colleagues, William Huskisson and Francis Horner, who collaborated to write a report published in 1811 recommending the speedy restoration of convertibility of Bank of England notes. The British government and Parliament were unwilling to follow the recommendation while war with France was ongoing; however, there was a broad consensus in favor of the restoration of convertibility once the war was over.

After Napoleon’s final defeat in 1815, the process of restoring convertibility was begun with the intention of restoring the pre-1797 conversion rate between banknotes and gold. Parliament in fact enacted a statute defining the pound sterling as a fixed weight of gold. By 1819, the value of sterling had risen to its prewar level, and in 1821 the legal obligation of the Bank of England to convert its notes into gold was reinstituted. So the first self-consciously adopted monetary rule was the Parliamentary decision to restore the convertibility of banknotes issued by the Bank of England into a fixed weight of gold.

However, the widely held expectations that the restoration of convertibility of banknotes issued by the Bank of England into gold would produce a stable monetary regime and a stable economy were quickly disappointed, financial crises and depressions occurring in 1825 and again in 1836. To explain the occurrence of these unexpected financial crises and periods of severe economic distress, a group of monetary theorists advanced a theory based on David Hume’s discussion of the price-specie-flow mechanism in his essay “Of the Balance of Trade,” in which he explained the automatic tendency toward equilibrium in the balance of trade and stocks of gold and precious metals among nations. Hume carried out his argument in terms of a fully metallic (gold) currency, and, in other works, Hume decried the tendency of banks to issue banknotes to excess, thereby causing inflation and economic disturbances.

So the conclusion drawn by these monetary theorists was that the Humean adjustment process would work smoothly only if gold shipments into Britain or out of Britain would result in a reduction or increase in the quantity of banknotes exactly equal to the amount of gold flowing into or out of Britain. It was the failure of the Bank of England and the other British banks to follow the Currency Principle – the idea that the total amount of currency in the country should change by exactly the same amount as the total quantity of gold reserves in the country – that had caused the economic crises and disturbances marking the two decades since the resumption of convertibility in 1821.

Those advancing this theory of economic fluctuations and financial crises were known as the Currency School, and they succeeded in persuading Sir Robert Peel, the Prime Minister, to support legislation to require the Bank of England and the other British banks to abide by the Currency Principle. This was done by capping the note issue of all banks other than the Bank of England at existing levels and allowing the Bank of England to increase its issue of banknotes only upon deposit of a corresponding quantity of gold bullion. The result was in effect to impose a 100% marginal reserve requirement on the entire British banking system. Opposition to the Currency School largely emanated from what came to be known as the Banking School, whose most profound theorist was John Fullarton, who formulated the law of reflux, which focused attention on the endogenous nature of the issue of banknotes by commercial banks. According to Fullarton and the Banking School, the issue of banknotes by the banking system was not a destabilizing and disequilibrating disturbance, but a response to the liquidity demands of traders and dealers. Once these liquidity demands were satisfied, the excess banknotes, returning to the banks in the ordinary course of business, would be retired from circulation unless there was a further demand for liquidity from some other source.

The Humean analysis, abstracting from any notion of a demand for liquidity, was therefore no guide to the appropriate behavior of the quantity of banknotes. Imposing a 100% marginal reserve requirement on the supply of banknotes would make it costly for traders and dealers to satisfy their demands for liquidity in times of financial stress; rather than eliminate monetary disturbances, the statutory enactment of the Currency Principle would be an added source of financial disturbance and disorder.

With the support of Robert Peel and his government, the arguments of the Currency School prevailed, and the Bank Charter Act was enacted in 1844. In 1847, despite the hopes of its supporters that an era of financial tranquility would follow, a new financial crisis occurred, and it was not quelled until the government suspended the Bank Charter Act, thereby enabling the Bank of England to lend to dealers and traders to satisfy their demands for liquidity. Crises recurred in 1857 and 1866, and neither could be brought under control until the government again suspended the Bank Charter Act.

So British monetary history in the first half of the nineteenth century provides us with two paradigms of monetary rules. The first is a price rule, in which the value of a monetary instrument is maintained at a level above its cost of production by way of a convertibility commitment. Given the convertibility commitment, the actual quantity of the monetary instrument that is issued is whatever quantity the public wishes to hold. That, at any rate, was the theory of the gold standard. There were – and are – at least two basic problems with that theory. First, making the value of money equal to the value of gold does not imply that the value of money will be stable unless the value of gold is stable, and there is no necessary reason why the value of gold should be stable. Second, the behavior of the banking system may itself destabilize the value of gold, e.g., in periods of distress when the public loses confidence in the solvency of banks and banks simultaneously increase their demands for gold. The resulting increase in the monetary demand for gold drives up the value of gold, triggering a vicious cycle in which the attempt by each to increase his own liquidity impairs the solvency of all.

The second rule is a quantity rule, in which the gold standard is forced to operate in a way that prevents the money supply from adjusting freely to variations in the demand for money. Such a rule makes sense only if one ignores or denies the possibility that the demand for money can change suddenly and unpredictably. The quantity rule is neither necessary nor sufficient for the gold standard or any monetary standard to operate. In fact, it is an implicit assertion that the gold standard or any metallic standard cannot operate, the operation of profit-seeking private banks and their creation of banknotes and deposits being inconsistent with the maintenance of a gold standard. But this is really a demand for abolition of the gold standard in which banknotes and deposits draw their value from a convertibility commitment, and its replacement by a pure gold currency in which there is no distinction between gold and banknotes or deposits, banknotes and deposits being nothing more than receipts for an equivalent physical amount of gold held in reserve. That is the monetary system that the Currency School aimed at achieving. However, because the 100% reserve requirement was imposed only on banknotes, deposits were left unconstrained, thereby paving the way for a gradual revolution in British banking practices between 1844 and about 1870, so that by 1870 the bulk of cash held in Great Britain was held in the form of deposits, not banknotes, and the bulk of business transactions were carried out by check, not banknotes.

So Milton Friedman was working entirely within the Currency School monetary tradition, formulating a monetary rule in terms of a fixed quantity rather than a fixed price. And, in ultimately rejecting the gold standard, Friedman was merely following the Currency School's logic to its conclusion, because what ultimately matters is the quantity rule, not the price rule. For the Currency School, the price rule was redundant, a fifth wheel; the real work was done by the 100% marginal reserve requirement. Friedman therefore saw the gold standard as an unnecessary and even dangerous distraction from the ultimate goal of keeping the quantity of money under strict legal control.

It is in the larger context of Friedman's position on 100% reserve banking, of which he remained an advocate until he shifted to the k-percent rule in the early 1960s, that his anomalous description of the classical gold standard of the late nineteenth century up to World War I as a pseudo-gold standard can be understood. What Friedman described as a real gold standard was a system in which only physical gold, and banknotes and deposits representing corresponding holdings of physical gold, circulate as media of exchange. But this is not a gold standard that has ever existed, so what Friedman called a real gold standard was actually just the gold standard of his hyperactive imagination.

Mercatus Center Conference on Monetary Rules for a Post-Crisis World

It’s been almost three weeks since my last post, which I think is my longest dry spell since I started blogging a little over five years ago. Aside from taking it a little easy during this really hot summer in Washington DC, I have been working on the paper I am supposed to present tomorrow at the Mercatus Center Conference on Monetary Rules for a Post-Crisis World.

After Scott Sumner opens the conference with welcoming remarks at 9AM, I will be speaking in the first panel starting at 9:10 AM. My paper is entitled “Rules versus Discretion, Historically Contemplated.” I hope soon to write a post summarizing some of what I have to say and to post a link to a draft of the paper. The conference proceedings are to be published in a forthcoming issue of the Journal of Macroeconomics.

I’m especially pleased to be on the same panel as one of my all-time favorite economists, David Laidler. That’s almost enough to lift me out of my chronic depression about the November elections. Other speakers include Mark Calabria, Robert Hetzel, David Papell, Scott Sumner, John Taylor, Perry Mehrling, Kevin Sheedy, Walker Todd, David Beckworth, Miles Kimball, and Peter Ireland. A stellar cast, indeed. You can watch a live stream here.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
