
Samuelson Rules the Seas

I think Nick Rowe is a great economist; I really do. And on top of that, he recently has shown himself to be a very brave economist, fearlessly claiming to have shown that Paul Samuelson’s classic 1980 takedown (“A Corrected Version of Hume’s Equilibrating Mechanisms for International Trade“) of David Hume’s classic 1752 articulation of the price-specie-flow mechanism (PSFM) (“Of the Balance of Trade“) was all wrong. Although I am a great admirer of Paul Samuelson, I am far from believing that he was error-free. But I would be very cautious about attributing an error in pure economic theory to Samuelson. So if you were placing bets, Nick would certainly be the longshot in this match-up.

Of course, I should admit that I am not an entirely disinterested observer of this engagement, because in the early 1970s, long before I discovered the Samuelson article that Nick is challenging, Earl Thompson had convinced me that Hume’s account of PSFM was all wrong, the international arbitrage of tradable-goods prices implying that gold movements between countries couldn’t cause the price levels of those countries in terms of gold to deviate from a common level by more than the limits imposed by the costs of international commodity arbitrage. And Thompson’s reasoning was largely restated in the ensuing decade by Jacob Frenkel and Harry Johnson (“The Monetary Approach to the Balance of Payments: Essential Concepts and Historical Origins”) and by Donald McCloskey and Richard Zecher (“How the Gold Standard Really Worked”), both in the 1976 volume on The Monetary Approach to the Balance of Payments edited by Johnson and Frenkel, and by David Laidler in his essay “Adam Smith as a Monetary Economist,” explaining why in The Wealth of Nations Smith ignored his best friend Hume’s classic essay on PSFM. So the main point of Samuelson’s takedown of Hume and the PSFM was not even original. What was original about Samuelson’s classic article was his dismissal of the rationalization that PSFM applies when there are both non-tradable and tradable goods, so that national price levels can deviate from the common international price level in terms of tradables, showing that the inclusion of non-tradables in the analysis serves only to slow down the adjustment process after a gold-supply shock.

So let’s follow Nick in his daring quest to disprove Samuelson, and see where that leads us.

Assume that durable sailing ships are costly to build, but have low (or zero for simplicity) operating costs. Assume apples are the only tradeable good, and one ship can transport one apple per year across the English Channel between Britain and France (the only countries in the world). Let P be the price of apples in Britain, P* be the price of apples in France, and R be the annual rental of a ship, (all prices measured in gold), then R=ABS(P*-P).

I am sorry to report that Nick has not gotten off to a good start here. There cannot be only one tradable good. It takes two to tango and two to trade. If apples are being traded, they must be traded for something, and that something is something other than apples. And, just to avoid misunderstanding, let me say that that something is also something other than gold. Otherwise, there couldn’t possibly be a difference between the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique of PSFM and the PSFM. We need at least three goods – two real goods plus gold – providing a relative price between the two real goods and two absolute prices quoted in terms of gold (the numeraire). So if there are at least two absolute prices, then Nick’s equation for the annual rental of a ship R must be rewritten as follows: R=ABS[P(A)*-P(A)+P(SE)*-P(SE)], where P(A) is the price of apples in Britain, P(A)* is the price of apples in France, P(SE) is the price of something else in Britain, and P(SE)* is the price of that same something else in France.
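To make the accounting concrete, here is a minimal numerical sketch of the rewritten rental formula; the prices are hypothetical and chosen purely for illustration.

```python
# Hypothetical gold prices of each good in Britain and France (illustration only).
P_A,  P_A_star  = 10.0, 12.0   # apples in Britain, apples in France
P_SE, P_SE_star = 20.0, 21.0   # "something else" in Britain and in France

# Nick's one-good rental formula: R = ABS(P* - P)
R_one_good = abs(P_A_star - P_A)

# The rewritten two-good formula from the text:
# R = ABS[P(A)* - P(A) + P(SE)* - P(SE)]
R_two_goods = abs((P_A_star - P_A) + (P_SE_star - P_SE))

print(R_one_good)   # 2.0
print(R_two_goods)  # 3.0 -- with two goods sharing the voyage, a given shipbuilding
                    # cost can be covered by smaller price gaps in each good
```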

OK, now back to Nick:

In this model, the Law of One Price (P=P*) will only hold if the volume of exports of apples (in either direction) is unconstrained by the existing stock of ships, so rentals on ships are driven to zero. But then no ships would be built to export apples if ship rentals were expected to be always zero, which is a contradiction of the Law of One Price because arbitrage is impossible without ships. But an existing stock of ships represents a sunk cost (sorry) and they keep on sailing even as rentals approach zero. They sail around Samuelson’s Iceberg model (sorry) of transport costs.

This is a peculiar result in two respects. First, it suggests, perhaps inadvertently, that the law of one price requires equality between the prices of goods in every location when in fact it only requires that prices in different locations not differ by more than the cost of transportation. The second, more serious, peculiarity is that with only one good being traded, the price difference in that single good between the two locations has to be sufficient to cover the cost of building the ship. That suggests that there has to be a very large price difference in that single good to justify building the ship, but in fact there are at least two goods being shipped, so it is the sum of the price differences of the two goods that must be sufficient to cover the cost of building the ship. The more tradable goods there are, the smaller the price differences in any single good necessary to cover the cost of building the ship.
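To state the point compactly, and in my own notation rather than Nick’s or Samuelson’s: if c is the annualized cost of a ship and n tradable goods share the voyage, with each good shipped toward the country where its price is higher, new ships are built only when

\[
\sum_{i=1}^{n} \left| P_i^{*} - P_i \right| \;\ge\; c ,
\]

so the average price gap per good need be no larger than c/n. The more goods there are, the smaller the gap in any one of them that is required to make shipbuilding worthwhile.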

Again, back to Nick:

Start with zero exports, zero ships, and P=P*. Then suppose, like Hume, that some of the gold in Britain magically disappears. (And unlike Hume, just to keep it simple, suppose that gold magically reappears in France.)

Uh-oh. Just to keep it simple? I don’t think so. To me, keeping it simple would mean looking at one change in initial conditions at a time. The one relevant change – the one discussed by Hume – is a reduction in the stock of gold in Britain. But Nick is looking at two changes — a reduced stock of gold in Britain and an increased stock of gold in France — simultaneously. Why does it matter? Because the key point at issue is whether a national price level – i.e., Britain’s — can deviate from the international price level. In Nick’s two-country example, there should be one national price level and one international price level, which means that the only price level subject to change as a result of the change in initial conditions should be, as in Hume’s example, the British price level, while the French price level – representing the international price level – remains constant. In a two-country model, this can only be made plausible by assuming that France is large compared to Britain, so that a loss of gold could potentially affect the British price level without changing the French price level. Once again back to Nick.

The price of apples in Britain drops, the price of apples in France rises, and so the rent on a ship is now positive because you can use it to export apples from Britain to France. If that rent is big enough, and expected to stay big long enough, some ships will be built, and Britain will export apples to France in exchange for gold. Gold will flow from France to Britain, so the stock of gold will slowly rise in Britain and slowly fall in France, and the price of apples will likewise slowly rise in Britain and fall in France, so ship rentals will slowly fall, and the price of ships (the Present Value of those rents) will eventually fall below the cost of production, so no new ships will be built. But the ships already built will keep on sailing until rentals fall to zero or they rot (whichever comes first).

So notice what Nick has done. Instead of confronting the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique of Hume, which asserts that a world price level determines the national price level, Nick has simply begged the question by not assuming that the world value of gold, which determines the world price level, is constant. Instead, he posits a decreased value of gold in France, owing to an increased French stock of gold, and an increased value of gold in Britain, owing to a decreased British stock of gold, and then conflates the resulting adjustment in the value of gold with the operation of commodity arbitrage. Why Nick thinks his discussion is relevant to the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique escapes me.

The flow of exports and hence the flow of specie is limited by the stock of ships. And only a finite number of ships will be built. So we observe David Hume’s price-specie flow mechanism playing out in real time.

This bugs me. Because it’s all sorta obvious really.

Yes, it bugs me, too. And, yes, it is obvious. But why is it relevant to the question under discussion, which is whether there is an international price level in terms of gold that constrains movements in national price levels in countries in which gold is the numeraire? In other words, if there is a shock to the gold stock of a small open economy, how much will the price level in that small open economy change? By the percentage change in the stock of gold in that country – as Hume maintained – or by the minuscule percentage change in the international stock of gold, prices in the country that has lost gold being constrained from changing by more than the cost of arbitrage operations allows? Nick’s little example is simply orthogonal to the question under discussion.
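To see how far apart the two answers are, here is a stylized quantity-theory calculation; the magnitudes are invented solely for illustration.

```python
# Stylized example (all numbers hypothetical): suppose Britain holds 5% of the
# world's monetary gold and loses 20% of its own stock.
world_gold   = 100.0
britain_gold = 5.0
gold_lost    = 0.20 * britain_gold

# Hume's answer: the British price level falls roughly in proportion to Britain's loss.
hume_answer = -gold_lost / britain_gold          # -20%

# The arbitrage answer: only the world stock matters for the common gold-denominated
# price level, so prices everywhere fall by the world-stock percentage.
arbitrage_answer = -gold_lost / world_gold       # -1%

print(hume_answer, arbitrage_answer)
```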

I skip Nick’s little exegetical discussion of Hume’s essay and proceed to what I think is the final substantive point that Nick makes.

Prices don’t just arbitrage themselves. Even if we take the limit of my model, as the cost of building ships approaches zero, we need to explain what process ensures the Law of One Price holds in equilibrium. Suppose it didn’t…then people would buy low and sell high…..you know the rest.

There are different equilibrium conditions being confused here. The equilibrium arbitrage conditions are not the same as the conditions for international monetary equilibrium. Arbitrage conditions for individual commodities can hold even if the international distribution of gold is not in equilibrium. So I really don’t know what conclusion Nick is alluding to here.

But let me end on what I hope is a conciliatory and constructive note. As always, Nick is making an insightful argument, even if it is misplaced in the context of Hume and PSFM. And the upshot of Nick’s argument is that transportation costs are a function of the dispersion of prices, because, as the incentive to ship products to capture arbitrage profits increases, the cost of shipping will increase as arbitragers bid up the value of resources specialized to the processes of transporting stuff. So the assumption that the cost of transportation can be treated as a parameter is not really valid, which means that the constraints imposed on national price level movements are not really parametric; they are endogenously determined within an appropriately specified general equilibrium model. If Nick is willing to settle for that proposition, I don’t think that our positions are that far apart.

Cyclical versus Secular Causes of Stagnation

Nick Rowe and Scott Sumner have recently had an interesting little debate about whether the slowdown in real GDP growth and labor productivity since the 2007-09 downturn is the result of cyclical or secular factors. Nick argues that successful inflation targeting in the two decades before the 2007 downturn had given rise to entrepreneurial expectations of stable aggregate demand, thereby providing a supportive macroeconomic environment for long-term investment that generates rising labor productivity over time. By undermining confidence in macroeconomic stability, the 2007-09 downturn diminished the willingness of businesses to continue making long-term investments and thus compromised one of the institutional pillars supporting long-term investment and productivity growth. Despite a recovery, expectations of future aggregate demand are now held with less confidence – higher perceived variance – than previously, thereby reducing entrepreneurial willingness to commit to the long-term capital expansion that increases productivity.

Scott is skeptical of the argument, because productivity growth had already started to decline after the 2001 downturn. Of course, one could argue that geopolitical uncertainty after the 9/11 attack and the invasions of Afghanistan and Iraq could have had a similar depressing effect on investment well before the 2007 downturn. So the decline in productivity growth that was underway at the time of the 2007 downturn is not necessarily inconsistent with Nick’s basic story. But Scott at least partially defends himself against that response by showing that real long-term investment as a share of GDP rose sharply after the 2001 downturn and was well above the levels of the 1950s and 1960s.

Seeing no reason why the pace of productivity growth couldn’t have been affected by both cyclical and secular forces, I am happy to agree with both Nick and Scott. But I also have my own theory about the slowdown in productivity growth, which I have discussed previously, so this seems like a good time to weigh in again on the topic. As I pointed out in a 2015 post, one characteristic that distinguishes the 2007-09 downturn from earlier downturns is that it was associated with relatively large sectoral shifts in demand. Thus, the 2007-09 downturn was characterized by a higher percentage of jobs lost in the downturn that were not subsequently restored than was the case in earlier downturns. In earlier downturns, the decline in aggregate demand caused workers to be laid off temporarily from their jobs when demand and output fell, but a large percentage of laid-off workers were later rehired by their former employers when demand and output recovered. And even many of those laid-off workers that weren’t rehired by their previous employers still eventually found jobs doing work very similar to what they had been doing before losing their old jobs.

The depth and the severity of recessions can be measured not just by the unemployment rate, but also by the long-term unemployment rate. What set the 2007-09 downturn and the recovery apart from earlier downturns — even the 1981-82 downturn, in which the unemployment rate rose to almost 11 percent, higher than the 10 percent rate at the depth of the 2007-09 downturn – was a long-term unemployment rate substantially higher, followed by a slower rate of decline, than in any other post-World-War II downturn. I quote from a recent article on long-term unemployment:

In January 2017, there were 1.85 million long-term unemployed. The number first dropped below two million in May 2015. That means 24.2 percent of the unemployed have been looking for work for six months. That’s better than the record high of 46 percent in the second quarter of 2010.

Sadly, it’s barely better than the darkest days of the 1981 recession. At that point, 26 percent of the unemployed were out of work for more than six months. On the other hand, total unemployment was worse than it is today. There was a 10.8 percent overall unemployment rate. In other words, the Great Recession created a higher percent of long-term unemployment. (Source: “Potential Causes and Implications of the Rise in Long-Term Unemployment,” The Federal Reserve Bank of Richmond, September 2011.)

Here’s how I put it in 2015.

[T]he 2008-09 downturn was associated with major sectoral shifts that caused an unusually large reallocation of labor from industries like construction and finance to other industries so that an unusually large number of workers have had to find new jobs doing work different from what they were doing previously. In many recessions, laid-off workers are either re-employed at their old jobs or find new jobs doing basically the same work that they had been doing at their old jobs. When workers transfer from one job to another similar job, there is little reason to expect a decline in their productivity after they are re-employed, but when workers are re-employed doing something very different from what they did before, a significant drop in their productivity in their new jobs is likely.

In addition, the number of long-term unemployed (27 weeks or more) since the 2008-09 downturn has been unusually high. Workers who remain unemployed for an extended period of time tend to suffer an erosion of skills, causing their productivity to drop when they are re-employed even if they are able to find a new job in their old occupation. It seems likely that the percentage of long-term unemployed workers that switch occupations is larger than the percentage of short-term unemployed workers that switch occupations, so the unusually high rate of long-term unemployment has probably had a doubly negative effect on labor productivity.

Long-term unemployment has adverse effects on health and many other metrics of well-being, effects that aren’t confined to the unemployed, but extend to their families, friends and communities. An increase in long-term unemployment, even if originally caused by an aggregate demand shock, is associated with a long-term negative supply shock. So it’s not surprising that the unusually and persistently high rate of long-term unemployment after the 2007-09 downturn, causing a massive loss of human capital, has depressed the subsequent growth in labor productivity. In my 2015 post, I tried to provide an optimistic interpretation of this phenomenon, but my optimism was misplaced, because the damage inflicted by long-term unemployment is very often irreversible, and rates of long-term unemployment have remained stubbornly high notwithstanding the steady decline in the overall unemployment rate.

Accounting for a disproportionate share of the long-term unemployed, discouraged older workers, chronically unable to find new jobs, have prematurely departed from the labor force. These older workers have presumably been replaced by younger entrants into the labor force, and one would suppose that the productivity of the younger workers is, on average, substantially lower than the productivity of the older and more experienced workers whom they have replaced, though presumably as they gain experience and acquire skills, the productivity of new workers will rise over time. Thus the demographic shift in the labor force is another reason for the low productivity growth since the 2007-09 downturn. But that effect, though largely demographic, has also had a cyclical component, making it difficult to disentangle the cyclical from the secular causes of sluggish productivity growth.

That difficulty is further compounded by another contributory cause of slow productivity growth. In my 2016 post, I discussed the late Walter Oi’s idea that labor is not really a variable factor of production as it is typically treated in simplified models, but a quasi-fixed factor. Here’s how Oi explained the idea:

For analytic purposes fixed employment costs can be separated into two categories called, for convenience, hiring and training costs. Hiring costs are defined as those costs that have no effect on a worker’s productivity and include outlays for recruiting, for processing payroll records, and for supplements such as unemployment compensation. These costs are closely related to the number of new workers and only indirectly related to the flow of labor’s services. Training expenses, on the other hand, are investments in the human agent, specifically designed to improve a worker’s productivity.

The training activity typically entails direct money outlays as well as numerous implicit costs such as the allocation of old workers to teaching skills and rejection of unqualified workers during the training period.

So, if the 2007-09 downturn and the recovery were associated with an unusually high flow of workers from old jobs into new jobs, there has been an unusually high level of training expenses incurred by firms as they have brought workers into new jobs. The large investments by firms in training new workers have inevitably caused measured labor productivity to lag below previous trends, which were established when the fraction of workers entering the labor force or requiring new training to learn new skills was likely smaller than it has been since 2009. This idea, at any rate, does provide some reason to hope for at least a modest improvement in productivity and economic growth over time, even if the human cost of almost a decade of extremely high long-term unemployment is now largely irremediable and irretrievable.
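A toy calculation, with made-up numbers, shows how a larger-than-usual share of newly trained workers can depress measured productivity even if no individual worker has become any less capable.

```python
# Toy illustration (all numbers hypothetical): average measured productivity when
# some share of the workforce consists of recently reallocated workers still
# learning their new jobs.
experienced_productivity = 1.00   # output per hour of a fully trained worker
trainee_productivity     = 0.60   # output per hour while learning a new occupation

def measured_productivity(share_in_training):
    # Weighted average across trained workers and workers still in training.
    return ((1 - share_in_training) * experienced_productivity
            + share_in_training * trainee_productivity)

print(measured_productivity(0.05))  # 0.98 -- a recovery with mostly recalled workers
print(measured_productivity(0.15))  # 0.94 -- a recovery requiring large-scale reallocation
```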

Richard Lipsey and the Phillips Curve Redux

Almost three and a half years ago, I published a post about Richard Lipsey’s paper “The Phillips Curve and the Tyranny of an Assumed Unique Macro Equilibrium.” The paper, originally presented at the 2013 meeting of the History of Economics Society, has just been published in the Journal of the History of Economic Thought, with a slightly revised title “The Phillips Curve and an Assumed Unique Macroeconomic Equilibrium in Historical Context.” The abstract of the revised published version of the paper is different from the earlier abstract included in my 2013 post. Here is the new abstract:

An early post-WWII debate concerned the most desirable demand and inflationary pressures at which to run the economy. Context was provided by Keynesian theory devoid of a full employment equilibrium and containing its mainly forgotten, but still relevant, microeconomic underpinnings. A major input came with the estimates provided by the original Phillips curve. The debate seemed to be rendered obsolete by the curve’s expectations-augmented version with its natural rate of unemployment, and associated unique equilibrium GDP, as the only values consistent with stable inflation. The current behavior of economies with successful inflation targeting is inconsistent with this natural-rate view, but is consistent with evolutionary theory in which economies have a wide range of GDP compatible with stable inflation. Now the early post-WWII debates are seen not to be as misguided as they appeared to be when economists came to accept the assumptions implicit in the expectations-augmented Phillips curve.

Publication of Lipsey’s article nicely coincides with Roger Farmer’s new book Prosperity for All, which I discussed in my previous post. A key point that Roger makes is that the assumption of a unique equilibrium, which underlies modern macroeconomics and the vertical long-run Phillips Curve, is neither theoretically compelling nor consistent with the empirical evidence. Lipsey’s article powerfully reinforces those arguments. Access to Lipsey’s article is gated on the JHET website, so in addition to the abstract, I will quote the introduction and a couple of paragraphs from the conclusion.

One important early post-WWII debate, which took place particularly in the UK, concerned the demand and inflationary pressures at which it was best to run the economy. The context for this debate was provided by early Keynesian theory with its absence of a unique full-employment equilibrium and its mainly forgotten, but still relevant, microeconomic underpinnings. The original Phillips Curve was highly relevant to this debate. All this changed, however, with the introduction of the expectations-augmented version of the curve with its natural rate of unemployment, and associated unique equilibrium GDP, as the only values consistent with a stable inflation rate. This new view of the economy found easy acceptance partly because most economists seem to feel deeply in their guts — and their training predisposes them to do so — that the economy must have a unique equilibrium to which market forces inevitably propel it, even if the approach is sometimes, as some believe, painfully slow.

The current behavior of economies with successful inflation targeting is inconsistent with the existence of a unique non-accelerating-inflation rate of unemployment (NAIRU) but is consistent with evolutionary theory in which the economy is constantly evolving in the face of path-dependent, endogenously generated, technological change, and has a wide range of unemployment and GDP over which the inflation rate is stable. This view explains what otherwise seems mysterious in the recent experience of many economies and makes the early post-WWII debates not seem as silly as they appeared to be when economists came to accept the assumption of a perfectly inelastic, long-run Phillips curve located at the unique equilibrium level of unemployment. One thing that stands in the way of accepting this view, however, is the tyranny of the generally accepted assumption of a unique, self-sustaining macroeconomic equilibrium.

This paper covers some of the key events in the theory concerning, and the experience of, the economy’s behavior with respect to inflation and unemployment over the post-WWII period. The stage is set by the pressure-of-demand debate in the 1950s and the place that the simple Phillips curve came to play in it. The action begins with the introduction of the expectations-augmented Phillips curve and the acceptance by most Keynesians of its implication of a unique, self-sustaining macro equilibrium. This view seemed not inconsistent with the facts of inflation and unemployment until the mid-1990s, when the successful adoption of inflation targeting made it inconsistent with the facts. An alternative view is proposed, one that is capable of explaining current macro behavior and reinstates the relevance of the early pressure-of-demand debate. (pp. 415-16).

In reviewing the evidence that stable inflation is consistent with a range of unemployment rates, Lipsey generalizes the concept of a unique NAIRU to a non-accelerating-inflation band of unemployment (NAIBU) within which multiple rates of unemployment are consistent with a basically stable expected rate of inflation. In an interesting footnote, Lipsey addresses a possible argument against the relevance of the empirical evidence for policy makers based on the Lucas critique.

Some might raise the Lucas critique here, arguing that one finds the NAIBU in the data because policymakers are credibly concerned only with inflation. As soon as policymakers made use of the NAIBU, the whole unemployment-inflation relation that has been seen since the mid-1990s might change or break. For example, unions, particularly in the European Union, where they are typically more powerful than in North America, might alter their behavior once they became aware that the central bank was actually targeting employment levels directly and appeared to have the power to do so. If so, the Bank would have to establish that its priorities were lexicographically ordered with control of inflation paramount so that any level-of-activity target would be quickly dropped whenever inflation threatened to go outside of the target bands. (pp. 426-27)

I would just mention in this context that in this 2013 post about the Lucas critique, I pointed out that in the paper in which Lucas articulated his critique, he assumed that the only possible source of disequilibrium was a mistake in expected inflation. If everything else is working well, causing inflation expectations to be incorrect will make things worse. But if there are other sources of disequilibrium, it is not clear that incorrect inflation expectations will make things worse; they could make things better. That is a point that Lipsey and Kelvin Lancaster taught the profession in a classic article “The General Theory of Second Best,” 20 years before Lucas published his critique of econometric policy evaluation.

I conclude by quoting Lipsey’s penultimate paragraph (the final paragraph being a quote from Lipsey’s paper on the Phillips Curve from the Blaug and Lloyd volume Famous Figures and Diagrams in Economics, which I quoted in full in my 2013 post):

So we seem to have gone full circle from the early Keynesian view in which there was no unique level of GDP to which the economy was inevitably drawn, through a simple Phillips curve with its implied trade-off, to an expectations-augmented Phillips curve (or any of its more modern equivalents) with its associated unique level of GDP, and finally back to the early Keynesian view in which policymakers had an option as to the average pressure of aggregate demand at which economic activity could be sustained. However, the modern debate about whether to aim for [the high or low range of stable unemployment rates] is not a debate about inflation versus growth, as it was in the 1950s, but between those who would risk an occasional rise of inflation above the target band as the price of getting unemployment as low as possible and those who would risk letting unemployment fall below that indicated by the lower boundary of the NAIBU as the price of never risking an acceleration of inflation above the target rate. (p. 427)

Roger Farmer’s Prosperity for All

I have just read a review copy of Roger Farmer’s new book Prosperity for All, which distills many of Roger’s very interesting ideas into a form which, though readable, is still challenging — at least, it was for me. There is a lot that I like and agree with in Roger’s book, and the fact that he is a UCLA economist, though he came to UCLA after my departure, is certainly a point in his favor. So I will begin by mentioning some of the things that I really liked about Roger’s book.

What I like most is that he recognizes that beliefs are fundamental, which is almost exactly what I meant when I wrote this post (“Expectations Are Fundamental”) five years ago. The point I wanted to make is that the idea that there is some fundamental existential reality that economic agents try to — and, if they are rational, will — perceive is a gross and misleading oversimplification, because expectations themselves are part of reality. In a world in which expectations are fundamental, the Keynesian beauty-contest theory of expectations and stock prices (described in chapter 12 of The General Theory) is not absurd, as it is widely considered to be by believers in the efficient market hypothesis. The almost universal unprofitability of simple trading rules or algorithms is not inconsistent with a market process in which the causality between prices and expectations goes in both directions, in which case anticipating expectations is no less rational than anticipating future cash flows.

One of the treats of reading this book is Farmer’s recollections of his time as a graduate student at Penn in the early 1980s, when David Cass, Karl Shell, and Costas Azariadis were developing their theory of sunspot equilibrium in which expectations are self-fulfilling, an idea skillfully deployed by Roger to revise the basic New Keynesian model and reorient it along a very different path from the standard New Keynesian one. I am sympathetic to that reorientation, and the main reason for it is that Roger rejects the idea that there is a unique equilibrium to which the economy automatically reverts on its own, albeit somewhat more slowly than if speeded along by the appropriate monetary policy. The notion that there is a unique equilibrium to which the economy automatically reverts is an assumption with no basis in theory or experience. The most that the natural-rate hypothesis can tell us is that if an economy is operating at its natural rate of unemployment, monetary expansion cannot permanently reduce the rate of unemployment below that natural rate. Eventually — once economic agents come to expect that the monetary expansion and the correspondingly higher rate of inflation will be maintained indefinitely — the unemployment rate must revert to the natural rate. But the natural-rate hypothesis does not tell us that monetary expansion cannot reduce unemployment when the actual unemployment rate exceeds the natural rate, although it is often misinterpreted as making that assertion.

In his book, Roger takes the anti-natural-rate argument a step further, asserting that the natural rate of unemployment is not unique. There is actually a range of unemployment rates at which the economy can permanently remain; which of those alternative natural rates the economy winds up at depends on the expectations held by the public about nominal future income. The higher expected future income, the greater consumption spending and, consequently, the greater employment. Things are a bit more complicated than I have just described them, because Roger also believes that consumption depends not on current income but on wealth. However, in the very simplified model with which Roger operates, wealth depends on expectations about future income. The more optimistic people are about their income-earning opportunities, the higher asset values; the higher asset values, the wealthier the public, and the greater consumption spending. The relationship between current income and expected future income is what Roger calls the belief function.

Thus, Roger juxtaposes a simple New Keynesian model against his own monetary model. The New Keynesian model consists of 1) an investment equals saving equilibrium condition (IS curve) describing the optimal consumption/savings decision of the representative individual as a locus of combinations of expected real interest rates and real income, based on the assumed rate of time preference of the representative individual, expected future income, and expected future inflation; 2) a Taylor rule describing how the monetary authority sets its nominal interest rate as a function of inflation and the output gap and its target (natural) nominal interest rate; 3) a short-run Phillips Curve that expresses actual inflation as a function of expected future inflation and the output gap. The three basic equations allow the three endogenous variables (inflation, real income, and the nominal rate of interest) to be determined. The IS curve represents equilibrium combinations of real income and real interest rates; the Taylor rule determines a nominal interest rate; given the nominal rate determined by the Taylor rule, the IS curve can be redrawn to represent equilibrium combinations of real income and inflation. The intersection of the redrawn IS curve with the Phillips curve determines the inflation rate and real income.
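For readers who want to see the three equations written out, here is one standard textbook way of writing them; the notation is mine and not necessarily the notation of the particular New Keynesian variant Roger discusses.

\[
x_t = E_t x_{t+1} - \sigma\left(i_t - E_t \pi_{t+1} - r^{n}\right), \qquad
i_t = \bar{\imath} + \phi_{\pi}\,\pi_t + \phi_{x}\,x_t, \qquad
\pi_t = \beta\,E_t \pi_{t+1} + \kappa\,x_t ,
\]

where x_t is real income relative to potential (the output gap), \pi_t is inflation, i_t is the nominal interest rate, r^{n} is the natural real rate, \bar{\imath} is the target (natural) nominal rate, and \sigma, \beta, \phi_{\pi}, \phi_{x}, and \kappa are parameters reflecting time preference and the degree of price stickiness.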

Roger doesn’t like the New Keynesian model because he rejects the notion of a unique equilibrium with a unique natural rate of unemployment, a notion that I have argued is theoretically unfounded. Roger dismisses the natural-rate hypothesis on empirical grounds, the frequent observations of persistently high rates of unemployment being inconsistent with the idea that there are economic forces causing unemployment to revert to the natural rate. Two responses to this empirical anomaly are possible: 1) the natural rate of unemployment is unstable, so that the observed persistence of high unemployment reflects increases in the underlying but unobservable natural rate of unemployment; 2) the adverse economic shocks that produce high unemployment are persistent, with unemployment returning to a natural level only after the adverse shocks have ceased. In the absence of independent empirical tests of the hypothesis that the natural rate of unemployment has changed, or of the hypothesis that adverse shocks causing unemployment to rise above the natural rate are persistent, neither of these responses is plausible, much less persuasive.

So Roger recasts the basic New Keynesian model in a very different form. While maintaining the Taylor Rule, he rewrites the IS curve so that it describes a relationship between the nominal interest rate and the expected growth of nominal income given the assumed rate of time preference, and in place of the Phillips Curve, he substitutes his belief function, which says that the expected growth of nominal income in the next period equals the current rate of growth. The IS curve and the Taylor Rule provide two steady-state equations in three variables (nominal-income growth, the nominal interest rate, and inflation), so that the rate of inflation is left undetermined. Once the belief function specifies the expected rate of growth of nominal income, the nominal interest rate consistent with expected nominal-income growth is determined. Since the belief function tells us only that expected nominal-income growth equals the current rate of nominal-income growth, any change in nominal-income growth persists into the next period.
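The following is a rough stylization, in my own notation, of the system just described; it is meant only to make the indeterminacy point concrete and should not be read as Roger’s own equations.

\[
i_t = \rho + E_t\,g^{N}_{t+1}, \qquad
i_t = \bar{\imath} + \phi_{\pi}\left(\pi_t - \pi^{*}\right), \qquad
E_t\,g^{N}_{t+1} = g^{N}_{t},
\]

where g^{N} is the growth rate of nominal income, \rho is the rate of time preference, and \pi^{*} is the inflation target. In a steady state the first two equations relate three variables (nominal-income growth, the nominal interest rate, and inflation), so inflation is left undetermined; the belief function in the third equation supplies the missing piece by pinning expected nominal-income growth to its current value, which is why any change in nominal-income growth persists into the next period.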

At any rate, Roger’s policy proposal is not to change the interest-rate rule followed by the monetary authority, but to propose a rule whereby the monetary authority influences the public’s expectations of nominal-income growth. The greater expected nominal-income growth, the greater wealth, and the greater consumption expenditures. The greater consumption expenditures, the greater income and employment. Expectations are self-fulfilling. Roger therefore advocates a policy by which the government buys and sells a stock-market index fund in order to keep overall wealth at a level that will generate enough consumption expenditures to support maximum sustainable employment.
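Here is a deliberately crude sketch of the kind of feedback rule such a proposal suggests; the function, its parameters, and the numbers are hypothetical and should not be mistaken for Roger’s actual specification.

```python
# Crude sketch of a wealth-targeting intervention rule (hypothetical, not Farmer's
# actual proposal): the authority buys the index fund when private wealth falls
# below its target path and sells when wealth rises above it.
def index_fund_intervention(wealth, target_wealth, response=0.5):
    """Value of index-fund purchases (+) or sales (-) by the authority."""
    gap = target_wealth - wealth
    return response * gap

print(index_fund_intervention(wealth=95.0, target_wealth=100.0))   # 2.5  (buy)
print(index_fund_intervention(wealth=104.0, target_wealth=100.0))  # -2.0 (sell)
```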

The foregoing is a quick summary of some of the main substantive arguments that Roger makes in his book, and I hope that I have not misrepresented them too badly. As I have already said, I very much sympathize with his criticism of the New Keynesian model, and I agree with nearly all of his criticisms. I also agree wholeheartedly with his emphasis on the importance of expectations and on the self-fulfilling character of expectations. Nevertheless, I have to admit that I have trouble taking seriously Roger’s own monetary model and his policy proposal for stabilizing a broad index of equity prices over time. And the reason I am so skeptical about Roger’s model and his policy recommendation is that his model, which does after all bear at least a family resemblance to the simple New Keynesian model, strikes me as being far too simplified to be credible as a representation of a real-world economy. His model, like the New Keynesian model, is an intertemporal model with neither money nor real capital, and the idea that there is an interest rate in such a model is, though theoretically defensible, not very plausible. There may be a sequence of periods in such a model in which some form of intertemporal exchange takes place, but without explicitly introducing at least one good that is carried over from period to period, the extent of intertemporal trading is limited and devoid of the arbitrage constraints inherent in a system in which real assets are held from one period to the next.

So I am very skeptical about any macroeconomic model that lacks a market for real assets in which the interest rate interacts with asset values and expected future prices in such a way that the existing stock of durable assets is willingly held over time. The simple New Keynesian model, in which there is no money and no durable assets, but simply bonds whose existence is difficult to rationalize in the absence of money or durable assets, does not strike me as a sound foundation for making macroeconomic policy. An interest rate may exist in such a model, but the model strikes me as woefully inadequate for macroeconomic policy analysis. And although Roger has certainly offered some interesting improvements on the simple New Keynesian model, I would not be willing to rely on Roger’s monetary model for the sweeping policy and institutional recommendations that he proposes, especially his proposal for stabilizing the long-run growth path of a broad index of stock prices.

This is an important point, so I will try to restate it within a wider context. Modern macroeconomics, of which Roger’s model is one of the more interesting examples, flatters itself by claiming to be grounded in the secure microfoundations of the Arrow-Debreu-McKenzie general equilibrium model. But the great achievement of the ADM model was to show the logical possibility of an equilibrium of the independently formulated, optimizing plans of an unlimited number of economic agents producing and trading an unlimited number of commodities over an unlimited number of time periods.

To prove the mutual consistency of such a decentralized decision-making process coordinated by a system of equilibrium prices was a remarkable intellectual achievement. Modern macroeconomics deceptively trades on the prestige of this achievement in claiming to be founded on the ADM general-equilibrium model; the claim is at best misleading, because modern macroeconomics collapses the multiplicity of goods, services, and assets into a single non-durable commodity, so that the only relevant plan the agents in the modern macromodel are called upon to make is a decision about how much to spend in the current period given a shared utility function and a shared production technology for the single output. In the process, all the hard work performed by the ADM general-equilibrium model in explaining how a system of competitive prices could achieve an equilibrium of the complex independent — but interdependent — intertemporal plans of a multitude of decision-makers is effectively discarded and disregarded.

This approach to macroeconomics is not microfounded, but its opposite. The approach relies on the assumption that all but a very small set of microeconomic issues are irrelevant to macroeconomics. Now it is legitimate for macroeconomics to disregard many microeconomic issues, but the assumption that there is continuous microeconomic coordination, apart from the handful of potential imperfections on which modern macroeconomics chooses to focus, is not legitimate. In particular, to collapse the entire economy into a single output implies that all the separate markets encompassed by an actual economy are in equilibrium and that the equilibrium is maintained over time. For that equilibrium to be maintained over time, agents must formulate correct expectations of all the individual relative prices that prevail in those markets over time. The ADM model sidestepped that expectational problem by assuming that a full set of current and forward markets exists in the initial period and that all the agents participating in the economy are present and endowed with wealth enabling them to trade in the initial period. Under those rather demanding assumptions, if an equilibrium price vector covering all current and future markets is arrived at, the optimizing agents will formulate a set of mutually consistent optimal plans conditional on that vector of equilibrium prices so that all the optimal plans can and will be carried out as time happily unfolds for as long as the agents continue in their blissful existence.

However, without a complete set of current and forward markets, achieving the full equilibrium of the ADM model requires that agents formulate consistent expectations of the future prices that will be realized only over the course of time, not in the initial period. Roy Radner, who extended the ADM model to accommodate the case of incomplete markets, called such a sequential equilibrium an equilibrium of plans, prices, and expectations. The sequential equilibrium described by Radner has the property that expectations are rational, but the assumption of rational expectations for all future prices over a sequence of future time periods is so unbelievably outlandish as an approximation to reality — sort of like the assumption that it could be 76 degrees Fahrenheit in Washington DC in February — that to build that assumption into a macroeconomic model is an absurdity of mind-boggling proportions. But that is precisely what modern macroeconomics, in both its Real Business Cycle and New Keynesian incarnations, has done.

If instead of the sequential equilibrium of plans, prices and expectations, one tries to model an economy in which the price expectations of agents can be inconsistent, while prices adjust within any period to clear markets – the method of temporary equilibrium first described by Hicks in Value and Capital – one can begin to develop a richer conception of how a macroeconomic system can be subject to the financial disturbances and financial crises to which modern macroeconomies are occasionally, if not routinely, vulnerable. But that would require a reorientation, if not a repudiation, of the path on which macroeconomics has been resolutely marching for nigh on forty years. In his 1984 paper “Consistent Temporary Equilibrium,” published in a volume edited by J. P. Fitoussi, C. J. Bliss made a start on developing such a macroeconomic theory.

There are few economists better equipped than Roger Farmer to lead macroeconomics onto a new and more productive path. He has not done so in this book, but I am hoping that, in his next one, he will.

A Tutorial for Judy Shelton on the ABCs of Currency Manipulation

Currency manipulation has become a favorite bugbear of critics of both monetary policy and trade policy. Some claim that countries depress their exchange rates to give their exporters an unfair advantage in foreign markets and to insulate their domestic producers from foreign competition. Others claim that using monetary policy as a way to stimulate aggregate demand is necessarily a form of currency manipulation, because monetary expansion causes the currency whose supply is being expanded to depreciate against other currencies, making monetary expansion, ipso facto, a form of currency manipulation.

As I have already explained in a number of posts (e.g., here, here, and here), a theoretically respectable case can be made for the possibility that currency manipulation can be used as a form of covert protectionism without imposing tariffs, quotas, or other obviously protectionist measures to favor the producers of one country against their foreign competitors. All of this was explained by the eminent international trade theorist Max Corden over 30 years ago in a famous paper (“Exchange Rate Protection”). But to be able to make a credible case that currency manipulation is being practiced, it has to be shown that currency depreciation has been coupled with a restrictive monetary policy – either by reducing the supply of, or by increasing the demand for, base money. The charge that monetary expansion is ever a form of currency manipulation is therefore suspect on its face, and those who make accusations that countries are engaging in currency manipulation rarely bother to support the charge with evidence that currency depreciation is being coupled with a restrictive monetary policy.

So it was no surprise to see in Tuesday’s Wall Street Journal that monetary-policy entrepreneur Dr. Judy Shelton has written another one of her screeds promoting the gold standard, in which, showing no awareness of the necessary conditions for currency manipulation, she assures us that a) currency manipulation is a real problem and b) restoring the gold standard would solve it.

Certainly the rules regarding international exchange-rate arrangements are not working. Monetary integrity was the key to making Bretton Woods institutions work when they were created after World War II to prevent future breakdowns in world order due to trade. The international monetary system, devised in 1944, was based on fixed exchange rates linked to a gold-convertible dollar.

No such system exists today. And no real leader can aspire to champion both the logic and the morality of free trade without confronting the practice that undermines both: currency manipulation.

Ahem, pray tell, which rules relating to exchange-rate arrangements does Dr. Shelton believe are not working? She doesn’t cite any. And what on earth does “monetary integrity” even mean, and what does that high-minded, but totally amorphous, concept have to do with the rules of exchange-rate arrangements that aren’t working?

Dr. Shelton mentions “monetary integrity” in the context of the Bretton Woods system, a system based — well, sort of — on fixed exchange rates, forgetting – or choosing not — to acknowledge that, under the Bretton Woods system, exchange rates were also unilaterally adjustable by participating countries. Not only were they adjustable, but currency devaluations were implemented on numerous occasions as a strategy for export promotion, the most notorious example being Britain’s 30% devaluation of sterling in 1949, just five years after the Bretton Woods agreement had been signed. Indeed, many other countries, including West Germany, Italy, and Japan, also had chronically undervalued currencies under the Bretton Woods system, as did France after it rejoined the gold standard in 1926 at a devalued rate deliberately chosen to ensure that its export industries would enjoy a competitive advantage.

The key point to keep in mind is that for a country to gain a competitive advantage by lowering its exchange rate, it has to prevent the automatic tendency of international price arbitrage and corresponding flows of money to eliminate competitive advantages arising from movements in exchange rates. If a depreciated exchange rate gives rise to an export surplus, a corresponding inflow of foreign funds to finance the export surplus will eventually either drive the exchange rate back toward its old level, thereby reducing or eliminating the initial depreciation, or, if the lower rate is maintained, the cash inflow will accumulate in reserve holdings of the central bank. Unless the central bank is willing to accept a continuing accumulation of foreign-exchange reserves, the increased domestic demand and monetary expansion associated with the export surplus will lead to a corresponding rise in domestic prices, wages and incomes, thereby reducing or eliminating the competitive advantage created by the depressed exchange rate. Thus, unless the central bank is willing to accumulate foreign-exchange reserves without limit, or can create an increased demand by private banks and the public to hold additional cash, thereby creating a chronic excess demand for money that can be satisfied only by a continuing export surplus, a permanently reduced foreign-exchange rate creates only a transitory competitive advantage.
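A toy simulation, with parameters of my own choosing, illustrates the mechanism: the competitive advantage conferred by an undervalued peg erodes through rising domestic prices unless the central bank keeps piling up reserves.

```python
# Toy simulation of the adjustment described above (parameters are arbitrary).
# A country pegs its currency so that its price level starts 10% below the
# foreign level; the export surplus brings in money, which raises domestic
# prices unless the central bank sterilizes the inflow by hoarding reserves.
domestic_prices, foreign_prices = 90.0, 100.0
sterilize = False          # set True to see reserves accumulate without limit
reserves = 0.0

for year in range(10):
    competitiveness_gap = foreign_prices - domestic_prices   # source of the export surplus
    export_surplus = 0.5 * competitiveness_gap               # money inflow this year
    if sterilize:
        reserves += export_surplus                            # inflow piles up as reserves
    else:
        domestic_prices += 0.3 * export_surplus               # inflow raises prices, wages, incomes
    print(year, round(domestic_prices, 2), round(export_surplus, 2), round(reserves, 2))
# Without sterilization the gap shrinks toward zero, so the advantage is transitory.
```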

I don’t say that currency manipulation is not possible. It is not only possible, but we know that currency manipulation has been practiced. But currency manipulation can occur under a fixed-exchange rate regime as well as under flexible exchange-rate regimes, as demonstrated by the conduct of the Bank of France from 1926 to 1935 while it was operating under a gold standard. And the most egregious recent example of currency manipulation was undertaken by the Chinese central bank when it effectively pegged the yuan to the dollar at a fixed rate. Keeping its exchange rate fixed against the dollar was precisely the offense that the currency-manipulation police accused the Chinese of committing.

When governments manipulate exchange rates to affect currency markets, they undermine the honest efforts of countries that wish to compete fairly in the global marketplace. Supply and demand are distorted by artificial prices conveyed through contrived exchange rates. Businesses fail as legitimately earned profits become currency losses.

It is no wonder that appeals to free trade prompt cynicism among those who realize the game is rigged against them. Opposing the Trans-Pacific Partnership in June 2015, Rep. Debbie Dingell (D., Mich.) explained: “We can compete with anybody in the world. We build the best product. But we can’t compete with the Bank of Japan or the Japanese government.”

In other words, central banks provide useful cover for currency manipulation. Japan’s answer to the charge that it manipulates its currency for trade purposes is that movements in the exchange rate are driven by monetary policy aimed at domestic inflation and employment objectives. But there’s no denying that one of the primary “arrows” of Japan’s economic strategy under Prime Minister Shinzo Abe, starting in late 2012, was to use radical quantitative easing to boost the “competitiveness” of Japan’s exports. Over the next three years, the yen fell against the U.S. dollar by some 40%.

That sounds horrible, but Dr. Shelton conveniently forgets – or declines – to acknowledge that in September 2012, the yen had reached its post-war high against the dollar. Moreover, between September 2012 and September 2015, the trade weighted US dollar index in terms of major currencies rose by almost 25%, so most of the depreciation of the yen against the dollar reflected dollar appreciation rather than yen depreciation.
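A rough back-of-the-envelope calculation, which assumes (purely for illustration) that the yen’s trade-weighted basket is broadly similar to the dollar’s, makes the point:

```python
# Back-of-the-envelope decomposition (assumes broadly comparable currency baskets).
yen_vs_dollar_change       = -0.40   # yen fell ~40% against the dollar, 2012-2015
dollar_trade_weighted_rise = 0.25    # dollar rose ~25% against other major currencies

# To a first approximation, the yen's value against the non-dollar basket:
yen_vs_basket = (1 + yen_vs_dollar_change) * (1 + dollar_trade_weighted_rise) - 1
print(round(yen_vs_basket, 2))       # -0.25: roughly a 25% multilateral fall, so a large
                                     # part of the 40% bilateral move was dollar appreciation
```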

Now as I pointed out in a post in 2013 about Japan, there really were reasons to suspect that the Japanese were engaging in currency manipulation even though Japan’s rapid accumulation of foreign exchange reserves that began in 2009 came to a halt in 2012 before the Bank of Japan launched its quantitative easing program. I have not kept up on what policies the Bank of Japan has been following, so I am not going to venture an opinion about whether Japan is or is not a currency manipulator. But the evidence that Dr. Shelton is providing to support her charge is simply useless and irrelevant.

Last April, U.S. Treasury Secretary Jacob Lew cautioned Japan against using currency depreciation to gain a trade advantage and he placed the country on the “monitoring list” of potential currency manipulators. But in response, Japanese Finance Minister Taro Aso threatened to raise the bar, saying he was “prepared to undertake intervention” in the foreign-exchange market.

Obviously, the US government responds to pressures from domestic interests harmed by Japanese competition. Whether such back and forth between the American Treasury Secretary and his Japanese counterpart signifies anything beyond routine grandstanding I am not in a position to say.

China has long been intervening directly in the foreign-exchange market to manipulate the value of its currency. The People’s Bank of China announces a daily midpoint for the acceptable exchange rate between the yuan and the dollar, and then does not allow its currency to move more than 2% from the target price. When the value of the yuan starts to edge higher than the desired exchange rate, China’s government buys dollars to push it back down. When the yuan starts to drift lower than the desired rate, it sells off dollar reserves to buy back its own currency.

China’s government has reserves that amount to nearly $3 trillion. According to Mr. Lew, the U.S. should mute its criticism because China has spent nearly $1 trillion to cushion the yuan’s fall over the last 2½ years or so. In a veiled reproach to Mr. Trump’s intention to label China a currency manipulator, Mr. Lew said it was “analytically dangerous” to equate China’s current intervention policies with its earlier efforts to devalue its currency for purposes of gaining a trade advantage. China, he noted, would only be open to criticism that is “intellectually sound.”

Whether China is propping up exchange rates or holding them down, manipulation is manipulation and should not be overlooked. To be intellectually consistent, one must acknowledge that the distortions induced by government intervention in the foreign-exchange market affect both trade and capital flows. A country that props up the value of its currency against the dollar may have strategic goals for investing in U.S. assets.

Far from being intellectually consistent, Dr. Shelton is rushing headlong into intellectual incoherence. She has latched on to the mantra of “currency manipulation,” and she will not let go. How does Dr. Shelton imagine that the fixed exchange rates of the Bretton Woods era, for which she so fervently pines, were maintained?

I have no idea what she might be thinking, but the answer is that they were maintained by intervention into currency markets to keep exchange rates from deviating by more than a minimal amount from their target rates. So precisely the behavior that, under the Bretton Woods system, she extols wholeheartedly, she condemns mindlessly when now undertaken by the Chinese.

Again, my point is not that the Chinese have not engaged in exchange-rate protection in the past. I have actually suggested in earlier posts to which I have hyperlinked above that the Chinese have engaged in that practice. But that no longer appears to be the case, and Dr. Shelton is clearly unable to provide any evidence that the Chinese are still engaging in that practice.

 [T]he . . . first step [to take] to address this issue [is] by questioning why there aren’t adequate rules in place to keep countries from manipulating their exchange rates.

The next step is to establish a universal set of rules based on monetary sovereignty and discipline that would allow nations to voluntarily participate in a trade agreement that did not permit them to undermine true competition by manipulating exchange rates.

I have actually just offered such a rule in case Dr. Shelton is interested. But I have little hope and no expectation that she is or will be.

The Incoherence and Bad Faith of Antonin Scalia’s Originalism — Updated

UPDATE: I just realized that yesterday I mistakenly published a rough draft of this post instead of the version that I had intended to publish. I apologize for that unforced error.

My previous post about judge-made law was inspired by a comment by Scott Sumner on the post before that about Judge Gorsuch. Well, another commenter, gofx, who commented on the post about judge-made law, has inspired this post. Let’s see how long we can keep this recursive equilibrium going. Here’s what gofx had to say:

David, I think your original post criticizing Gorsuch for a “monumental denial of reality” is confusing a normative statement and a positive statement. Textualists, like Scalia and others, try to balance the effects of common law, statutory, and executive (administrative) law. Yes, English common law is one of the bases of American law. But even the Supreme Court placed limits on federal judges creating common law with respect to certain areas of state law (Erie Railroad Co. v. Tompkins). So while common law remains important, judges are no longer the King’s agents attempting to standardize decisions and principles across the realm. Along came democracy, legislatures and executive-branch regulations. There is still plenty of scope for common law, but there is more and more “prescribed” laws and rules.

I agree that there is a problem here with confusing “normative” and “positive” statements about the law and the role of judges in making – or not making – law. But I don’t think that the confusion is mine. This is an important point, which will come up again below. But first, let me quote further from gofx’s comment:

Here is Scalia in “Common Law Courts in a Civil Law System: The Role of United States Federal Courts in Interpreting the Constitution and Laws:”

But though I have no quarrel with the common law and its process, I do question whether the attitude of the common-law judge – the mind-set that asks, “What is the most desirable resolution of this case, and how can any impediments to the achievement of that result be evaded?” – is appropriate for most of the work that I do, and much of the work that state judges do. We live in an age of legislation, and most new law is statutory law. As one legal historian has put it, in modern times “the main business of government, and therefore of law, [is] legislative and executive …. Even private law, so-called, [has been] turning statutory. The lion’s share of the norms and rules that actually govern[] the country [come] out of Congress and the legislatures. . . . The rules of the countless administrative agencies [are] themselves an important, even crucial, source of law.” This is particularly true in the federal courts, where, with a qualification so small it does not bear mentioning, there is no such thing as common law.

I am grateful for the reference to this essay based on two lectures given by Scalia in 2010, which I have now read for the first time. The first thing to note about the lecture is that despite his disclaimer about having “no quarrel with the common law and its process,” Scalia adopts an almost uniformly derogatory and disdainful attitude toward the common law and especially toward common-law judges; the disdain, bordering on contempt, is palpable. Here are some examples aside from the one gofx kindly provided:

As I have described, this system of making law by judicial opinion, and making law by distinguishing earlier cases, is what every American law student, what every newborn American lawyer, first sees when he opens his eyes. And the impression remains with him for life. His image of the great judge — the Holmes, the Cardozo — is the man (or woman) who has the intelligence to know what is the best rule of law to govern the case at hand, and then the skill to perform the broken-field running through earlier cases that leaves him free to impose that rule — distinguishing one prior case on his left, straight-arming another one on his right, high-stepping away from another precedent about to tackle him from the rear, until (bravo!) he reaches his goal: good law. That image of the great judge remains with the former law student when he himself becomes a judge, and thus the common-law tradition is passed on and on.

[T]he subject of statutory interpretation deserves study and attention in its own right, as the principal business of lawyers and judges. It will not do to treat the enterprise as simply an inconvenient modern add-on to the judges’ primary role of common-law lawmaking. Indeed, attacking the enterprise with the Mr. Fix-it mentality of the common-law judge is a sure recipe for incompetence and usurpation.

But the Great Divide with regard to constitutional interpretation is not that between Framers’ intent and objective meaning; but rather that between original meaning (whether derived from Framers’ intent or not) and current meaning. The ascendant school of constitutional interpretation affirms the existence of what is called the “living Constitution,” a body of law that (unlike normal statutes) grows and changes from age to age, in order to meet the needs of a changing society. And it is the judges who determine those needs and “find” that changing law. Seems familiar, doesn’t it? Yes, it is the common law returned, but infinitely more powerful than what the old common law ever pretended to be, for now it trumps even the statutes of democratic legislatures.

If you go into a constitutional law class, or study a constitutional-law casebook, or read a brief filed in a constitutional-law case, you will rarely find the discussion addressed to the text of the constitutional provision that is at issue, or to the question of what was the originally understood or even the originally intended meaning of that text. Judges simply ask themselves (as a good common-law judge would) what ought the result to be, and then proceed to the task of distinguishing (or, if necessary, overruling) any prior Supreme Court cases that stand in the way. Should there be (to take one of the less controversial examples) a constitutional right to die? If so, there is. Should there be a constitutional right to reclaim a biological child put out for adoption by the other parent? Again, if so, there is. If it is good, it is so. Never mind the text that we are supposedly construing; we will smuggle these in, if all else fails, under the Due Process Clause (which, as I have described, is textually incapable of containing them). Moreover, what the Constitution meant yesterday it does not necessarily mean today. As our opinions say in the context of our Eighth Amendment jurisprudence (the Cruel and Unusual Punishments Clause), its meaning changes to reflect “the evolving standards of decency that mark the progress of a maturing society.”

This is preeminently a common-law way of making law, and not the way of construing a democratically adopted text. . . . The Constitution, however, even though a democratically adopted text, we formally treat like the common law. What, it is fair to ask, is our justification for doing so?

The apparent reason for Scalia’s disdain for common-law judging is basically that judges, rather than deferring to the popular will expressed through legislation, presume to think that they can somehow figure out what the right, or best, decision is rather than mechanically follow the text of a statute enacted by a democratic legislature. Scalia hates judges who think for themselves, because, by thinking for themselves, they betray an insufferable elitism instead of dutifully deferring to democratically elected legislators through whom the popular will is faithfully expressed. For Scalia it is only the popular will that matters, the rights and interests of the litigants appearing before the judge being of little consequence compared to upholding the statutory text, the authoritative articulation of the popular will. Moreover, even if the statutes don’t achieve the right result, the people can at least read the statutes and regulations and know what the law says and how it will be enforced. And how can the people ever know what those high and mighty judges will decide to do next? And we all know — do we not? — the countless hours of their spare time spent in libraries and on-line by the unwashed masses poring over the latest additions to the US Code and the Federal Register. Just think how all those long hours devoted to reading the US Code and the Federal Register would be wasted if those arrogant judges could simply ignore the plain meaning of the statutes and regulations and were allowed to use their own judgment in deciding cases.

I will forego, at least for now, indulging my desire to comment on Scalia’s critique of common-law judging. I want to focus instead on the positive case that Scalia makes for his textualist theory of statutory interpretation. To do so, let me quote liberally from Richard Posner’s withering 2012 review of Scalia’s treatise (co-authored by Bryan Garner), Reading Law: The Interpretation of Legal Texts, which exposes both the incoherence and the bad faith of Scalia’s textualist arguments. The entire review is worthy of careful study, but I will pick out a few paragraphs that highlight Scalia’s tortured relationship with and attitude toward the common law.

Judges like to say that all they do when they interpret a constitutional or statutory provision is apply, to the facts of the particular case, law that has been given to them. They do not make law: that is the job of legislators, and for the authors and ratifiers of constitutions. They are not Apollo; they are his oracle. They are passive interpreters. Their role is semantic.

The passive view of the judicial role is aggressively defended in a new book by Justice Antonin Scalia and the legal lexicographer Bryan Garner (Reading Law: The Interpretation of Legal Texts, 2012). They advocate what is best described as textual originalism, because they want judges to “look for meaning in the governing text, ascribe to that text the meaning that it has borne from its inception, and reject judicial speculation about both the drafters’ extra-textually derived purposes and the desirability of the fair reading’s anticipated consequences.” This austere interpretive method leads to a heavy emphasis on dictionary meanings, in disregard of a wise warning issued by Judge Frank Easterbrook, who though himself a self-declared textualist advises that “the choice among meanings [of words in statutes] must have a footing more solid than a dictionary—which is a museum of words, an historical catalog rather than a means to decode the work of legislatures.” Scalia and Garner reject (before they later accept) Easterbrook’s warning. Does an ordinance that says that “no person may bring a vehicle into the park” apply to an ambulance that enters the park to save a person’s life? For Scalia and Garner, the answer is yes. After all, an ambulance is a vehicle—any dictionary will tell you that. If the authors of the ordinance wanted to make an exception for ambulances, they should have said so. And perverse results are a small price to pay for the objectivity that textual originalism offers (new dictionaries for new texts, old dictionaries for old ones). But Scalia and Garner later retreat in the ambulance case, and their retreat is consistent with a pattern of equivocation exhibited throughout their book. . . .

Another interpretive principle that Scalia and Garner approve is the presumption against the implied repeal of state statutes by federal statutes. They base this “on an assumption of what Congress, in our federal system, would or should normally desire.” What Congress would desire? What Congress should desire? Is this textualism, too?

And remember the ambulance case? Having said that the conclusion that an ambulance was forbidden to enter the park even to save a person’s life was entailed by textual originalism and therefore correct, Scalia and Garner remark several hundred pages later that the entry of the ambulance is not prohibited after all, owing to the “common-law defense of necessity,” which they allow to override statutory text. Yet just four pages later they say that except in “select fields such as admiralty law, [federal courts] have no significant common-law powers.” And still elsewhere, tacking back again, they refer approvingly to an opinion by Justice Kennedy (Leegin Creative Leather Products, Inc. v. PSKS, Inc.), which states that “the Sherman Act’s use of ‘restraint of trade’ invokes the common law itself … not merely the static content that the common law had assigned to the term in 1890.” In other words, “restraint of trade” had a specific meaning (and it did: it meant “restraints on alienation”) in 1890 that judges are free to alter in conformity with modern economics—a form of “dynamic” interpretation that should be anathema to Scalia and Garner. A few pages later they say that “federal courts do not possess the lawmaking power of common-law courts,” ignoring not only the antitrust and ambulance cases but also the fact that most of the concepts deployed in federal criminal law—such as mens rea (intent), conspiracy, attempt, self-defense, and necessity—are common law concepts left undefined in criminal statutes.

Scalia and Garner indicate their agreement with a number of old cases that hold that an heir who murders his parents or others from whom he expects to inherit is not disqualified from inheriting despite the common law maxim that no person shall be permitted to profit from his wrongful act. (Notice how common law floats in and out of their analysis, unpredictably.) They say that these cases are “textually correct” though awful, and are happy to note that they have been overruled by statute. Yet just before registering their approval they had applauded the rule that allows the deadlines in statutes of limitations to be “tolled” (delayed) “because of unforeseen events that make compliance impossible.” The tolling rule is not statutory. It is a judicial graft on statutes that do not mention tolling. Scalia and Garner do not explain why that is permissible, but a judicial graft disqualifying a murdering heir is not.

Scalia and Garner defend the canon of construction that counsels judges to avoid interpreting a statute in a way that will render it unconstitutional, declaring that this canon is good “judicial policy.” Judicial policy is the antithesis of textual originalism. They note that “many established principles of interpretation are less plausibly based on a reasonable assessment of meaning than on grounds of policy adopted by the courts”—and they applaud those principles, too. They approve the principle that statutes dealing with the same subject should “if possible be interpreted harmoniously,” a principle they deem “based upon a realistic assessment of what the legislature ought to have meant,” which in turn derives from the “sound principles…that the body of the law should make sense, and…that it is the responsibility of the courts, within the permissible meanings of the text, to make it so” (emphasis added). In other words, judges should be realistic, should impose right reason on legislators, should in short clean up after the legislators.

I would just note in passing that Posner shows that the confusion between normative and positive which gofx in the comment above ascribed to me is obviously running rampant, if not amok, throughout Scalia’s treatise. But Posner’s evisceration of Scalia’s bad faith does not go far enough, because the bad faith extends beyond Scalia’s willingness to invoke (or smuggle in) common-law principles to cover up the gaps in his textualism. Scalia’s whole originalist doctrine that the text of the Constitution should be interpreted according to the original meaning of the text of the Constitution relies on the premise that the judicial interpretations of the Constitution had always been governed by the original meaning that had been universally attributed to the Constitutional text. It was only much later, say, in the middle of the twentieth century, on or about May 17, 1954, that the interpretation of the Constitution was perverted by the reprehensible judges and their academic handmaidens who invented the notion of a living constitution that adjusts to the “evolving standards of decency that mark the progress of a maturing society.” Let me quote once more from Posner’s review:

Scalia and Garner contend that textual originalism was the dominant American method of judicial interpretation until the middle of the twentieth century. The only evidence they provide, however, consists of quotations from judges and jurists, such as William Blackstone, John Marshall, and Oliver Wendell Holmes, who wrote before 1950. Yet none of those illuminati, while respectful of statutory and constitutional text, as any responsible lawyer would be, was a textual originalist. All were, famously, “loose constructionists.”

Scalia and Garner call Blackstone “a thoroughgoing originalist.” They say that “Blackstone made it very clear that original meaning governed.” Yet they quote in support the famous statement in his Commentaries on the Laws of England that “the fairest and most rational method to interpret the will of the legislator, is by exploring his intentions at the time when the law was made, by signs the most natural and probable. And these signs are either the words, the context, the subject matter, the effects and consequence, or the spirit and reason of the law” (emphasis mine, except that the first “signs” is emphasized in the original). Blackstone adds that “the most universal and effectual way of discovering the true meaning of a law, when the words are dubious, is by considering the reason and spirit of it; or the cause which moved the legislator to enact it.”

Just so! But, once again, Posner goes too easy on Scalia, because Scalia’s whole premise in his essay on common law courts, to which gofx pointed me, is that the modern theories of Constitutional interpretation so abhorrent to Scalia are basically extensions, albeit extreme extensions, of common-law judging in which the judge tries to find the best possible outcome for the case that he is deciding, unconstrained by any statutory or Constitutional text. It is the lack of subordination by common-law judges to any authoritative legal text with a fixed meaning that they are bound to accept that is the ultimate heresy of which all common-law judges, in Scalia’s eyes, stand convicted. But when the US Constitution was ratified, all the judges in America and Britain were common-law judges. And Blackstone’s magisterial Commentaries on the Laws of England was a four-volume paean to the common law of England. So, under Scalia’s own originalist doctrine, the meaning of the judiciary in the US Constitution, written by the Framers under Blackstone’s thrall, was the kind of judging practiced by common-law judges. The judges who interpreted the Constitution for almost two centuries after the Constitution was ratified were common-law judges, and they were interpreting the Constitution using the very interpretative methods of common-law judges that Scalia so violently condemns.

Scalia has literally hoisted himself by his own originalist petard. Couldn’t have happened to a finer fellow.

Yes, Judges Do Make Law

Scott Sumner has just written an interesting comment to my previous post in which I criticized a remark made by Judge Gorsuch upon being nominated to fill the vacant seat on the Supreme Court — so interesting, in fact, that I think it is worth responding to him in a separate post.

First, here is the remark made by Judge Gorsuch to which I took exception.

I respect, too, the fact that in our legal order, it is for Congress and not the courts to write new laws. It is the role of judges to apply, not alter, the work of the people’s representatives. A judge who likes every outcome he reaches is very likely a bad judge . . . stretching for results he prefers rather than those the law demands.

I criticized Judge Gorsuch for denying what to me is the obvious fact that judges do make law. They make law, because the incremental effect of each individual decision results in a legal order that is different from the legislation that has been enacted by legislatures. Each decision creates a precedent that must be considered by other judges as they apply and construe the sum total of legislatively enacted statutes in light of, and informed by, the precedents of judges and the legal principles that have guided judges in reaching those precedents. Law-making by judges under a common law system — even a common law system in which judges are bound to acknowledge the authority of statutory law — is inevitable for many reasons, one but not the only reason being that statutes will sooner or later have to be applied in circumstances that were not foreseen by the legislators who enacted those statutes.

To take an example of Constitutional law off the top of my head: is it an unreasonable search for the police to search the cell phone of someone they have arrested without first getting a search warrant? That’s what the Supreme Court had to decide two years ago in Riley v. California. The answer to that question could not be determined by reading the text of the Fourth Amendment, which talks about the people being secure in their “persons, houses, papers, and effects,” or by doing a historical analysis of what the original understanding of the terms “search” and “seizure” and “papers and effects” was when the Fourth Amendment to the Constitution was enacted. Earlier courts had to decide whether government eavesdropping on phone calls violated the Fourth Amendment. And other courts have had to decide whether collecting metadata about phone calls is a violation. Answers to those legal questions can’t be found by reading the relevant legal text.

Here’s part of the New York Times story about the Supreme Court’s decision in Riley v. California.

In a sweeping victory for privacy rights in the digital age, the Supreme Court on Wednesday unanimously ruled that the police need warrants to search the cellphones of people they arrest.

While the decision will offer protection to the 12 million people arrested every year, many for minor crimes, its impact will most likely be much broader. The ruling almost certainly also applies to searches of tablet and laptop computers, and its reasoning may apply to searches of homes and businesses and of information held by third parties like phone companies.

“This is a bold opinion,” said Orin S. Kerr, a law professor at George Washington University. “It is the first computer-search case, and it says we are in a new digital age. You can’t apply the old rules anymore.”

But he added that old principles required that their contents be protected from routine searches. One of the driving forces behind the American Revolution, Chief Justice Roberts wrote, was revulsion against “general warrants,” which “allowed British officers to rummage through homes in an unrestrained search for evidence of criminal activity.”

“The fact that technology now allows an individual to carry such information in his hand,” the chief justice also wrote, “does not make the information any less worthy of the protection for which the founders fought.”

Now for Scott’s comment:

I don’t see how Gorsuch’s view conflicts with your view. It seems like Gorsuch is saying something like “Judges should not legislate, they should interpret the laws.” And you are saying “the laws are complicated.” Both can be true!

Well, in a sense, maybe, because what judges do is technically not legislation. But they do make law; their opinions determine for the rest of us what we may legally do and what we may not legally do, what rights we can expect to be respected and what rights will not be respected. Judges can even change the plain meaning of a statute in order to uphold a more basic, if unwritten, principle of justice, which, under the plain meaning of Judge Gorsuch’s remark (“It is the role of judges to apply, not alter, the work of the people’s representatives”), would have to be regarded as an abuse of judicial discretion. The absurdity of what I take to be Gorsuch’s position is beautifully illustrated by the case of Riggs v. Palmer, which the late — and truly great — Ronald Dworkin discussed in his magnificent article “Is Law a System of Rules?” aka “The Model of Rules.” Here is the one paragraph in which Dworkin uses the Riggs case to show that judges apply not just specific legal rules (e.g., statutory rules), but also deeper principles that govern how those rules should be applied.

My immediate purpose, however, is to distinguish principles in the generic sense from rules, and I shall start by collecting some examples of the former. The examples I offer are chosen haphazardly; almost any case in a law school casebook would provide examples that would serve as well. In 1889, a New York court, in the famous case of Riggs v. Palmer, had to decide whether an heir named in the will of his grandfather could inherit under that will, even though he had murdered his grandfather to do so. The court began its reasoning with this admission: “It is quite true that statutes regulating the making, proof and effect of wills, and the devolution of property, if literally construed [my emphasis], and if their force and effect can in no way and under no circumstances be controlled or modified, give this property to the murderer.” But the court continued to note that “all laws as well as all contracts may be controlled in their operation and effect by general, fundamental maxims of the common law. No one shall be permitted to profit by his own fraud, or to take advantage of his own wrong, or to found any claim upon his own iniquity, or to acquire property by his own crime.” The murderer did not receive his inheritance.

QED. In this case the common law overruled the statute, and justice prevailed over injustice. Game, set, match to the judge!

Judge Gorsuch on “the Role of Judges” in our Legal System

Neil Gorsuch, nominated this week to fill the vacancy on the Supreme Court left by the demise of Antonin Scalia, is in many respects an impressive judge on the Tenth Circuit Court of Appeals, receiving accolades and encomiums not only from his ideological allies but also from legal experts and scholars on the opposite end of the ideological spectrum. Besides a J.D. from Harvard, Gorsuch has a Ph.D. in law from Oxford, having written his doctoral dissertation on assisted suicide and euthanasia, a work subsequently published by Princeton University Press. A scholarly judge, known for well-crafted and lucid opinions, he is likely, if confirmed, to leave a lasting mark on the Supreme Court and on American jurisprudence.

So I was really disappointed, though not really surprised, to find out that Judge Gorsuch, at his public introduction at the White House on Tuesday evening, felt compelled to indulge in an abject ritual obeisance to the prevailing right-wing populist legal ideology, delivering the following willfully ignorant, ahistorical, misrepresentation of the role of judges in our Anglo/American, common law legal system.

I respect, too, the fact that in our legal order, it is for Congress and not the courts to write new laws. It is the role of judges to apply, not alter, the work of the people’s representatives. A judge who likes every outcome he reaches is very likely a bad judge . . . stretching for results he prefers rather than those the law demands.

How someone trained in the law both at Harvard and Oxford could so flagrantly mischaracterize what it is that judges do – a mischaracterization of the same ilk as John Roberts’s infamous comparison, as a nominee for Chief Justice testifying before the Senate Judiciary Committee, of judges to baseball umpires calling balls and strikes – when the entire Anglo-American legal system and the whole of its jurisprudence rests on the foundation of the common law, a law made in its entirety by judges deciding cases according to their understanding of the principles of justice and their understanding of how earlier judges had decided similar cases in similar situations, a law that evolved slowly as an organic, living tradition over countless generations and many centuries, is simply beyond my comprehension.

With all due respect to Judge Gorsuch’s impressive legal scholarship, I consider his statement to be a monumental denial of reality, orders of magnitude beyond denying climate change or even evolution. It is a denial of the obvious on the level of a Ph.D. mathematician denying that two plus two equals four. But so ferocious and so intransigent are the demands of current right-wing populist legal ideology that failure to deny obvious historical reality would be regarded as an unpardonable sin and a damnable heresy, more than ample grounds for being rejected to fill a coveted seat on the Supreme Court.

I can almost hear the howls of protest emanating from the Federalist Society, which, in its mission statement, solemnly asserts “that it is emphatically the province and duty of the judiciary to state what the law is, not what it should be.” But what exactly is the meaning of “the law” in that ever-so emphatic pronouncement?

“The law” could mean a specific statute, ordinance, enactment, provision, article, or rule, which, if duly enacted by an appropriate law-making body, has “the force of law.” Or alternatively, “the law” could mean the entire body of law under which a rule of law is said to be in effect. Whether a specific statute, ordinance, enactment, provision, article, or rule exists is a purely factual question, and, for the most part, not a controversial one. When a question of law becomes controversial, it is rarely because people have forgotten the existence of a relevant statute, ordinance, or rule, of whose existence they must be reminded by a judge with a superior memory. Rather the question of law arises, because it is not clear which one of a number of alternative, potentially applicable rules should govern the outcome of the case at hand. And that question of law can rarely be answered – certainly not satisfactorily answered – simply by reminding the litigants that the law says such and such and so and so.

The real challenge confronting the judge – especially an appellate judge – is to determine which of the alternative, potentially applicable rules should determine the outcome of the case. And to answer that question, a judge can’t just look up what the law says, the judge has to consider how the entire legal system, including not just the explicit rules, but all the relevant previous judicial decisions and all the principles embodied in those decisions, comports with the decision that must be rendered. The judge deciding the case has to figure out how to make a ruling that best fits in with all those previous decisions and all their underlying principles. It is that best decision which is what “the law,” considered as an overall system, requires. But if that is what a judge is trying to do, it is simply nonsense – as in absurd and incoherent – to assert that the judge is stating “what the law is, not what it should be.”

To be sure, judges sometimes have to make decisions with which they are personally uncomfortable, judges never being possessed of unlimited discretion to rule as they please. But judging means a weighing of arguments and of conflicting values to arrive at the best possible decision under the circumstances — the decision most consistent with the entire system of law, not just particular statutes, enactments or decisions.

For example, Korematsu v. United States has never formally been overruled or vacated by the US Supreme Court. Under the absurd doctrine of the Federalist Society, that abominable decision, no less than the admirable Marbury v. Madison, is “law.” But under any defensible understanding of what the US legal system actually entails, Korematsu is not law, even though it has, regrettably, not yet been formally expunged from precedents of the Supreme Court. One would hope that Judge Gorsuch will be given an opportunity to opine on the legal status of Korematsu and perhaps other legal abominations which are still available to be invoked as precedent, when he testifies before the Senate Judiciary Committee.

Stuart Dreyfus on Richard Bellman, Dynamic Programming, Quants and Financial Engineering

Last week, looking for some information about the mathematician Richard Bellman who, among other feats and achievements, developed dynamic programming, I came across a film called The Bellman Equation, which you can watch on the internet. It was written, produced, and narrated by Bellman’s grandson, Gabriel Bellman, and features, among others, Gabriel’s father (Bellman’s son), Gabriel’s aunt (Bellman’s daughter), Bellman’s first and second wives, and numerous friends and colleagues. You learn how brilliant, driven, arrogant, charming, and difficult Bellman was, and how he cast a shadow over the lives of his children and grandchildren. Aside from the stories about his life, there is his work on the atomic bomb in World War II, his meeting with Einstein when he was a young assistant professor at Princeton, and his run-in with Julius and Ethel Rosenberg at Los Alamos, and, as a result, with Joe McCarthy. And on top of all the family history, family dynamics, and psychological theorizing, you also get an interesting little account of the intuitive logic underlying the theory of dynamic programming. You can watch it for free with commercials on snagfilms.

But I especially wanted to draw attention to the brief appearance in the video of Bellman’s colleague at Rand Corporation in the 1950s, Stuart Dreyfus, with whom Bellman collaborated in developing the theory of dynamic programming, and with whom Bellman co-wrote Applied Dynamic Programming. At 14:17 into the film, one hears the voice of Stuart Dreyfus saying just before he comes into view on the screen:

The world is full of problems where what is required of the person making the decision is not to just face a static situation and make one single decision, but to make a sequence of decisions as the situation evolves. If you stop to think about it, almost everything in the world falls in that category. So that is the kind of situation that dynamic programming addressed. The principle on which it is based is such an intuitively obvious principle that it drives some mathematicians crazy, because it’s really kind of impossible to prove that it’s an intuitive principle, and pure mathematicians don’t like intuition.

Then a few moments later, Dreyfus continues:

So this principle of optimality is: why would you ever make a decision now which puts you into a position one step from now where you couldn’t do as well as [if you were in] some other position? Obviously, you would never do that if you knew the value of these other positions.
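
For readers who would like to see the principle of optimality in action, here is a minimal sketch in Python. It is my own toy illustration, not anything from the film or from Bellman and Dreyfus’s book: the positions, stages, and reward numbers are made up, but the backward recursion is the standard dynamic-programming logic Dreyfus is describing, in which the value of a position is the best one-step payoff plus the value of the position it leads to.

# Toy sequential-decision problem (hypothetical numbers): positions are
# grouped into stages, and each move from one stage to the next earns a
# known reward.
rewards = {
    ("A", "B"): 4, ("A", "C"): 2,
    ("B", "D"): 5, ("B", "E"): 1,
    ("C", "D"): 8, ("C", "E"): 3,
    ("D", "F"): 2, ("E", "F"): 6,
}
stages = [["A"], ["B", "C"], ["D", "E"], ["F"]]

# Work backward from the final position: the value of a position is the
# best immediate reward plus the value of the position it leads to.
value = {"F": 0}
policy = {}
for i in range(len(stages) - 2, -1, -1):
    for pos in stages[i]:
        options = [(rewards[(pos, nxt)] + value[nxt], nxt)
                   for nxt in stages[i + 1] if (pos, nxt) in rewards]
        value[pos], policy[pos] = max(options)

print("value of start position:", value["A"])   # 12 with these made-up numbers
print("best move from each position:", policy)

Running the sketch prints the value of the starting position and the best next move from every position, which is all the principle of optimality claims: you never move to a position from which you cannot do as well as from some alternative.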

A few moments after that in the film, Dreyfus adds:

The place that [dynamic programming] is used the most upsets me greatly — and I don’t know how Dick would feel — but that’s in the so-called “quants” doing so-called “financial engineering” that designed derivatives that brought down the financial system. That’s all dynamic programming mathematics basically. I have a feeling Dick would have thought that’s immoral. The financial world doesn’t produce any useful thing. It’s just like poker; it’s just a game. You’re taking money away from other people and getting yourself things. And to encourage our graduate students to learn how to apply dynamic programming in that area, I think is a sin.

Allowing for some hyperbole on Dreyfus’s part, I think he is making an important point, a point I’ve made before in several posts about finance. A great deal of the income earned by the financial industry does not represent real output; it represents trading based on gaining information advantages over trading partners. So the more money the financial industry makes from financial engineering, the more money someone else is losing to the financial industry, because every trade has two sides.

Not all trading has this characteristic. A lot of trading involves exchanges that are mutually beneficial, and middlemen who facilitate such trading contribute to the welfare of society by improving the allocation of goods, services and resources. But trading that takes place in order to exploit an information advantage over a counter-party, and the devotion of resources to creating the information advantages that make such trading profitable, are socially wasteful. That is the intuitive principle insightfully grasped and articulated by Dreyfus.

As I have also pointed out in previous posts (e.g., here, here and here), the principle, intuitively grasped on some level, but not properly articulated or applied by people like Thorstein Veblen, was first correctly explicated by Jack Hirshleifer, who, like Bellman and Dreyfus, worked for the Rand Corporation in the 1950s and 1960s, in his classic article “The Private and Social Value of Information and the Reward to Inventive Activity.”

Further Thoughts on Bitcoins, Fiat Moneys, and Network Effects

In a couple of tweets to me and J. P. Koning, William Luther pointed out, I think correctly, that the validity of the backward-induction argument in my previous post, explaining why bitcoins, or any fiat currency not made acceptable for discharging tax obligations, cannot retain a positive value, requires that the terminal date after which bitcoins or fiat currency will no longer be accepted in exchange be known with certainty.

But if the terminal date is unknown, the backward-induction argument doesn’t work, because everyone (or at least a sufficient number of people) may assume that there will always be someone else willing to accept their soon-to-be worthless holdings of fiat money in exchange for something valuable. Thus, without a certain terminal date, it is not logically necessary for the value of fiat money to fall to zero immediately, even though everyone realizes that,  at some undetermined future time, its value will fall to zero.

In short, the point is that if enough people think that they will be able to unload their holdings of a fundamentally worthless asset on someone more foolish than they are, a pyramid scheme need not collapse quickly, but may operate successfully for a long time. Uncertainty about the terminal date gives people an incentive to gamble on when the moment of truth will arrive. As long as enough people are willing to take the gamble, the pyramid won’t collapse, even if those people know that sooner or later it will collapse.
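
To see why certainty about the terminal date does all the work in the backward-induction argument, here is a minimal sketch in Python. It is my own toy illustration, not anything from Luther’s tweets or from my earlier post; the horizon T is an arbitrary made-up number.

# Backward induction with a terminal date known with certainty: nobody
# accepts the money after period T, so nobody accepts it at T (it would be
# worthless a moment later), hence nobody accepts it at T-1, and so on all
# the way back to today.
T = 10  # hypothetical final period in which the money could circulate
willing_to_accept = {T + 1: False}  # after the terminal date, no one accepts it
for t in range(T, -1, -1):
    # I accept the money at t only if someone will accept it from me at t + 1
    willing_to_accept[t] = willing_to_accept[t + 1]

print(willing_to_accept[0])  # False: the value unravels to zero immediately

# If the terminal date is not known with certainty, the recursion has no
# fixed endpoint to start from, which is Luther's point: the unraveling
# argument cannot get started, and a positive value today is not ruled out.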

Robert Louis Stevenson described the theory quite nicely in a short story, “The Bottle Imp,” which has inspired a philosophic literature on the backward-induction argument, known as the “bottle imp paradox” (further references in the linked Wikipedia entry), along with the related “unexpected hanging paradox” and the “greater fool theory.”

Although Luther’s point is well-taken, it’s not clear to me that, at least on an informal level, my argument about fiat money is without relevance. Even though a zero value for fiat money is not logically necessary, a positive value is not assured. The value of fiat money is indeterminate, and a collapse of value or a hyperinflation would indeed be a constant risk for a pure fiat money if there were no other factors, e.g., acceptability for discharging tax liabilities, operating to support a positive value. Even if a positive value were maintained for a time, a collapse of value could occur quite suddenly; there could well be a tipping point at which a critical mass of people expecting the value to fall to zero could overwhelm the optimism of those expecting the value to remain positive, causing a convergence of self-fulfilling expectations on a zero value.

But this is where network effects come into the picture to play a stabilizing role. If network effects are very strong, which they certainly are for a medium of exchange in any advanced market economy, there is a powerful lock-in for most people, because almost all transactions taking place in the economy are carried out by way of a direct or indirect transfer of the medium of exchange. Recontracting in terms of an alternative medium of exchange is not only costly for each individual, but would require an unraveling of the existing infrastructure for carrying out these transactions with little chance of replacing it with a new medium-of-exchange-network infrastructure.

Once transactors have been locked in to the existing medium-of-exchange-network infrastructure, the costs of abandoning the existing medium of exchange may be prohibitive, thereby preventing a switch even though people realize that there is a high probability that the medium of exchange will eventually lose its value. The costs to each individual of opting out of the medium-of-exchange network would be prohibitive, as would be the transactions costs of arriving at a voluntary collective shift to some new medium of exchange.

However, it is possible that small countries whose economies are highly integrated with the economies of neighboring countries are in a better position to switch to an alternative currency if the likelihood that the currently used medium of exchange will become worthless increases. So the chances of seeing a sudden collapse of an existing medium of exchange are greater in small open economies than in large, relatively self-contained, economies.

The above reasoning suggests the following preliminary conjecture: the probability that a fiat currency that is not acceptable for discharging tax liabilities could retain a positive value depends on two factors: a) the strength of network effects, and b) the proportion of users of the existing medium of exchange who have occasion to use an alternative medium of exchange in carrying out their routine transactions.
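
To make the interplay between tipping points and network lock-in a bit more concrete, here is a toy threshold simulation in Python. It is my own illustration, not a model from this post: the thresholds, the size of the shock, and the uniform distributions are all made-up assumptions, chosen only to show how strong network effects can keep an incumbent money in use after a shock that would unravel a weakly networked one.

import random

# Each of N transactors keeps using the incumbent money next period only if
# the share of transactors using it this period is at least that person's
# personal abandonment threshold. Strong network effects = mostly low
# thresholds; weak network effects = mostly high thresholds.
random.seed(0)
N = 1000

def long_run_share(thresholds, share_after_shock, periods=50):
    share = share_after_shock
    for _ in range(periods):
        # a transactor stays in the network iff the current share meets his threshold
        share = sum(1 for th in thresholds if share >= th) / len(thresholds)
    return share

strong_lock_in = [random.uniform(0.0, 0.5) for _ in range(N)]  # hypothetical thresholds
weak_lock_in = [random.uniform(0.3, 0.9) for _ in range(N)]    # hypothetical thresholds

# Suppose a scare knocks 30% of users out of the network in one period:
print("strong network effects:", long_run_share(strong_lock_in, 0.7))  # recovers to 1.0
print("weak network effects:  ", long_run_share(weak_lock_in, 0.7))    # collapses to 0.0

Under these made-up numbers the strongly networked money recovers fully while the weakly networked money collapses, which is just the tipping-point and lock-in logic described above.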


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
