Archive for the 'Alchian' Category

Axel Leijonhufvud and Modern Macroeconomics

For many baby boomers like me growing up in Los Angeles, UCLA was an almost inevitable choice for college. As an incoming freshman, I was undecided whether to major in political science or economics. PoliSci 1 didn’t impress me, but Econ 1 did. More than my Econ 1 professor, it was the assigned textbook, University Economics, 1st edition, by Alchian and Allen that impressed me. That’s how my career in economics started.

After taking introductory micro and macro as a freshman, I started the intermediate theory sequence of micro (utility and cost theory, Econ 101A), general-equilibrium theory (101B), and macro theory (102) as a sophomore. It was in the winter 1968 quarter that I encountered Axel Leijonhufvud. This was about a year before his famous book – his doctoral dissertation – On Keynesian Economics and the Economics of Keynes was published in the fall of 1968 to instant acclaim. Although it must have been known in the department that the book, which he’d been working on for several years, would soon appear, I doubt that its remarkable impact on the economics profession could have been anticipated, turning Axel almost overnight from an obscure untenured assistant professor into a tenured professor at one of the top economics departments in the world and a kind of academic rock star widely sought after to lecture and appear at conferences around the globe. I offer the following scattered recollections of him, drawn from memories at least a half-century old, to those interested in his writings, along with some reflections on his rise to the top of the profession, followed by a gradual loss of influence as theoretical macroeconomics fell under the influence of Robert Lucas and the rational-expectations movement in its various forms (New Classical, Real Business-Cycle, New-Keynesian).

Axel, then in his early to mid-thirties, was an imposing figure, very tall and gaunt with a short beard and a shock of wavy blondish hair, but his attire reflected the lowly position he then occupied in the academic hierarchy. He spoke perfect English with a distinct Swedish lilt, frequently leavening his lectures and responses to students’ questions with wry and witty comments and asides.

Axel’s presentation of general-equilibrium theory was, as then still the norm, at least at UCLA, mostly graphical, supplemented occasionally by some algebra and elementary calculus. The Edgeworth box was his principal technique for analyzing both bilateral trade and production in the simple two-output, two-input case, and he used it to elucidate concepts like Pareto optimality, general-equilibrium prices, and the two welfare theorems, an exposition which I, at least, found deeply satisfying. The assigned readings were the classic paper by F. M. Bator, “The Simple Analytics of Welfare-Maximization,” which I relied on heavily to gain a working grasp of the basics of general-equilibrium theory, and as a supplementary text, Peter Newman’s The Theory of Exchange, much of which was too advanced for me to comprehend more than superficially. Axel also introduced us to the concept of tâtonnement and highlighted its importance as an explanation of sorts of how the equilibrium price vector might, at least in theory, be found, an issue whose profound significance I then only vaguely comprehended, if at all. Another assigned text was Modern Capital Theory by Donald Dewey, providing an introduction to the role of capital, time, and the rate of interest in monetary and macroeconomic theory and a bridge to the intermediate macro course that he would teach the following quarter.

A highlight of Axel’s general-equilibrium course was the guest lecture by Bob Clower, then visiting UCLA from Northwestern, with whom Axel became friendly only after leaving Northwestern, and two of whose papers (“A Reconsideration of the Microfoundations of Monetary Theory,” and “The Keynesian Counterrevolution: A Theoretical Appraisal”) were discussed at length in his forthcoming book. (The collaboration between Clower and Leijonhufvud and their early Northwestern connection has led to the mistaken idea that Clower had been Axel’s thesis advisor. Axel’s dissertation was actually written under Meyer Burstein.) Clower himself came to UCLA economics a few years later when I was already a third-year graduate student, and my contact with him was confined to seeing him at seminars and workshops. I still have a vivid memory of Bob in his lecture explaining, with the aid of chalk and a blackboard, how ballistic theory was developed into an orbital theory by way of a conceptual experiment in which the distance travelled by a projectile launched from a fixed position is progressively lengthened until the projectile’s trajectory transitions into an orbit around the earth.

Axel devoted the first part of his macro course to extending the Keynesian-cross diagram we had been taught in introductory macro into the Hicksian IS-LM model by making investment a negative function of the rate of interest and adding a money market with a fixed money stock and a demand for money that’s a negative function of the interest rate. Depending on the assumptions about elasticities, IS-LM could be an analytical vehicle that could accommodate either the extreme Keynesian-cross case, in which fiscal policy is all-powerful and monetary policy is ineffective, or the Monetarist (classical) case, in which fiscal policy is ineffective and monetary policy all-powerful. Macroeconomics was thus often framed as a debate about the elasticity of the demand for money with respect to the interest rate. Friedman himself, in his not very successful attempt to articulate his own framework for monetary analysis, accepted that framing, one of the few rhetorical and polemical misfires of his career.
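In conventional textbook notation, the system Axel was building up can be sketched roughly as follows (a stylized rendering of my own, not his notation):

```latex
\begin{aligned}
\text{IS:}\quad & Y = C(Y) + I(r) + G, && I'(r) < 0,\\
\text{LM:}\quad & \frac{M}{P} = L(Y, r), && L_Y > 0,\; L_r < 0.
\end{aligned}
```

The extreme Keynesian-cross case corresponds to a money demand so interest-elastic that the LM curve is effectively horizontal, leaving fiscal policy all-powerful and monetary policy impotent; the Monetarist (classical) case corresponds to an interest-inelastic money demand and a vertical LM curve, so that only changes in the money stock move nominal income.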

In his intermediate macro course, Axel presented the standard macro model, and I don’t remember his weighing in that much with his own criticism; he didn’t teach from a standard intermediate macro textbook, standard textbook versions of the dominant Keynesian model not being at all to his liking. Instead, he assigned early sources of what became Keynesian economics like Hicks’s 1937 exposition of the IS-LM model and Alvin Hansen’s A Guide to Keynes (1953), with Friedman’s 1956 restatement of the quantity theory serving as a counterpoint, and further developments of Keynesian thought like Patinkin’s 1948 paper on price flexibility and full employment, A. W. Phillips’s original derivation of the Phillips Curve, Harry Johnson on the General Theory after 25 years, his own “Keynes and the Keynesians: A Suggested Interpretation,” a preview of his forthcoming book, and probably others that I’m not now remembering. Presenting the material piecemeal from original sources allowed him to underscore the weaknesses and questionable assumptions latent in the standard Keynesian model.

Of course, for most of us, it was a challenge just to reproduce the standard model and apply it to some specific problems, but at least we got the sense that there was more going on under the hood of the model than we would have imagined had we learned its structure from a standard macro text. I have the melancholy feeling that the passage of years has dimmed my memory of his teaching too much to adequately describe how stimulating, amusing and enjoyable his lectures were to those of us just starting our journey into economic theory.

The following quarter, in the fall of 1968, when his book had just appeared in print, Axel created a new advanced course called macrodynamics. He talked a lot about Wicksell and Keynes, of course, but he was then also fascinated by the work of Norbert Wiener on cybernetics, assigning Wiener’s book Cybernetics as a primary text and a key to understanding what Keynes was really trying to do. He introduced us to concepts like positive and negative feedback, servo mechanisms, and stable and unstable dynamic systems, and related those concepts to economic concepts like the price mechanism, stable and unstable equilibria, and business cycles. Here’s how he put it in On Keynesian Economics and the Economics of Keynes:

Cybernetics as a formal theory, of course, began to develop only during the war and it was only with the appearance of . . . Wiener’s book in 1948 that the first results of serious work on a general theory of dynamic systems – and the term itself – reached a wider public. Even then, research in this field seemed remote from economic problems, and it is thus not surprising that the first decade or more of the Keynesian debate did not go in this direction. But it is surprising that so few monetary economists have caught on to developments in this field in the last ten or twelve years, and that the work of those who have has not triggered a more dramatic chain reaction. This, I believe, is the Keynesian Revolution that did not come off.

In conveying the essential departure of cybernetics from traditional physics, Wiener once noted:

Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth-century physics, it seemed to cost nothing to get information.

In context, the reference was to Maxwell’s Demon. In its economic reincarnation as Walras’ auctioneer, the demon has not yet been exorcised. But this certainly must be what Keynes tried to do. If a single distinction is to be drawn between the Economics of Keynes and the economics of our grandfathers, this is it. It is only on this basis that Keynes’ claim to have essayed a more “general theory” can be maintained. If this distinction is not recognized as both valid and important, I believe we must conclude that Keynes’ contribution to pure theory is nil.

Axel’s hopes that cybernetics could provide an analytical tool with which to bring Keynes’s insights about informational scarcity to bear on macroeconomic analysis were never fulfilled. A glance at the index to Information and Coordination, Axel’s excellent collection of essays written between the late 1960s and the late 1970s, reveals not a single reference either to cybernetics or to Wiener. Instead, to his chagrin and disappointment, macroeconomics took a completely different direction, following the path blazed by Robert Lucas and his followers of insisting on a nearly continuous state of rational-expectations equilibrium and implicitly denying that there is an intertemporal coordination problem for macroeconomics to analyze, much less to solve.

After getting my BA in economics at UCLA, I stayed put and began my graduate studies there in the next academic year, taking the graduate micro sequence given that year by Jack Hirshleifer, the graduate macro sequence with Axel and the graduate monetary theory sequence with Ben Klein, who started his career as a monetary economist before devoting himself a few years later entirely to IO and antitrust.

Not surprisingly, Axel’s macro course drew heavily on his book, which meant it drew heavily on the history of macroeconomics including, of course, Keynes himself, but also his Cambridge predecessors and collaborators, his friendly, and not so friendly, adversaries, and the Keynesians that followed him. His main point was that if you take Keynes seriously, you can’t argue, as the standard 1960s neoclassical synthesis did, that the main lesson taught by Keynes was that if the real wage in an economy is somehow stuck above the market-clearing level, an increase in aggregate demand is necessary to allow the labor market to clear at the prevailing nominal wage by raising the price level, thereby reducing the real wage to the market-clearing level.

This interpretation of Keynes, Axel argued, trivialized Keynes by implying that he didn’t say anything that had not been said previously by his predecessors who had also blamed high unemployment on wages being kept above market-clearing levels by minimum-wage legislation or the anticompetitive conduct of trade-union monopolies.

Axel sought to reinterpret Keynes as an early precursor of the search theories of unemployment subsequently developed by Armen Alchian and Edward Phelps, who would soon be followed by others including Robert Lucas. Because negative shocks to aggregate demand are rarely anticipated, the immediate wage and price adjustments to a new post-shock equilibrium price vector that would maintain full employment would occur only under the imaginary tâtonnement system naively taken as the paradigm for price adjustment under competitive market conditions. Keynes therefore believed that a deliberate countercyclical policy response was needed to avoid a potentially long-lasting or permanent decline in output and employment. The issue is not price flexibility per se, but finding the equilibrium price vector consistent with intertemporal coordination. Price flexibility that doesn’t arrive quickly (immediately?) at the equilibrium price vector achieves nothing. Trading at disequilibrium prices leads inevitably to a contraction of output and income. In an inspired turn of phrase, Axel called this cumulative process of aggregate-demand shrinkage Say’s Principle, which years later led me to write my paper “Say’s Law and the Classical Theory of Depressions,” included as Chapter 9 of my recent book Studies in the History of Monetary Theory.

Keynes’s great contribution, in Axel’s view, was to draw attention to the implications of the absence of any actual counterpart to the coordinating mechanism that neoclassical economic theory simply assumed (either in the form of Walrasian tâtonnement or the implicit Marshallian ceteris paribus assumption). Axel deplored the neoclassical synthesis, because its rote acceptance of the neoclassical equilibrium paradigm trivialized Keynes’s contribution, treating unemployment as a phenomenon attributable to sticky or rigid wages without inquiring whether alternative informational assumptions could explain unemployment even with flexible wages.

The new literature on search theories of unemployment advanced by Alchian, Phelps, et al. and the success of his book gave Axel hope that a deepened version of neoclassical economic theory that paid attention to its underlying informational assumptions could lead to a meaningful reconciliation of the economics of Keynes with neoclassical theory and replace the superficial neoclassical synthesis of the 1960s. That quest for an alternative version of neoclassical economic theory was for a while subsumed under the trite heading of finding microfoundations for macroeconomics, by which was meant finding a way to explain Keynesian (involuntary) unemployment caused by deficient aggregate demand without invoking special ad hoc assumptions like rigid or sticky wages and prices. The objective was to analyze the optimizing behavior of individual agents given limitations in or imperfections of the information available to them and to identify and provide remedies for the disequilibrium conditions that characterize coordination failures.

For a short time, perhaps from the early 1970s until the early 1980s, a number of seemingly promising attempts to develop a disequilibrium theory of macroeconomics appeared, most notably by Robert Barro and Herschel Grossman in the US, and by J. P. Benassy, J. M. Grandmont, and Edmond Malinvaud in France. Axel and Clower were largely critical of these efforts, regarding them as defective and even misguided in many respects.

But at about the same time, another, very different, approach to microfoundations was emerging, inspired by the work of Robert Lucas and Thomas Sargent and their followers, who were introducing the concept of rational expectations into macroeconomics. Axel and Clower had focused their dissatisfaction with neoclassical economics on the rise of the Walrasian paradigm which used the obviously fantastical invention of a tâtonnement process to account for the attainment of an equilibrium price vector perfectly coordinating all economic activity. They argued for an interpretation of Keynes’s contribution as an attempt to steer economics away from an untenable theoretical and analytical paradigm rather than, as the neoclassical synthesis had done, to make peace with it through the adoption of ad hoc assumptions about price and wage rigidity, thereby draining Keynes’s contribution of novelty and significance.

And then Lucas came along, dispensing with the auctioneer and eliminating tâtonnement, while achieving the same result by way of a methodological stratagem in three parts: a) insisting that all agents be treated as equilibrium optimizers, b) assuming that all agents therefore form identical rational expectations of all future prices using the same common knowledge, so that c) they all correctly anticipate the equilibrium price vector that earlier economists had assumed could be found only through the intervention of an imaginary auctioneer conducting a fantastical tâtonnement process.

The methodological imperatives laid down by Lucas were enforced with a rigorous discipline more befitting a religious order than an academic research community. The discipline of equilibrium reasoning, it was decreed by methodological fiat, imposed a question-begging research strategy on researchers in which correct knowledge of future prices became part of the endowment of all optimizing agents.

While microfoundations for Axel, Clower, Alchian, Phelps and their collaborators and followers had meant relaxing the informational assumptions of the standard neoclassical model, for Lucas and his followers microfoundations came to mean that each and every individual agent must be assumed to have all the knowledge that exists in the model. Otherwise the rational-expectations assumption required by the model could not be justified.

The early Lucasian models did assume a certain kind of informational imperfection or ambiguity about whether observed price changes were relative changes or absolute changes, which would be resolved only after a one-period time lag. However, the observed serial correlation in aggregate time series could not be rationalized by an informational ambiguity resolved after just one period. This deficiency in the original Lucasian model led to the development of real-business-cycle models that attribute business cycles to real productivity shocks, dispensing with Lucasian informational ambiguity in accounting for observed aggregate time-series fluctuations. So-called New Keynesian economists chimed in with ad hoc assumptions about wage and price stickiness to create a new neoclassical synthesis to replace the old one, but with little claim to any actual analytical insight.

The success of the Lucasian paradigm was disheartening to Axel, and his research agenda gradually shifted from macroeconomic theory to applied policy, especially inflation control in developing countries. Although my own interest in macroeconomics was largely inspired by Axel, my approach to macroeconomics and monetary theory eventually diverged from Axel’s, when, in my last couple of years of graduate work at UCLA, I became close to Earl Thompson whose courses I had not taken as an undergraduate or a graduate student. I had read some of Earl’s monetary theory papers when preparing for my preliminary exams; I found them interesting but quirky and difficult to understand. After I had already started writing my dissertation, under Harold Demsetz on an IO topic, I decided — I think at the urging of my friend and eventual co-author, Ron Batchelder — to sit in on Earl’s graduate macro sequence, which he would sometimes offer as an alternative to Axel’s more popular graduate macro sequence. It was a relatively small group — probably not more than 25 or so attended – that met one evening a week for three hours. Each session – and sometimes more than one session — was devoted to discussing one of Earl’s published or unpublished macroeconomic or monetary theory papers. Hearing Earl explain his papers and respond to questions and criticisms brought them alive to me in a way that just reading them had never done, and I gradually realized that his arguments, which I had previously dismissed or misunderstood, were actually profoundly insightful and theoretically compelling.

For me at least, Earl provided a more systematic way of thinking about macroeconomics and a more systematic critique of standard macro than I could piece together from Axel’s writings and lectures. But one of the lessons that I had learned from Axel was the seminal importance of two Hayek essays: “The Use of Knowledge in Society,” and, especially “Economics and Knowledge.” The former essay is the easier to understand, and I got the gist of it on my first reading; the latter essay is more subtle and harder to follow, and it took years and a number of readings before I could really follow it. I’m not sure when I began to really understand it, but it might have been when I heard Earl expound on the importance of Hicks’s temporary-equilibrium method first introduced in Value and Capital.

In working out the temporary-equilibrium method, Hicks relied on the work of Myrdal, Lindahl and Hayek. As Earl explained it, the method rests on the assumption that markets for current delivery clear, but that those market-clearing prices differ from the prices that agents had expected when formulating their optimal intertemporal plans, causing agents to revise their plans and their expectations of future prices. That seemed to be the proper way to think about the intertemporal-coordination failures that Axel was so concerned about, but somehow he never made the connection between Hayek’s work, which he greatly admired, and the Hicksian temporary-equilibrium method, which I never heard him refer to, even though he also greatly admired Hicks.

It always seemed to me that a collaboration between Earl and Axel could have been really productive and might even have led to an alternative to the Lucasian reign over macroeconomics. But for some reason, no such collaboration ever took place, and macroeconomics was impoverished as a result. They are both gone, but we still benefit from having Duncan Foley with us, still active and still making important contributions to our understanding. And we should be grateful.

Hayek, Radner and Rational-Expectations Equilibrium

In revising my paper on Hayek and Three Equilibrium Concepts, I have made some substantial changes to the last section which I originally posted last June. So I thought I would post my new updated version of the last section. The new version of the paper has not been submitted yet to a journal; I will give a talk about it at the colloquium on Economic Institutions and Market Processes at the NYU economics department next Monday. Depending on the reaction I get at the Colloquium and from some other people I will send the paper to, I may, or may not, post the new version on SSRN and submit to a journal.

In this section, I want to focus on a particular kind of intertemporal equilibrium: rational-expectations equilibrium. It is noteworthy that in his discussions of intertemporal equilibrium, Roy Radner assigns a  meaning to the term “rational-expectations equilibrium” very different from the one normally associated with that term. Radner describes a rational-expectations equilibrium as the equilibrium that results when some agents can make inferences about the beliefs of other agents when observed prices differ from the prices that the agents had expected. Agents attribute the differences between observed and expected prices to the superior information held by better-informed agents. As they assimilate the information that must have caused observed prices to deviate from their expectations, agents revise their own expectations accordingly, which, in turn, leads to further revisions in plans, expectations and outcomes.

There is a somewhat famous historical episode of inferring otherwise unknown or even secret information from publicly available data about prices. In 1954, one very rational agent, Armen Alchian, was able to identify which chemicals were being used in making the newly developed hydrogen bomb by looking for companies whose stock prices had risen too rapidly to be otherwise explained. Alchian, who spent almost his entire career at UCLA while moonlighting at the nearby Rand Corporation, wrote a paper at Rand listing the chemicals used in making the hydrogen bomb. When news of his unpublished paper reached officials at the Defense Department – the Rand Corporation (from whose files Daniel Ellsberg took the Pentagon Papers) having been started as a think tank with funding by the Department of Defense to do research on behalf of the U.S. military – the paper was confiscated from Alchian’s office at Rand and destroyed. (See Newhard’s paper for an account of the episode and a reconstruction of Alchian’s event study.)

But Radner also showed that the ability of some agents to infer the information on which other agents are acting, and which causes prices to differ from the prices that had been expected, does not necessarily lead to an equilibrium. The process of revising expectations in light of observed prices may not converge on a shared set of expectations of future prices based on common knowledge. Radner’s result reinforces Hayek’s insight, upon which I remarked above, that although expectations are equilibrating variables, there is no economic mechanism that tends to bring expectations toward their equilibrium values. There is no feedback mechanism, corresponding to the normal mechanism for adjusting market prices in response to perceived excess demands or supplies, that operates on price expectations. The heavy lifting of bringing expectations into correspondence with what the future holds must be done by the agents themselves; the magic of the market goes only so far.

Although Radner’s conception of rational expectations differs from the more commonly used meaning of the term, his conception helps us understand the limitations of the conventional “rational expectations” assumption in modern macroeconomics, which is that the price expectations formed by the agents populating a model should be consistent with what the model itself predicts that those future prices will be. In this very restricted sense, I believe rational expectations is an important property of any model. If one assumes that the outcome expected by agents in a model is the equilibrium predicted by the model, then, under those expectations, the solution of the model ought to be the equilibrium of the model. If the solution of the model is somehow different from what agents in the model expect, then there is something really wrong with the model.
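Stated schematically (a stylized rendering of my own, not drawn from any particular model), the consistency property is just a fixed-point condition:

```latex
\begin{aligned}
p_t &= F\!\left(p^{e}_{t}\right) && \text{(the model maps expected prices into realized prices)}\\
p^{*} &= F\!\left(p^{*}\right) && \text{(consistency: expecting } p^{*} \text{ is self-fulfilling)}
\end{aligned}
```

Nothing in that condition implies that real-world agents possess the knowledge needed to compute p*.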

What kind of crazy model would have the property that correct expectations turn out not to be self-fulfilling? A model in which correct expectations are not self-fulfilling is a nonsensical model. But there is a huge difference between saying (a) that a model should have the property that correct expectations are self-fulfilling and saying (b) that the agents populating the model understand how the model works and, based on their knowledge of the model, form expectations of the equilibrium predicted by the model.

Rational expectations in the first sense is a minimal consistency property of an economic model; rational expectations in the latter sense is an empirical assertion about the real world. You can make such an assumption if you want, but you can’t credibly claim that it is a property of the real world. Whether it is a property of the real world is a matter of fact, not a methodological imperative. But the current sacrosanct status of rational expectations in modern macroeconomics has been achieved largely through methodological tyrannizing.

In his 1937 paper, Hayek was very clear that correct expectations are logically implied by the concept of an equilibrium of plans extending through time. But correct expectations are not a necessary, or even descriptively valid, characteristic of reality. Hayek also conceded that we don’t even have an explanation in theory of how correct expectations come into existence. He merely alluded to the empirical observation – perhaps not the most faithful description of empirical reality in 1937 – that there is an observed general tendency for markets to move toward equilibrium, implying that, over time, expectations somehow do tend to become more accurate.

It is worth pointing out that when the idea of rational expectations was introduced by John Muth (1961), he did so in the context of partial-equilibrium models in which the rational expectation in the model was the rational expectation of the equilibrium price in a particular market. The motivation for Muth to introduce the idea of a rational expectation was the cobweb-cycle model in which producers base current decisions about how much to produce for the following period on the currently observed price. But with a one-period time lag between production decisions and realized output, as is the case in agricultural markets in which the initial application of inputs does not result in output until a subsequent time period, it is easy to generate an alternating sequence of boom and bust, with current high prices inducing increased output in the following period, driving prices down, thereby inducing low output and high prices in the next period and so on.
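Here is a minimal numerical sketch of the cobweb mechanism and of Muth’s alternative (the linear demand and supply parameters below are illustrative assumptions, not taken from Muth’s paper):

```python
# Cobweb dynamics: producers choose period-t output on the basis of the price
# they expect, and under static expectations that is simply last period's price.
# Demand:  q_d = a - b * p           (illustrative parameters, assumed)
# Supply:  q_s = c + d * p_expected  (output decided one period in advance)

a, b = 100.0, 1.0   # demand intercept and slope
c, d = 10.0, 0.9    # supply intercept and slope of response to the expected price

p_expected = 60.0   # initial price expectation
for t in range(8):
    q = c + d * p_expected   # output planned on the basis of the expected price
    p = (a - q) / b          # price that clears the market for that output
    print(f"t={t}  expected={p_expected:6.2f}  realized={p:6.2f}")
    p_expected = p           # static expectations: next forecast is this period's price

# Muth's rational expectation: the one expectation confirmed by the market outcome,
# i.e., the price at which planned supply equals demand:
#   c + d*p = a - b*p   =>   p* = (a - c) / (b + d)
p_star = (a - c) / (b + d)
print(f"rational-expectations price p* = {p_star:.2f}")
```

With these parameters the realized price swings above and below the expected price in alternating periods, the boom-and-bust pattern just described, while the Muthian price p* is the only expectation that the market outcome confirms.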

Muth argued that rational producers would not respond to price signals in a way that led to consistently mistaken expectations, but would base their price expectations on more realistic expectations of what future prices would turn out to be. In his microeconomic work on rational expectations, Muth showed that the rational-expectation assumption was a better predictor of observed prices than the assumption of static expectations underlying the traditional cobweb-cycle model. So Muth’s rational-expectations assumption was based on a realistic conjecture of how real-world agents would actually form expectations. In that sense, Muth’s assumption was consistent with Hayek’s conjecture that there is an empirical tendency for markets to move toward equilibrium.

So, while Muth’s introduction of the rational-expectations hypothesis was an empirically progressive theoretical innovation, extending rational-expectations into the domain of macroeconomics has not been empirically progressive, rational-expectations models having consistently failed to generate better predictions than macro-models using other expectational assumptions. Instead, a rational-expectations axiom has been imposed as part of a spurious methodological demand that all macroeconomic models be “micro-founded.” But the deeper point – one that Hayek understood better than perhaps anyone else — is that there is a difference in kind between forming rational expectations about a single market price and forming rational expectations about the vector of n prices on the basis of which agents are choosing or revising their optimal intertemporal consumption and production plans.

It is one thing to assume that agents have some expert knowledge about the course of future prices in the particular markets in which they participate regularly; it is another thing entirely to assume that they have knowledge sufficient to forecast the course of all future prices and in particular to understand the subtle interactions between prices in one market and the apparently unrelated prices in another market. It is those subtle interactions that allow the kinds of informational inferences that, based on differences between expected and realized prices of the sort contemplated by Alchian and Radner, can sometimes be made. The former kind of knowledge is knowledge that expert traders might be expected to have; the latter kind of knowledge is knowledge that would be possessed by no one but a nearly omniscient central planner, whose existence was shown by Hayek to be a practical impossibility.

The key — but far from the only — error of the rational-expectations methodology that rules modern macroeconomics is the presumption that rational expectations somehow cause or bring about an intertemporal equilibrium. It is certainly a fact that people try very hard to use all the information available to them to predict what the future has in store, and any new bit of information not previously possessed will be rapidly assessed and assimilated and will inform a possibly revised set of expectations of the future. But there is no reason to think that this ongoing process of information gathering and processing and evaluation leads people to formulate correct expectations of the future or of future prices. Indeed, Radner proved that, even under strong assumptions, there is no necessity that the outcome of a process of information revision based on the differences between observed and expected prices leads to an equilibrium.

So it cannot be rational expectations that leads to equilibrium. On the contrary, rational expectations are a property of equilibrium. To speak of a “rational-expectations equilibrium” is to speak of a truism. There can be no rational expectations in the macroeconomy except in an equilibrium state, because correct expectations, as Hayek showed, are a defining characteristic of equilibrium. Outside of equilibrium, expectations cannot be rational. Failure to grasp that point is what led Morgenstern astray in thinking that the Holmes-Moriarty story demonstrated the nonsensical nature of equilibrium. It simply demonstrated that Holmes and Moriarty were playing a non-repeated game in which an equilibrium did not exist.

To think of rational expectations as somehow resulting in equilibrium is nothing but a category error, akin to thinking that a triangle is caused by its angles adding up to 180 degrees. The 180-degree sum of the angles of a triangle doesn’t cause the triangle; it is a property of the triangle.

Standard macroeconomic models are typically so highly aggregated that the extreme nature of the rational-expectations assumption is effectively suppressed. To treat all output as a single good (which involves treating the single output as both a consumption good and a productive asset generating a flow of productive services) effectively imposes the assumption that the only relative price that can ever change is the wage, so that all future relative prices but one are known in advance. That assumption effectively assumes away the problem of incorrect expectations except for two variables: the future price level and the future productivity of labor (owing to the productivity shocks so beloved of Real Business Cycle theorists).

Having eliminated all complexity from their models, modern macroeconomists, purporting to solve micro-founded macromodels, simply assume that there are just a couple of variables about which agents have to form their rational expectations. The radical simplification of the expectational requirements for achieving a supposedly micro-founded equilibrium belies the claim to have achieved anything of the sort. Whether the micro-foundational pretense affected — with apparently sincere methodological fervor — by modern macroeconomics is merely self-delusional or a deliberate hoax perpetrated on a generation of unsuspecting students is an interesting distinction, but a distinction lacking any practical significance.

Four score years after Hayek explained how challenging the notion of intertemporal equilibrium really is and the difficulties inherent in explaining any empirical tendency toward intertemporal equilibrium, modern macroeconomics has succeeded in assuming all those difficulties out of existence. Many macroeconomists feel rather proud of what modern macroeconomics has achieved. I am not quite as impressed as they are.

 

Hayek and Rational Expectations

In this, my final, installment on Hayek and intertemporal equilibrium, I want to focus on a particular kind of intertemporal equilibrium: rational-expectations equilibrium. In his discussions of intertemporal equilibrium, Roy Radner assigns a meaning to the term “rational-expectations equilibrium” very different from the meaning normally associated with that term. Radner describes a rational-expectations equilibrium as the equilibrium that results when some agents are able to make inferences about the beliefs held by other agents when observed prices differ from what they had expected prices to be. Agents attribute the differences between observed and expected prices to information held by agents better informed than themselves, and revise their own expectations accordingly in light of the information that would have justified the observed prices.

In the early 1950s, one very rational agent, Armen Alchian, was able to figure out what chemicals were being used in making the newly developed hydrogen bomb by identifying companies whose stock prices had risen too rapidly to be explained otherwise. Alchian, who spent almost his entire career at UCLA while also moonlighting at the nearby Rand Corporation, wrote a paper for Rand in which he listed the chemicals used in making the hydrogen bomb. When people at the Defense Department heard about the paper – the Rand Corporation was started as a think tank largely funded by the Department of Defense to do research that the Defense Department was interested in – they confiscated the paper from Alchian’s office and destroyed it. Joseph Newhard recently wrote a paper about this episode in the Journal of Corporate Finance. Here’s the abstract:

At RAND in 1954, Armen A. Alchian conducted the world’s first event study to infer the fuel material used in the manufacturing of the newly-developed hydrogen bomb. Successfully identifying lithium as the fusion fuel using only publicly available financial data, the paper was seen as a threat to national security and was immediately confiscated and destroyed. The bomb’s construction being secret at the time but having since been partially declassified, the nuclear tests of the early 1950s provide an opportunity to observe market efficiency through the dissemination of private information as it becomes public. I replicate Alchian’s event study of capital market reactions to the Operation Castle series of nuclear detonations in the Marshall Islands, beginning with the Bravo shot on March 1, 1954 at Bikini Atoll which remains the largest nuclear detonation in US history, confirming Alchian’s results. The Operation Castle tests pioneered the use of lithium deuteride dry fuel which paved the way for the development of high yield nuclear weapons deliverable by aircraft. I find significant upward movement in the price of Lithium Corp. relative to the other corporations and to DJIA in March 1954; within three weeks of Castle Bravo the stock was up 48% before settling down to a monthly return of 28% despite secrecy, scientific uncertainty, and public confusion surrounding the test; the company saw a return of 461% for the year.

Radner also showed that the ability of some agents to infer the information on which other agents are acting, and which causes prices to differ from the prices that had been expected, does not necessarily lead to an equilibrium. The process of revising expectations in light of observed prices may not converge on a shared set of expectations of the future based on commonly shared knowledge.

So rather than pursue Radner’s conception of rational expectations, I will focus here on the conventional understanding of “rational expectations” in modern macroeconomics, which is that the price expectations formed by the agents in a model should be consistent with what the model itself predicts that those future prices will be. In this very restricted sense, I believe rational expectations is a very important property that any model ought to have. It simply says that a model ought to have the property that if one assumes that the agents in a model expect the equilibrium predicted by the model, then, given those expectations, the solution of the model will turn out to be the equilibrium of the model. This property is a consistency and coherence property that any model, regardless of its substantive predictions, ought to have. If a model lacks this property, there is something wrong with the model.

But there is a huge difference between saying that a model should have the property that correct expectations are self-fulfilling and saying that agents are in fact capable of predicting the equilibrium of the model. Assuming the former does not entail the latter. What kind of crazy model would have the property that correct expectations are not self-fulfilling? I mean, think about it: a model in which correct expectations are not self-fulfilling is a nonsense model.

But demanding that a model not spout out gibberish is very different from insisting that the agents in the model necessarily have the capacity to predict what the equilibrium of the model will be. Rational expectations in the first sense is a minimal consistency property of an economic model; rational expectations in the latter sense is an empirical assertion about the real world. You can make such an assumption if you want, but you can’t claim that it is a property of the real world. Whether it is a property of the real world is a matter of fact, not a matter of methodological fiat. But methodological fiat is what rational expectations has become in macroeconomics.

In his 1937 paper on intertemporal equilibrium, Hayek was very clear that correct expectations are logically implied by the concept of an equilibrium of plans extending through time. But correct expectations are not a necessary, or even descriptively valid, characteristic of reality. Hayek also conceded that we don’t even have an explanation in theory of how correct expectations come into existence. He merely alluded to the empirical observation – perhaps not the most accurate description of empirical reality in 1937 – that there is an observed general tendency for markets to move toward equilibrium, implying that over time expectations do tend to become more accurate.

It is worth pointing out that when the idea of rational expectations was introduced by John Muth in the early 1960s, he did so in the context of partial-equilibrium models in which the rational expectation in the model was the rational expectation of the equilibrium price in a particular market. The motivation for Muth to introduce the idea of a rational expectation was the idea of a cobweb cycle in which producers simply assume that the current price will remain at whatever level currently prevails. If there is a time lag in production, as in agricultural markets between the initial application of inputs and the final yield of output, it is easy to generate an alternating sequence of boom and bust, with current high prices inducing increased output in the following period, driving prices down, thereby inducing low output and high prices in the next period, and so on.

Muth argued that rational producers would not respond to price signals in a way that led to consistently mistaken expectations, but would base their price expectations on more realistic expectations of what future prices would turn out to be. In his microeconomic work on rational expectations, Muth showed that the rational-expectation assumption was a better predictor of observed prices than the assumption of static expectations underlying the traditional cobweb-cycle model. So Muth’s rational-expectations assumption was based on a realistic conjecture of how real-world agents would actually form expectations. In that sense, Muth’s assumption was consistent with Hayek’s conjecture that there is an empirical tendency for markets to move toward equilibrium.

So while Muth’s introduction of the rational-expectations hypothesis was an empirically progressive theoretical innovation, extending rational-expectations into the domain of macroeconomics has not been empirically progressive, rational expectations models having consistently failed to generate better predictions than macro-models using other expectational assumptions. Instead, a rational-expectations axiom has been imposed as part of a spurious methodological demand that all macroeconomic models be “micro-founded.” But the deeper point – a point that Hayek understood better than perhaps anyone else — is that there is a huge difference in kind between forming rational expectations about a single market price and forming rational expectations about the vector of n prices on the basis of which agents are choosing or revising their optimal intertemporal consumption and production plans.

It is one thing to assume that agents have some expert knowledge about the course of future prices in the particular markets in which they participate regularly; it is another thing entirely to assume that they have knowledge sufficient to forecast the course of all future prices and in particular to understand the subtle interactions between prices in one market and the apparently unrelated prices in another market. The former kind of knowledge is knowledge that expert traders might be expected to have; the latter kind of knowledge is knowledge that would be possessed by no one but a nearly omniscient central planner, whose existence was shown by Hayek to be a practical impossibility.

Standard macroeconomic models are typically so highly aggregated that the extreme nature of the rational-expectations assumption is effectively suppressed. To treat all output as a single good (which involves treating the single output as both a consumption good and a productive asset generating a flow of productive services) effectively imposes the assumption that the only relative price that can ever change is the wage, so that all future relative prices but one are known in advance. That assumption effectively assumes away the problem of incorrect expectations except for two variables: the future price level and the future productivity of labor (owing to the productivity shocks so beloved of Real Business Cycle theorists). Having eliminated all complexity from their models, modern macroeconomists, purporting to solve micro-founded macromodels, simply assume that there are but one or at most two variables about which agents have to form their rational expectations.

Four score years after Hayek explained how challenging the notion of intertemporal equilibrium really is and the difficulties inherent in explaining any empirical tendency toward intertemporal equilibrium, modern macroeconomics has succeeded in assuming all those difficulties out of existence. Many macroeconomists feel rather proud of what modern macroeconomics has achieved. I am not quite as impressed as they are.

OMG! The Age of Trump Is upon Us

UPDATE (11/11, 10:47 am EST): Clinton’s lead in the popular vote is now about 400,000 and according to David Leonhardt of the New York Times, the lead is likely to increase to as much as 2 million votes by the time all the votes are counted.

Here’s a little thought experiment for you to ponder. Suppose that the outcome of yesterday’s election had been reversed and Hillary Clinton emerged with 270+ electoral votes but trailed Donald Trump by 200,000 popular votes. What would the world be like today? What would we be hearing from Trump and his entourage about the outcome of the election? I daresay we would be hearing about “second amendment remedies” from many of the Trumpsters. I wonder how that would have played out.

(As I write this, I am hearing news reports about rowdy demonstrations in a number of locations against Trump’s election. Insofar as these demonstrations become violent, they are certainly deplorable, but nothing we have heard from Clinton and her campaign or from leaders of the Democratic Party would provide any encouragement for violent protests against the outcome of a free election.)

But enough of fantasies about an alternative universe; in the one that we happen to inhabit, the one in which Donald Trump is going to be sworn in as President of the United States in about ten weeks, we are faced with this stark reality. The American voters, in their wisdom, have elected a mountebank (OED: “A false pretender to skill or knowledge, a charlatan: a person incurring contempt or ridicule through efforts to acquire something, esp. social distinction or glamour.”), a narcissistic sociopath, as their chief executive and head of state. The success of Trump’s demagogic campaign – a campaign repackaging the repugnant themes of such successful 20th century American demagogues as Huey Long, Father Coughlin and George Wallace (not to mention not so successful ones like the deplorable Pat Buchanan) — is now being celebrated by Trump apologists and Banana Republican sycophants as evidence of his political genius in sensing and tapping into the anger and frustrations of the forgotten white working class, as if the anger and frustration of the white working class has not been the trump card that every two-bit demagogue and would-be despot of the last 150 years has tried to play. Some genius.

I recently overheard a conversation between a close friend of mine who is a Trump supporter and a non-Trump supporter. My friend is white, but is not one of the poorly educated of whom Trump is so fond, holding a Ph.D. in physics, and being well read and knowledgeable about many subjects. Although he doesn’t like Trump, he is very conservative and can’t stand Clinton, so he decided to vote for Trump without any apparent internal struggle or second thoughts. One of his reasons for favoring Trump is his opposition to Obamacare, which he blames for the very large increase in premiums he has to pay for the medical insurance he gets through his employer. When it was pointed out to him that it is unlikely that the increase in his insurance premiums was caused by Obamacare, his response was that Obamacare has added to the regulations that insurance companies must comply with, so that the cost of those regulations is ultimately borne by those buying insurance, which means that his insurance premiums must have gone up because of Obamacare.

Since I wasn’t part of the conversation, I didn’t interrupt to point out that the standard arguments about the costs of regulation being ultimately borne by consumers of the regulated product don’t necessarily apply to markets like health care in which customers don’t have good information about whether suppliers are providing them with the services that they need or are instead providing unnecessary services to enrich themselves. In such markets, third parties (i.e., insurance companies) supposedly better informed than patients about whether the services provided to patients by their doctors are really serving the patients’ interests, and are really worth the cost of providing those services, can help protect the interests of patients. Of course, the interests of insurance companies aren’t necessarily aligned very well with the interests of their policyholders either, because insurance companies may prefer not to pay for treatments that it would be in the interests of patients to receive.

So in health markets there are doctors treating ill-informed patients whose bills are being paid by insurance companies that try to monitor doctors to make sure that doctors do not provide unnecessary services and treatments to patients. But since the interests of insurance companies may be not to pay doctors to provide services that would be beneficial to patients, who is going to protect policyholders from the insurance companies? Well, um, maybe the government should be involved. Yes, but how do we know if the government is doing a good job or bad job of looking out for the interests of patients? I don’t think that we know the answer to that question. But Obamacare, aside from making medical insurance more widely available to people who need it, is an attempt to try to make insurance companies more responsive to the interests of their policyholders. Perhaps not the smartest attempt, by any means, but given the system of health care delivery that has evolved in the United States over the past three quarters of a century, it is not obviously a step in the wrong direction.

But even if Obamacare is not working well, and I have no well thought out opinion about whether it is or isn’t, the kind of simple-minded critique that my friend was making seemed to me to be genuinely cringe-worthy. Here is a Ph.D. in physics making an argument that sounded as if it were coming straight out of the mouth of Sean Hannity. OMG! The dumbing down of America is being expertly engineered by Fox News, and, boy, are they succeeding. Geniuses, that’s what they are. Geniuses!

When I took my first economics course almost a half century ago and read the greatest economics textbook ever written, University Economics by Armen Alchian and William Allen, I was blown away by their ability to show how much sloppy and muddled thinking there was about how markets work and how controls that prevent prices from allocating resources don’t eliminate destructive or wasteful competition, but rather shift competition from relatively cheap modes like offering to pay a higher price or to accept a lower price to relatively costly forms like waiting in line or lobbying a regulator to gain access to a politically determined allocation system.

I have been a fan of free markets ever since. I oppose government intervention in the economy as a default position. But the lazy thinking that once led people to assume that government regulation is the cure for all problems now leads people to assume that government regulation is the cause of all problems. What a difference half a century makes.

Susan Woodward Remembers Armen Alchian

Susan Woodward, a former colleague and co-author of the late great Armen Alchian, was kind enough to share with me an article of hers forthcoming in a special issue of the Journal of Corporate Finance dedicated to Alchian’s memory. I thank Susan and Harold Mulherin, co-editor of the Journal of Corporate Finance for allowing me to post this wonderful tribute to Alchian.

Memories of Armen

Susan Woodward

Sand Hill Econometrics

Armen Alchian approached economics with constructive eccentricity. One aspect of it became apparent long ago when I taught intermediate price theory, a two-quarter course. Jack Hirshleifer’s new text (Hirshleifer (1976)) was just out, and his approach was the foundation of my own training, so that was an obvious choice. But also, Alchian and Allen’s University Economics (Alchian and Allen (1964)) had been usefully separated into parts, of which Exchange and Production: Competition, Coordination, and Control (Alchian and Allen (1977)), the “price theory” part, was available in paperback. I used both books.

Somewhere in the second quarter we got to the topic of rent. Rent is such a difficult topic because it’s a word in everyone’s vocabulary but to which economists give a special, second meaning. To prepare a discussion, I looked up “rent” in the index of both texts. In Hirshleifer (1976), it appeared for the first time on some page like 417. In Alchian & Allen (1977), it appeared, say, on page 99, and page 102, and page 188, and pages 87-88, 336-338, and 364-365. It was peppered all through the book.

Hirshleifer approached price theory as geometry. Lay out the axioms, prove the theorems. And never introduce a new idea, especially one like “rent” that collides with standard usage, without a solid foundation. The Alchian approach is more exploratory. “Oh, here’s an idea. Let’s walk around the idea and see what it looks like from all sides. Let’s tip it over and see what’s under it and what kind of noise it makes. Let’s light a fire under it and just see what happens. Drop it ten stories.” The books were complements, not substitutes.

While this textbook story illustrates one aspect of Armen’s thinking, the big epiphanies came working on our joint papers. Unusual for students at UCLA in that era, I didn’t have Armen as a teacher. My first year, Armen was away, and Jack Hirshleifer taught the entire first-year price theory course. Entranced by the finance segment of that year, I found the lure of finance in business school irresistible. But fortune did not abandon me.

I came back to UCLA to teach at the dawn of personal computers. Oh they were feeble! There was a little room on the eighth floor of Bunche Hall where there were three little Compaq computers—the ones with really tiny green-on-black screens. Portable, sort of, but not like a purse. Armen and I were regulars in this word processing cave. Armen would get bored and start a conversation by asking some profound question. I’d flounder a bit and tell him I didn’t know and go back to work. But one day he asked why corporations limit liability. Whew, something to say. It is not a risk story, but about facilitating transferable shares. Limit liability, then shareholders and contracting creditors can price possible recovery, and the wealth and resources of individual shareholders are then irrelevant. When liability tries to reach beyond the firm’s assets to those of individual shareholders, shareholder wealth matters to value, and this creates reasons for inhibiting share transfers. You can limit liability and still address concern about tort creditors by having the firm carry insurance for torts.

Armen asked “How did you figure this out?” I said, “I don’t know.” “Have you written it down?” “No, it doesn’t seem important enough, it would only be two pages.” “Oh, no, of course it is!” He was right. What I wrote at Armen’s insistence, Woodward (1985), is now in two books of readings on the modern corporation, still in print, still on reading lists, and yes it was more than two pages. The paper by Bargeron and Lehn (2015) in this volume provides empirical confirmation about the impact of limited liability on share transferability. After our conversations about limited liability, Armen never again called me “Joanne,” as in the actress, Joanne Woodward, wife of Paul Newman.

This led to many more discussions about the organization of firms. I was dismayed by the seeming mysticism of "teamwork" as discussed in the old Alchian & Demsetz paper. Does it not all boil down to moral hazard and hold-up, both aspects of information costs, and the potential for the residual claimant to manage these? Armen came to agree, and thought that this, too, was worth writing up. So we started writing. I scribbled down my thoughts. Armen read them and said, "Well, this is right, but it will make Harold (Demsetz) mad. We can't say it that way. We'll say it another way." Armen saw it as his job to bring Harold around.

As we started working on this paper (Alchian and Woodward (1987)), I asked Armen, “What journal should we be thinking of?” Armen said “Oh, don’t worry about that, something will come along”. It went to Rolf Richter’s journal because Armen admired Rolf’s efforts to promote economic analysis of institutions. There are accounts of Armen pulling accepted papers from journals in order to put them into books of readings in honor of his friends, and these stories are true. No journal impressed Armen very much. He thought that if something was good, people would find it and read it.

Soon after the first paper was circulating, Orley Ashenfelter asked Armen to write a book review of Oliver Williamson’s The Economic Institutions of Capitalism (such a brilliant title!). I got enlisted for that project too (Alchian and Woodward (1988)). Armen began writing, but I went back to reread Institutions of Capitalism. Armen gave me what he had written, and I was baffled. “Armen, this stuff isn’t in Williamson.” He asked, “Well, did he get it wrong?” I said, “No, it’s not that he got it wrong. These issues just aren’t there at all. You attribute these ideas to him, but they really come from our other paper.” And he said “Oh, well, don’t worry about that. Some historian will sort it out later. It’s a good place to promote these ideas, and they’ll get the right story eventually.” So, dear reader, now you know.

This from someone who spent his life discussing the efficiencies of private property and property rights—to basically give ideas away in order to promote them? It was a good lesson. I was just starting my ten years in the federal government. In academia, thinkers try to establish property rights in ideas. "This is mine. I thought of this. You must cite me." In government this is not a winning strategy. Instead, you need to plant an idea, then convince others that it's their idea so they will help you.

And it was sometimes Armen’s strategy in the academic world too. Only someone who was very confident would do this. Or someone who just cared more about promoting ideas he thought were right than he cared about getting credit for them. Or someone who did not have so much respect for the refereeing process. He was so cavalier!

Armen had no use for formal models that did not teach us to look somewhere new in the known world, nor had he any patience for findings that relied on fancy econometrics. What was Armen's idea of econometrics? Merton Miller told me. We were chatting about limited liability. Merton asked about evidence. Well, all public firms with transferable shares now have limited liability. But in private, closely-held firms, loans nearly always explicitly specify which of the owner's personal assets are pledged against bank loans. "How do you know?" "From conversations with bankers." Merton said, "Ah, this sounds like UCLA econometrics! You go to Armen Alchian and you ask, 'Armen, is this number about right?' And Armen says, 'Yeah, that sounds right.' So you use that number."

Why is Armen loved so much? It’s not just his contributions to our understanding because many great thinkers are hardly loved at all. Several things stand out. As noted above, Armen’s sense of what is important is very appealing. Ideas are important. Ideas are more important than being important. Don’t fuss over the small stuff or the small-minded stuff, just work on the ideas and get them right. Armen worked at inhibiting inefficient behavior, but never in an obvious way. He would be the first to agree that not all competition is efficient, and in particular that status competition is inefficient. Lunches and dinners with Armen never included conversations about who was getting tenure where or why various papers got in or did not get in to certain journals. He thought it just did not matter very much or deserve much attention.

Armen was intensely curious about the world and interested in things outside of himself. He was one of the least self-indulgent people that I have ever met. It cheered everybody up. Everyone was in a better mood for the often silly questions that Armen would ask about everything, such as, “Why do they use decorations in the sushi bar and not anywhere else? Is there some optimality story here?” Armen recognized his own limitations and was not afraid of them.

Armen's views on inefficient behavior came out in an interesting way when we were working on the Williamson book review. What does the word "fair" mean? In the early 1970s at UCLA, no one was very comfortable with "fair". Many would even have said "fair" has no meaning in economics. But then we got to pondering the car repair person in the desert (remember Los Angeles is next to a big desert), who is in a position to hold up unlucky motorists whose vehicles break down in a remote place. Why would the mechanic not hold up the motorist and charge a high price? The mechanic has the power. Information about occasional holdups would provoke inefficient avoidance of travel or taking ridiculous precautions. But from the individual perspective, why wouldn't the mechanic engage in opportunistic behavior, on the spot? "Well," Armen said, "probably he doesn't do it because he was raised right." Armen knew what "fair" meant, and was willing to take a stand on it being efficient.

For all his reputation as a conservative, Armen was very interested in Earl Thompson's ideas about socially efficient institutions, and the useful constraints that collective action could and does impose on us (see, for example, Thompson (1968, 1974)). He had more patience for Earl than any of Earl's other senior colleagues except possibly Jim Buchanan. Earl could go on all evening and longer about the welfare cost of the status rat race, of militarism and how to discourage it, the brilliance of agricultural subsidies, how no one should listen to corrupt elites, and Armen would smile and nod and ponder.

Armen was a happy teacher. As others attest in this issue, he brought great energy, engagement, and generosity to the classroom. He might have been dressed for golf, but he gave students his complete attention. He especially enjoyed teaching the judges in Henry Manne's Economics & Law program. One former pupil sought him out and, at dinner, brought up the Apple v. Microsoft copyright dispute. He wanted to discuss the merits of the issues. Armen said oh no, simply get the thing over with ASAP. Armen said that he was a shareholder in both companies, and consequently did not care who won, but cared very much about what resources were squandered on the battle. Though the economics of this perspective was not novel (it was aired in Texaco v. Pennzoil a few years earlier), Armen provided in that conversation a view that neither side had an interest in promoting in court. The reaction was: Oh! Those who followed this case might have been puzzled at the subsequent proceedings in this dispute, but those who heard the conversation at dinner were not.

And Armen was a warm and sentimental person. When I moved to Washington, I left my roller skates in the extra bedroom where I slept when I visited Armen and Pauline. These were old-fashioned skates with two wheels in the front and two in the back, Riedell boots and kryptonite wheels, bought at Rip City Skates on Santa Monica Boulevard (which is still there in 2015! I just looked it up; the proprietor knows all the empty swimming pools within 75 miles). I would take my skates down to the beach and skate from Santa Monica to Venice and back, then go buy some cinnamon rolls at the Pioneer bakery, and bring them back to Mar Vista, and Armen and Pauline and I would eat them. Armen loved this ritual. "Is she back yet?" When I married Bob Hall and moved back to California, Armen did not want me to take the skates away. So I didn't.

And here is a story Armen loved: Ron Batchelder was a student at UCLA who is also a great tennis player, a professional tennis player who had to be lured out of tennis and into economics, and who has written some fine economic history and more. He played tennis with Armen regularly for many years. On one occasion before dinner Armen said to Ron, “I played really well today.” Ron said, “Yes, you did; you played quite well today.” And Armen said, “But you know what? When I play better, you play better.” And Ron smiled and shrugged his shoulders. I said, “Ron, is it true?” He shrugged again and said, “Well, a long time ago, I learned to play the customer’s game.” And of course Armen just loved that line. He re-told that story many times.

Armen’s enthusiasm for that story is a reflection of his enthusiasm for life. It was a rare enthusiasm, an extraordinary enthusiasm. We all give him credit for it and we should, because it was an act of choice; it was an act of will, a gift to us all. Armen would have never said so, though, because he was raised right.

References

Alchian, Armen A., William R. Allen, 1964. University Economics. Wadsworth Publishing Company, Belmont, CA.

Alchian, Armen A., William R. Allen, 1977. Exchange and Production: Competition, Coordination, and Control, 2nd edition. Wadsworth Publishing Company, Belmont, CA.

Alchian, Armen A., Woodward, Susan, 1987. “Reflections on the theory of the firm.” Journal of Institutional and Theoretical Economics. 143, 110-136.

Alchian, Armen A., Woodward, Susan, 1988. “The firm is dead: Long live the firm: A review of Oliver E. Williamson’s The Economic Institutions of Capitalism.” Journal of Economic Literature. 26, 65-79.

Bargeron, Leonce, Lehn, Kenneth, 2015. “Limited liability and share transferability: An analysis of California firms, 1920-1940.” Journal of Corporate Finance, this volume.

Hirshleifer, Jack, 1976. Price Theory and Applications. Prentice Hall, Englewood Cliffs, NJ.

Thompson, Earl A., 1968. “The perfectly competitive production of public goods.” Review of Economics and Statistics. 50, 1-12.

Thompson, Earl A., 1974. “Taxation and national defense.” Journal of Political Economy. 82, 755-782.

Woodward, Susan E., 1985. “Limited liability in the theory of the firm.” Journal of Institutional and Theoretical Economics. 141, 601-611.

Microfoundations (aka Macroeconomic Reductionism) Redux

In two recent blog posts (here and here), Simon Wren-Lewis wrote sensibly about microfoundations. Though triggered by Wren-Lewis's posts, the following comments are not intended as criticisms of him, though I think he does give microfoundations (as they are now understood) too much credit. Rather, my criticism is aimed at the way microfoundations have come to be used to restrict the kind of macroeconomic explanations and models that are up for consideration among working macroeconomists. I have written about microfoundations before on this blog (here and here), and some, if not most, of what I am going to say may be repetitive, but obviously the misconceptions associated with what Wren-Lewis calls the "microfoundations project" are not going to be dispelled by a couple of blog posts, so a little repetitiveness may not be such a bad thing. Jim Buchanan liked to quote the following passage from Herbert Spencer's Data of Ethics:

Hence an amount of repetition which to some will probably appear tedious. I do not, however, much regret this almost unavoidable result; for only by varied iteration can alien conceptions be forced on reluctant minds.

When the idea of providing microfoundations for macroeconomics started to catch on in the late 1960s – and probably nowhere did it catch on sooner or with more enthusiasm than at UCLA – the idea resonated, because macroeconomics, which then mainly consisted of various versions of the Keynesian model, seemed to embody certain presumptions about how markets work that contradicted the presumptions of microeconomics. In microeconomics, the primary mechanism for achieving equilibrium is the price (actually the relative price) of whatever good is being analyzed. A full (or general) microeconomic equilibrium involves a set of prices such that every market (whether for final outputs or for inputs into the productive process) is in equilibrium, equilibrium meaning that every agent is able to purchase or sell as much of any output or input as desired at the equilibrium price. The set of equilibrium prices not only clears every market; under some conditions, the resulting equilibrium also has optimal properties, because each agent, in choosing how much to buy or sell of each output or input, is presumed to be acting in a way that is optimal given the preferences of the agent and the social constraints under which the agent operates. Those optimal properties don't always follow from microeconomic presumptions, optimality being dependent on the particular assumptions (about preferences, production and exchange technology, and property rights) adopted by the analyst in modeling an individual market or an entire system of markets.

The problem with Keynesian macroeconomics was that it seemed to overlook, or ignore, or dismiss, or deny, the possibility that a price mechanism is operating — or could operate — to achieve equilibrium in the markets for goods and for labor services. In other words, the Keynesian model seemed to be saying that a macroeconomic equilibrium is compatible with the absence of market clearing, notwithstanding that the absence of market clearing had always been viewed as the defining characteristic of disequilibrium. Thus, from the perspective of microeconomic theory, if there is an excess supply of workers offering labor services, i.e., there are unemployed workers who would be willing to be employed at the same wage that currently employed workers are receiving, there ought to be market forces that would reduce wages to a level such that all workers willing to work at that wage could gain employment. Keynes, of course, had attempted to explain why workers could only reduce their nominal wages, not their real wages, and argued that nominal wage cuts would simply induce equivalent price reductions, leaving real wages and employment unchanged. The microeconomic reasoning on which that argument was based hinged on Keynes's assumption that nominal wage cuts would trigger proportionate price cuts, but that assumption was not exactly convincing, if only because the percentage price cut would seem to depend not just on the percentage reduction in the nominal wage, but also on the labor intensity of the product. Keynes, habitually and inconsistently, argued as if labor were the only factor of production while at the same time invoking the principle of diminishing marginal productivity.
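To see the arithmetic behind that objection, consider a minimal sketch (my own illustration, not anything in Keynes): suppose price is set at unit cost, \(p = w a_L + m\), where \(a_L\) is the labor required per unit of output, \(w\) the money wage, and \(m\) non-labor cost per unit, assumed unchanged. Then

\[
\frac{dp}{p} \;=\; s_L \,\frac{dw}{w}, \qquad s_L \equiv \frac{w a_L}{p},
\]

so a cut in money wages lowers the price only in proportion to labor's share of unit cost, \(s_L\). With \(s_L = 0.7\), a 10 percent wage cut lowers the price by about 7 percent, so the real wage falls by roughly 3 percent; wage and price fall equiproportionately only in the limiting case \(s_L = 1\), i.e., only if labor is effectively the sole factor of production, which is precisely the assumption Keynes needed but could not consistently maintain.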

At UCLA, the point of finding microfoundations was not to create a macroeconomics that would simply reflect the results and optimal properties of a full general-equilibrium model. Indeed, what made the UCLA approach to microeconomics distinctive was that it aimed at deriving testable implications from relaxing the usual informational and institutional assumptions (full information, zero transactions costs, fully defined and enforceable property rights) underlying conventional microeconomic theory. If the way forward in microeconomics was to move away from the extreme assumptions underlying the perfectly competitive model, then it seemed plausible that relaxing those assumptions would be fruitful in macroeconomics as well. That led Armen Alchian and others at UCLA to think of unemployment as largely a search phenomenon. For a while that approach seemed promising, and to some extent the promise was fulfilled, but many implications of a purely search-theoretic approach to unemployment don't seem to be well supported empirically. For example, search models suggest that in recessions quits increase, and that workers become more likely to refuse offers of employment after the downturn than before. Neither of those implications seems to be true. A search model would suggest that workers are unemployed because they are refusing offers below their reservation wage, but in fact most workers become unemployed because they are laid off, and in recessions workers seem willing to accept offers of employment at the same wage that other workers are getting. Now it is possible to reinterpret workers' behavior in recessions in a way that corresponds to the search-theoretic model, but the reinterpretation seems a bit of a stretch.

Even though he was an early exponent of the search theory of unemployment, Alchian greatly admired and frequently cited a 1974 paper by Donald Gordon, "A Neoclassical Theory of Keynesian Unemployment," which proposed an implicit-contract theory of the employer-employee relationship. The idea was that workers make long-term commitments to their employers and, realizing that this commitment leaves them vulnerable to exploitation by a unilateral wage cut imposed under threat of termination, expect some assurance from their employer that they will not be subjected to a unilateral demand to accept a wage cut. Such implicit understandings make it very difficult for employers, facing a reduction in demand, to force workers to accept a wage cut, because doing so would make it hard for the employer to retain those workers that are most highly valued and to attract new workers.

Gordon’s theory of implicit wage contracts has a certain similarity to Dennis Carlton’s explanation of why many suppliers don’t immediately raise prices to their steady customers. Like Gordon, Carlton posits the existence of implicit and sometimes explicit contracts in which customers commit to purchase minimum quantities or to purchase their “requirements” from a particular supplier. In return for the assurance of having a regular customer on whom the supplier can count, the supplier gives the customer assurance that he will receive his customary supply at the agreed upon price even if market conditions should change. Rather than raise the price in the event of a shortage, the supplier may feel that he is obligated to continue supplying his regular customers at the customary price, while raising the price to new or occasional customers to “market-clearing” levels. For certain kinds of supply relationships in which customer and supplier expect to continue transacting regularly over a long period of time, price is not the sole method by which allocation decisions are made.

Klein, Crawford and Alchian discussed a similar idea in their 1978 article about vertical integration as a means of avoiding or mitigating the threat of holdup when a supplier and a customer must invest in some sunk asset, e.g., a pipeline connection, for the supply relationship to be possible. The sunk investment implies that either party, under the right circumstances, could hold up the other party by threatening to withdraw from the relationship, leaving the other party stuck with a useless fixed asset. Vertical integration avoids the problem by aligning the incentives of the two parties, eliminating the potential for holdup. Price rigidity can thus be viewed as a milder form of vertical integration in cases where transactors have a relatively long-term relationship and want to assure each other that they will not be taken advantage of after making a commitment (i.e., foregoing other trading opportunities) to the other party.

The search model is fairly easy to incorporate into a standard framework, because search can be treated as a form of self-employment that is an alternative to accepting employment. The shape and position of the individual's supply curve reflects his expectations about future wage offers that he will receive if he chooses not to accept employment in the current period. The more optimistic the worker's expectation of future wages, the higher the worker's reservation wage in the current period. The more certain the worker feels about the expected future wage, the more elastic is his supply curve in the neighborhood of the expected wage. Thus, despite its empirical shortcomings, the search model could serve as a convenient heuristic device for modeling cyclical increases in unemployment caused by the unwillingness of workers to accept nominal wage cuts. From a macroeconomic modeling perspective, the incorrect or incomplete representation of the reason for the unwillingness of workers to accept wage cuts may be less important than the overall implication of the model, which is that unanticipated aggregate-demand shocks can have significant and persistent effects on real output and employment. For example, in his reformulation of macroeconomic theory, Earl Thompson, though he was certainly aware of Donald Gordon's paper, relied exclusively on a search-theoretic rationale for Keynesian unemployment, and I don't know (or can't remember) whether he had a specific objection to Gordon's model or simply preferred the search-theoretic approach for pragmatic modeling reasons.
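For readers who want the mechanics spelled out, here is a minimal sketch of the reservation-wage logic underlying such search models (the textbook McCall setup, my illustration rather than anything in Alchian's or Thompson's own writings): an unemployed worker receives \(b\) per period while searching, discounts the future by \(\beta\), and each period draws a wage offer from a distribution \(F\); accepted jobs last indefinitely. The reservation wage \(w^{*}\) then satisfies

\[
w^{*} - b \;=\; \frac{\beta}{1-\beta}\int_{w^{*}}^{\infty}\!\left(w - w^{*}\right)\,dF(w).
\]

Offers below \(w^{*}\) are refused; more optimistic beliefs about the offer distribution \(F\) raise \(w^{*}\), which is the precise sense in which expectations of future wages determine the current supply price of labor and, in the aggregate, measured unemployment.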

At any rate, these comments about the role of search models in modeling unemployment decisions are meant to illustrate why microfoundations could be useful for macroeconomics: by adding to the empirical content of macromodels, providing insight into the decisions or circumstances that lead workers to accept or reject employment in the aftermath of aggregate-demand shocks, or into why employers impose layoffs on workers rather than offer employment at reduced wages. The spectrum of such microeconomic theories of the employer-employee relationship has provided us with a richer understanding of what the term "sticky wages" might actually be referring to, beyond the existence of minimum-wage laws or collective-bargaining contracts specifying nominal wages over a period of time for all covered employees.

In this context microfoundations meant providing a more theoretically satisfying, more microeconomically grounded explanation for a phenomenon – "sticky wages" – that seemed somehow crucial for generating the results of the Keynesian model. I don't think that anyone would question that microfoundations in this narrow sense has been an important and useful area of research. And it is not microfoundations in this sense that is controversial. The sense in which microfoundations is controversial is whether a macroeconomic model must show that the aggregate quantities it generates are consistent with the optimizing choices of all agents in the model. In other words, the equilibrium solution of a macroeconomic model must be such that all agents are optimizing intertemporally, subject to whatever informational imperfections are specified by the model. If the model is not derived from or consistent with the solution to such an intertemporal optimization problem, the macromodel is now considered inadequate and unworthy of consideration. Here's how Michael Woodford, a superb economist, but very much part of the stifling microfoundations consensus that has overtaken macroeconomics, put it in his paper "Convergence in Macroeconomics: Elements of the New Synthesis."

But it is now accepted that one should know how to render one’s growth model and one’s business-cycle model consistent with one another in principle, on those occasions when it is necessary to make such connections. Similarly, microeconomic and macroeconomic analysis are no longer considered to involve fundamentally different principles, so that it should be possible to reconcile one’s views about household or firm behavior, or one’s view of the functioning of individual markets, with one’s model of the aggregate economy, when one needs to do so.

In this respect, the methodological stance of the New Classical school and the real business cycle theorists has become the mainstream. But this does not mean that the Keynesian goal of structural modeling of short-run aggregate dynamics has been abandoned. Instead, it is now understood how one can construct and analyze dynamic general-equilibrium models that incorporate a variety of types of adjustment frictions, that allow these models to provide fairly realistic representations of both shorter-run and longer-run responses to economic disturbances. In important respects, such models remain direct descendants of the Keynesian macroeconometric models of the early postwar period, though an important part of their DNA comes from neoclassical growth models as well.

Woodford argues that by incorporating various imperfections into their general-equilibrium models, e.g., imperfectly competitive output and labor markets, lags in the adjustment of wages and prices to changes in market conditions, and search and matching frictions, it is possible to reconcile the existence of underutilized resources with intertemporal optimization by agents.

The insistence of monetarists, New Classicals, and early real business cycle theorists on the empirical relevance of models of perfect competitive equilibrium — a source of much controversy in past decades — is not what has now come to be generally accepted. Instead, what is important is having general-equilibrium models in the broad sense of requiring that all equations of the model be derived from mutually consistent foundations, and that the specified behavior of each economic unit make sense given the environment created by the behavior of the others. At one time, Walrasian competitive equilibrium models were the only kind of models with these features that were well understood; but this is no longer the case.

Woodford shows no recognition of the possibility of multiple equilibria, or that the evolution of an economic system and time-series data may be path-dependent, making the long-run neutrality propositions characterizing most DSGE models untenable. If the world – the data generating mechanism – is not like the world assumed by modern macroeconomics, the estimates derived from econometric models reflecting the worldview of modern macroeconomics will be inferior to estimates derived from an econometric model reflecting another, more accurate, world view. For example, if there are many possible equilibria depending on changes in expectational parameters or on the accidental deviations from an equilibrium time path, the idea of intertemporal optimization may not even be meaningful. Rather than optimize, agents may simply follow certain simple rules of thumb. But, on methodological principle, modern macroeconomics treats the estimates generated by any alternative econometric model insufficiently grounded in the microeconomic principles of intertemporal optimization as illegitimate.
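To make concrete what "grounded in the microeconomic principles of intertemporal optimization" means in practice, here is the standard consumption Euler condition that sits at the core of virtually every DSGE model (a generic textbook statement, not a quotation from Woodford or anyone else):

\[
u'(c_t) \;=\; \beta\, E_t\!\left[(1+r_{t+1})\,u'(c_{t+1})\right],
\]

which says that each household equates the marginal utility sacrificed by saving one more unit today with the discounted, expected marginal utility of consuming the proceeds tomorrow, in every period and every state of the world. A model whose aggregate time paths cannot be rationalized by conditions of this kind is, on the prevailing methodological view, ruled out of court.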

Even worse from the perspective of microfoundations are the implications of something called the Sonnenschein-Mantel-Debreu Theorem, which, as I imperfectly understand it, says something like the following. Even granting the usual assumptions of the standard general-equilibrium model — continuous individual demand and supply functions, homogeneity of degree zero in prices, Walras's Law, and suitable boundary conditions on demand and supply functions — there is no guarantee that there is a unique stable equilibrium for such an economy. Thus, even apart from the dependence of equilibrium on expectations, there is no rationally expected equilibrium, because there is no unique equilibrium to serve as an attractor for expectations. Thus, as I have pointed out before, as much as macroeconomics may require microfoundations, microeconomics requires macrofoundations, perhaps even more so.
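For what it is worth, here is a hedged statement of the result as I understand it (a paraphrase of the standard textbook version, not a formal theorem): let \(z(p)\) denote aggregate excess demand, satisfying

\[
z(\lambda p) = z(p)\ \text{ for all } \lambda > 0, \qquad p \cdot z(p) = 0 .
\]

The Sonnenschein-Mantel-Debreu results say, roughly, that any continuous function satisfying these two conditions on a set of strictly positive prices bounded away from zero can be generated as the aggregate excess demand of some economy of perfectly well-behaved utility-maximizing consumers. Aggregation, in other words, imposes essentially no restrictions beyond homogeneity and Walras's Law, so neither uniqueness nor stability of equilibrium can be guaranteed from individual rationality alone.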

Now let us compare the methodological demand for microfoundations for macroeconomics, which I would describe as a kind of macroeconomic methodological reductionism, with the reductionism of Newtonian physics. Newtonian physics reduced the Keplerian laws of planetary motion to more fundamental principles of gravitation governing the motion of all bodies, celestial and terrestrial. In so doing, Newtonian physics achieved an astounding increase in explanatory power and empirical scope. What has the methodological reductionism of modern macroeconomics achieved? Reductionism was not the source, but the result, of scientific progress. But as Carlaw and Lipsey demonstrated recently in an important paper, methodological reductionism in macroeconomics has resulted in a clear retrogression in empirical and explanatory power. Thus, methodological reductionism in macroeconomics is an antiscientific exercise in methodological authoritarianism.

Remembering Armen Alchian

On March 23, a memorial service celebrating the life of Armen Alchian was held at UCLA. David Henderson was there and shared some vignettes from the service.

Here is a webpage with pictures from the memorial.

Here is a tribute to Alchian by one of his finest students, Stephen N. S. Cheung. I found this passage especially moving, but follow the link and read the entire eulogy by Cheung.

Back in the old days at UCLA, it was not easy for graduate students to discuss research ideas with Alchian in person.  Most students harbored the impression that he was aloof and not very approachable.  I shared the same view initially, but discovered the contrary later.  The following is a true story.

In early 1967, after finishing the first lengthy chapter of my thesis, I received news from Hong Kong that my elder brother (who was a year older) had passed away.  Understanding that my mother must be shattered by the death of her favorite son, I thought about giving up at UCLA and returning to Hong Kong to be near her.  At that time I was already an assistant professor at the California State University at Long Beach.  I drove back to LA to tell Jack Hirshleifer the sad news and my intention to quit.  Hirshleifer thought that it would be a pity to abandon my dissertation, on which I had already made very good progress.  He then said he would discuss with other members of my thesis committee the possibility of granting me a PhD on the strength of the first long chapter alone.

That afternoon I went to see Alchian, planning to tell him what I told Hirshleifer.  Alchian obviously knew what I had in mind.  But before I had a chance to say anything, he said, “Don’t tell me anything about your personal matters.”  So I left without a word.  One day later in Long Beach, I received a letter from Alchian with a $500 check enclosed and simply two lines: “You can buy candies with this $500 or you can hire a typist to help you finish your dissertation as quickly as possible.”  This $500 was equivalent to my one month’s gross salary, so it was not a small amount.  What other alternatives did I have?  In less than two months I wrapped up my dissertation.  Alchian said it was a miracle.  In retrospect, I regret cashing that check and spending that $500.  If I had kept the check, I could now show it to my children, grandchildren, and students while telling them this proud story.  I know Armen would say, “Steve, put that check up for auction and see how much it would fare now.”

Here is another eulogy from the same website, and another eulogy from that website — in Chinese!

Here are remembrances from some of Alchian’s UCLA colleagues, including David Levine, John Riley and Harold Demsetz.

Here is the obituary about Alchian from the Los Angeles Times.

And finally (for now), here is a video clip of Alchian speaking about property rights.

Armen Alchian, The Economists’ Economist

The first time that I ever heard of Armen Alchian was when I took introductory economics at UCLA as a freshman, and his book (co-authored with his colleague William R. Allen who was probably responsible for the macro and international chapters) University Economics (the greatest economics textbook ever written) was the required text. I had only just started to get interested in economics, and was still more interested in political philosophy than in economics, but I found myself captivated by what I was reading in Alchian’s textbook, even though I didn’t find the professor teaching the course very exciting. And after 10 weeks (the University of California had switched to a quarter system) of introductory micro, I changed my major to economics. So there is no doubt that I became an economist because the textbook that I was taught from was written by Alchian.

In my four years as an undergraduate at UCLA, I took three classes from Axel Leijonhufvud, two from Ben Klein, two from Bill Allen, and one each from Robert Rooney, Nicos Devletoglou, James Buchanan, Jack Hirshleifer, George Murphy, and Jean Balbach. But Alchian, who in those days was not teaching undergrads, was a looming presence. It became obvious that Alchian was the central figure in the department, the leader and the role model that everyone else looked up to. I would see him occasionally on campus, but was too shy or too much in awe of him to introduce myself to him. One incident that I particularly recall is when, in my junior year, F. A. Hayek visited UCLA in the fall and winter quarters (in the department of philosophy!) teaching an undergraduate course in the philosophy of the social sciences and a graduate seminar on the first draft of Law, Legislation and Liberty. I took Hayek’s course on the philosophy of the social sciences, and audited his graduate seminar, and I occasionally used to visit his office to ask him some questions. I once asked his advice about which graduate programs he would suggest that I apply to. He mentioned two schools, Chicago, of course, and Princeton where his friends Fritz Machlup and Jacob Viner were still teaching, before asking, “but why would you think of going to graduate school anywhere else than UCLA? You will get the best training in economics in the world from Alchian, Hirshleifer and Leijonhufvud.” And so it was, I applied to, and was accepted at, Chicago, but stayed at UCLA.

As a first-year graduate student, I took the (three-quarter) microeconomics sequence from Jack Hirshleifer (who in the scholarly hierarchy at UCLA ranked only slightly below Alchian) and the two-quarter macroeconomics sequence from Leijonhufvud. Hirshleifer taught a great course. He was totally prepared, very organized, and his lectures were always clear and easy to follow. To do well, you had to sit back and listen, review the lecture notes, read through the reading assignments, and do the homework problems. For me at least, with the benefit of four years of UCLA undergraduate training, it was a breeze.

Great as Hirshleifer was as a teacher, I still felt that I was missing out by not having been taught by Alchian. Perhaps Alchian felt that the students who took the microeconomics sequence from Hirshleifer should get some training from him as well, so the next year he taught a graduate seminar in topics in price theory, to give us an opportunity to learn from him how to do economics. You could also see how Alchian operated if you went to a workshop or lecture by a visiting scholar, when Alchian would start to ask questions. He would smile, put his hand on his forehead, and say something like, "I just don't understand that," and force whoever it was to try to explain the logic by which he had arrived at some conclusion. And Alchian would just keep smiling, explain what the problem was with the answer he got, and ask more questions. Alchian didn't shout or rant or rave, but if Alchian was questioning you, you were not in a very comfortable position.

So I was more than a bit apprehensive going into Alchian's seminar. There were all kinds of stories told by graduate students about how tough Alchian could be on his students if they weren't able to respond adequately when subjected to his questioning in the Socratic style. But the seminar could not have been more enjoyable. There was give and take, but I don't remember seeing any blood spilled. Perhaps by the time I got to his seminar, Alchian, then about 57, had mellowed a bit, or, maybe, because we had all gone through the graduate microeconomics sequence, he felt that we didn't require such an intense learning environment. At any rate, the seminar, which met twice a week for an hour and a quarter for 10 weeks, usually involved Alchian picking a story from the newspaper and asking us how to analyze the economics underlying the story. Armed with nothing but a chalkboard and a piece of chalk, Alchian would lead us relatively painlessly from confusion to clarity, from obscurity to enlightenment. The key concepts with which to approach any problem were to understand the choices available to those involved, to define the relevant costs, and to understand the constraints under which choices are made, the constraints being determined largely by the delimitation of the property rights under which resources can be used or exchanged, or, to be more precise, under which the property rights to use those resources can be exchanged.

Ultimately, the lesson that I learned from Alchian is that, at its best, economic theory is a tool for solving real problems, and the nature of the problem ought to dictate the way in which the theory (verbal, numerical, graphical, higher mathematical) is deployed, not the other way around. The goal is not to reach any particular conclusion, but to apply the tools in the best and most authentic way that they can be applied. Alchian did not wear his politics on his sleeve, though it wasn't too hard to figure out that he was politically conservative with libertarian tendencies. But you never got the feeling that his politics dictated his economic analysis. In many respects, Alchian's closest disciple was Earl Thompson, who studied under Alchian as an undergraduate and then, after playing minor-league baseball for a couple of years, went to Harvard for graduate school, eventually coming back to UCLA as an assistant professor, where he remained for his entire career. Earl, discarding his youthful libertarianism early on, developed many completely original, often eccentric, theories about the optimality of all kinds of government interventions – even protectionism – opposed by most economists, but Alchian took them all in stride. Mere policy disagreements never affected their close personal bond, and Alchian wrote the foreword to Earl's book with Charles Hickson, Ideology and the Evolution of Vital Economic Institutions. If Alchian was friendly with and an admirer of Milton Friedman, he was just as friendly with, and just as admiring of, Paul Samuelson and Kenneth Arrow, with whom he collaborated on several projects in the 1950s when they consulted for the Rand Corporation. Alchian cared less about the policy conclusion than he did about the quality of the underlying economic analysis.

As I have pointed out on several prior occasions, it is simply scandalous that Alchian was not awarded the Nobel Prize. His published output was not as voluminous as that of some other luminaries, but there is a remarkably high proportion of classics among his publications. So many important ideas came from him, especially thinking about economic competition as an evolutionary process, the distinction between the dependence of cost on the volume of output and its dependence on the rate of output, the effect of incomplete information on economic action, the economics of property rights, and the effects of inflation on economic activity. (Two volumes of his Collected Works, a must for anyone really serious about economics, contain a number of previously unpublished or hard-to-find papers, and are available here.) Perhaps in the future I will discuss some of my favorites among his articles.

Although Alchian did not win the Nobel Prize, in 1990 the Nobel Prize was awarded to Harry Markowitz, Merton Miller, and William F. Sharpe for their work on financial economics. Sharpe went to UCLA, writing his Ph.D. dissertation on securities prices under Alchian, and worked at the Rand Corporation in the 1950s and 1960s with Markowitz. Here's what Sharpe wrote about Alchian:

Armen Alchian, a professor of economics, was my role model at UCLA. He taught his students to question everything; to always begin an analysis with first principles; to concentrate on essential elements and abstract from secondary ones; and to play devil’s advocate with one’s own ideas. In his classes we were able to watch a first-rate mind work on a host of fascinating problems. I have attempted to emulate his approach to research ever since.

And if you go to the Amazon page for University Economics and look at the comments you will see a comment from none other than Harry Markowitz:

I am about to order this book. I have just read its quite favorable reviews, and I am not a bit surprised at their being impressed by Armen Alchian’s writings. I was a colleague of Armen’s, at the Rand Corporation “think tank,” during the 1950s, and hold no economist in higher regard. When I sat down at my keyboard just now it was to find out what happened to Armen’s works. One Google response was someone saying that Armen should get a Nobel Prize. I concur. My own Nobel Prize in Economics was awarded in 1990 along with the prize for Wm. Sharpe. I see in Wikipedia that Armen “influenced” Bill, and that Armen is still alive and is 96 years old. I’ll see if I can contact him, but first I’ll buy this book.

I will always remember Alchian's air of amused, philosophical detachment, occasionally bemused (though, perhaps, only apparently so, as he tried to guide his students and colleagues with questions to figure out a point that he already grasped), always curious, always eager for the intellectual challenge of discovery and problem solving. Has there ever been a greater teacher of economics than Alchian? Perhaps, but I don't know who. I close with one more quotation, this one from Axel Leijonhufvud, written about Alchian 25 years ago. It still rings true.

[Alchian’s] unique brand of price theory is what gave UCLA Economics its own intellectual profile and achieved for us international recognition as an independent school of some importance—as a group of scholars who did not always take their leads from MIT, Chicago or wherever. When I came here (in 1964) the Department had Armen’s intellectual stamp on it (and he remained the obvious leader until just a couple of years ago ….). Even people outside Armen’s fields, like myself, learned to do Armen’s brand of economic analysis and a strong esprit de corps among both faculty and graduate students sprang from the consciousness that this ‘New Institutional Economics’ was one of the waves of the future and that we, at UCLA, were surfing it way ahead of the rest. But Armen’s true importance to the UCLA school did not stem just from the new ideas he taught or the outwardly recognized “brandname” that he created for us. For many of his young colleagues he embodied qualities of mind and character that seemed the more important to seek to emulate the more closely you got to know him.

My Paper (co-authored with Paul Zimmerman) on Hayek and Sraffa

I have just uploaded to the SSRN website a new draft of the paper (co-authored with Paul Zimmerman) on Hayek and Sraffa and the natural rate of interest, presented last June at the History of Economics Society conference at Brock University. The paper evolved from an early post on this blog in September 2011. I also wrote about the Hayek-Sraffa controversy in a post in June 2012 just after the HES conference.

One interesting wrinkle that occurred to me just as I was making revisions in the paper this week is that Keynes’s treatment of own rates in chapter 17 of the General Theory, which was in an important sense inspired by Sraffa, but, in my view, came to a very different conclusion from Sraffa’s, was actually nothing more than a generalization of Irving Fisher’s analysis of the real and nominal rates of interest, first presented in Fisher’s 1896 book Appreciation and Interest. In his Tract on Monetary Reform, Keynes extended Fisher’s analysis into his theory of covered interest rate arbitrage. What is really surprising is that, despite his reliance on Fisher’s analysis in the Tract and also in the Treatise on Money, Keynes sharply criticized Fisher’s analysis of the nominal and real rates of interest in chapter 13 of the General Theory. (I discussed that difficult passage in the General Theory in this post).  That is certainly surprising. But what is astonishing to me is that, after trashing Fisher in chapter 13 of the GT, Keynes goes back to Fisher in chapter 17, giving a generalized restatement of Fisher’s analysis in his discussion of own rates. Am I the first person to have noticed Keynes’s schizophrenic treatment of Fisher in the General Theory?
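For concreteness, here is the connection as I read it, in shorthand of my own rather than the notation of either Fisher or Keynes. Fisher's relation between the nominal rate \(i\), the real rate \(r\), and the expected rate of appreciation of goods in terms of money \(\pi^{e}\) is

\[
(1+i) = (1+r)(1+\pi^{e}), \qquad \text{so that } i \approx r + \pi^{e}.
\]

Keynes's chapter-17 own-rate arithmetic requires that, in equilibrium, each asset's expected return measured in a common standard, roughly \(a_j + q_j - c_j + l_j\) (expected appreciation plus yield minus carrying cost plus liquidity premium), be equalized across assets; Fisher's equation then looks like the two-asset special case comparing money with an asset whose money price is expected to rise at the rate \(\pi^{e}\), which is why I read chapter 17 as a generalization of Fisher.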

PS: My revered teacher, the great Armen Alchian passed away yesterday at the age of 98. There have been many tributes to him, such as this one by David Henderson, also a student of Alchian’s, in the Wall Street Journal. I have written about Alchian in the past (here, here, here, here, and here), and I hope to write about Alchian again in the near future. There was none like him; he will be missed terribly.

W. H. Hutt on Say’s Law and the Keynesian Multiplier

In a post a few months ago, I referred to W. H. Hutt as an "unjustly underrated" and "all but forgotten economist" and as "an admirable human being," who wrote an important book in 1939, The Theory of Idle Resources, seeking to counter Keynes's theory of involuntary unemployment. In responding to a comment on a more recent post, I pointed out that Armen Alchian relied on one of Hutt's explanations for unemployment to provide a microeconomic basis for Keynes's rather convoluted definition of involuntary unemployment, so that Hutt unintentionally provided support for the very Keynesian theory that he was trying to disprove. In this post, I want to explore Hutt's very important and valuable book A Rehabilitation of Say's Law, even though, following Alchian, I would interpret what Hutt wrote in a way that is at least potentially supportive of Keynes, while also showing that Hutt's understanding of Say's Law allows us to view Say's Law and the Keynesian multiplier as two (almost?) identical ways of describing the same phenomenon.

But before I discuss Hutt's understanding of Say's Law, a few words about why I think Hutt was an admirable human being are in order. Born in 1899 into a working-class English family (his father was a printer), Hutt attended the London School of Economics in the early 1920s, coming under the influence of Edwin Cannan, whose writings Hutt often referred to. After gaining his bachelor's degree, Hutt, though working full-time, continued taking courses at LSE, even publishing several articles before taking a position at the University of Cape Town in 1930, despite having no advanced degree in economics. Hutt remained in South Africa until the late 1960s or early 1970s, becoming an outspoken critic of legal discrimination against non-whites and later of the apartheid regime instituted in 1948. In his book, The Economics of the Colour Bar, Hutt traced the racial policies of the South African regime not just to white racism, but to the interest of white labor unions in excluding competition from non-whites. Hutt's hostility to labor unions for their exclusionary and protectionist policies was evident in much of his work, beginning at least with his Theory of Collective Bargaining, his Strike-Threat System, and his many critiques of Keynesian economics. However, he was opposed not to labor unions as such, but to the legal recognition of the right of some workers to coerce others into a collusive agreement to withhold their services unless their joint demand for a stipulated money wage was acceded to by employers, a right that in most other contexts would be both legally and morally unacceptable. Whether or not Hutt took his moral opposition to collective bargaining to extremes, he certainly was not motivated by any venal motives. Certainly his public opposition to apartheid, inviting retribution by the South African regime, was totally disinterested, and his opposition to collective bargaining was no less sincere, even if less widely admired, than his opposition to apartheid, and no more motivated by any expectation of personal gain.

In the General Theory, launching an attack on what he carelessly called “classical economics,” Keynes devoted special attention to the doctrine he described as Say’s Law, a doctrine that had been extensively and inconclusively debated in the nineteenth century after Say formulated what he had called the Law of the Markets in his Treatise on Political Economy in 1803. The exact meaning of the Law of the Markets was never entirely clear, so that, in arguing about Say’s Law, one can never be quite sure that one knows what one is talking about. At any rate, Keynes paraphrased Say’s Law in the following way: supply creates its own demand. In other words, “if you make it, they will buy it, or at least buy something else, because the capacity to demand is derived from the capacity to supply.”

Here is Keynes at p. 18 of the General Theory:

From the time of Say and Ricardo the classical economists have taught that supply creates its own demand; — meaning by this in some significant, but not clearly defined, sense that the whole of the costs of production must necessarily be spent in the aggregate, directly or indirectly, on purchasing the product.

In J. S. Mill’s Principles of Political Economy the doctrine is expressly set forth:

What constitutes the means of payment for commodities is simply commodities. Each person’s means of paying for the productions of other people consist of those which he himself possesses. All sellers are inevitably, and by the meaning of the word, buyers. Could we suddenly double the productive powers of the country, we should double the supply of commodities in every market; but we should, by the same stroke, double the purchasing power. Everybody would bring a double demand as well as supply; everybody would be able to buy twice as much, because every one would have twice as much to offer in exchange.

Then, again at p. 26, Keynes restates Say’s Law in his own terminology:

In the previous chapter we have given a definition of full employment in terms of the behaviour of labour. An alternative, though equivalent, criterion is that at which we have now arrived, namely, a situation in which aggregate employment is inelastic in response to an increase in effective demand for its output. Thus Say’s Law, that the aggregate demand price of output as a whole is equal to its aggregate supply price for all volumes of output [“could we suddenly double the productive powers of the country . . . we should . . . double the purchasing power”], is equivalent to the proposition that there is no obstacle to full employment. If, however, this is not the true law relating the aggregate demand and supply functions, there is a vitally important chapter of economic theory which remains to be written and without which all discussions concerning the volume of aggregate employment are futile.

Keynes restated the same point in terms of his doctrine that macroeconomic equilibrium, the condition for which is that savings equal investment, could occur at a level of output and income corresponding to less than full employment. How could this happen? Keynes believed that if the amount that households desired to save at the full-employment level of income were greater than the amount that businesses would invest at that income level, expenditure and income would decline until desired (and actual) savings equaled investment. If Say's Law held, then whatever households chose not to spend would get transformed into investment by business, but Keynes denied that there was any mechanism by which this transformation would occur. Keynes proposed his theory of liquidity preference to explain why savings by households would not necessarily find their way into increased investment by businesses, liquidity preference preventing the rate of interest from adjusting to induce as much investment as required to generate the full-employment level of output and income.
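The mechanism can be put in one line of elementary Keynesian-cross algebra (my own illustration, not the General Theory's notation): with consumption \(C = a + cY\), \(0 < c < 1\), and investment \(I\) given, income adjusts until planned saving equals investment,

\[
S(Y) \;=\; Y - C(Y) \;=\; -a + (1-c)\,Y \;=\; I \quad\Longrightarrow\quad Y^{*} = \frac{a + I}{1-c},
\]

and nothing in that adjustment guarantees that \(Y^{*}\) coincides with the full-employment level of income. If desired saving at full employment exceeds \(I\), it is income, not the rate of interest, that does the adjusting.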

Now the challenge for Keynes was to explain why, if there is less than full employment, wages would not fall to induce businesses to hire the unemployed workers. From Keynes’s point of view it wasn’t enough to assert that wages are sticky, because a classical believer in Say’s Law could have given that answer just as well.  If you prevent prices from adjusting, the result will be a disequilibrium.  From Keynes’s standpoint, positing price or wage inflexibility was not an acceptable explanation for unemployment.  So Keynes had to argue that, even if wages were perfectly flexible, falling wages would not induce an increase in employment. That was the point of Keynes’s definition of involuntary unemployment as a situation in which an increased price level, but not a fall in money wages, would increase employment. It was in chapter 19 of the General Theory that Keynes provided his explanation for why falling money wages would not induce an increase in output and employment.

Hutt’s insight was to interpret Say’s Law differently from the way in which most previous writers, including Keynes, had interpreted it, by focusing on “supply failures” rather than “demand failures” as the cause of total output and income falling short of the full-employment level. Every failure of supply, in other words every failure to achieve market equilibrium, means that the total effective supply in that market is less than it would have been had the market cleared. So a failure of supply (a failure to reach the maximum output of a particular product or service, given the outputs of all other products and services) implies a restriction of demand, because all the factors engaged in producing the product whose effective supply is less than its market-clearing level are generating less demand for other products than if they were producing the market-clearing level of output for that product. Similarly, if workers don’t accept employment at market-clearing wages, their failure to supply involves a failure to demand other products. Thus, failures to supply can be cumulative, because any failure of supply induces corresponding failures of demand, which, unless there are further pricing adjustments to clear other affected markets, trigger further failures of demand. And clearly the price adjustments required to clear any given market will be greater when other markets are not clearing than when those other markets are clearing.

So, with this interpretation, Hutt was able to deploy Say's Law in a way that sheds important light on the cumulative processes of contraction and expansion characterizing business-cycle downturns and recoveries. In his modesty, Hutt disclaimed originality in using Say's Law as a key to understanding those cumulative processes, citing various isolated statements by older economists (in particular a remark of the Cambridge economist Frederick Lavington in his 1921 book The Trade Cycle: "The inactivity of all is the cause of the inactivity of each") that vaguely suggest, but don't spell out, the process that Hutt describes in meticulous detail. If Hutt's analysis was anticipated in any important way, it was by Clower and Leijonhufvud in their paper "Say's Principle, What it Means and Doesn't Mean" (reprinted here and here), which introduced a somewhat artificial distinction between Say's Law, as Keynes conceived of it, and Say's Principle, which is closer to how Hutt thought about it. But to Clower and Leijonhufvud, Say's Principle was an essential part of the explanation of the Keynesian multiplier. The connection between them is simple: effective supply is identical to effective demand, because every purchase is also a sale. A cumulative process can be viewed as either a supply-side process (Say's Law) or a demand-side process (the Keynesian multiplier), but they are really just two sides of the same coin.
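The two-sides-of-the-same-coin point can itself be reduced to a line of arithmetic (again my own illustration, with \(c\) the marginal propensity to spend out of income): an initial shortfall of spending \(\Delta E\) propagates round by round, because each round's unsold output is also unearned income,

\[
\Delta Y \;=\; \Delta E\left(1 + c + c^{2} + \cdots\right) \;=\; \frac{\Delta E}{1-c},
\]

and every term in the series is simultaneously a failure of demand (someone spends less) and a failure of supply (someone sells, and therefore earns, less). Read from the supply side it is Hutt's cumulative process; read from the demand side it is the Keynesian multiplier.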

So if you have followed me this far, you may be asking yourself, did Hutt really rehabilitate Say’s Law, as he claimed to have done? And if so, did he refute Keynes, as he also claimed to have done? My answer to the first question is a qualified yes. And my answer to the second question is a qualified no. I will not try to justify my qualification to my answer to the first question, except to note that the qualification depends on the assumptions made about how money is supplied in the relevant model of the economy. In a model in which money is endogenously supplied by private banks, Say’s Law holds; in a model in which the supply of money is fixed exogenously, Say’s Law does not hold. For more on this, see my paper, “A Reinterpretation of Classical Monetary Theory,” or my book Free Banking and Monetary Reform (pp. 62-66).

But if Hutt was right about Say’s Law, how can Keynes be right that cutting money wages is not a good way (though in Hutt’s view it was the best way) to cure a depression that is itself caused by the mispricing of assets and factors of production? The answer is that, for all the care Hutt exercised in working out his analysis, he neglected to make explicit his assumptions about workers’ expectations of future wages (i.e., the wages at which they would be able to gain employment). The key point is that if workers expect to be able to find employment at higher wages than they will in fact be offered, the aggregate supply curve of labor will intersect the aggregate demand curve for labor at a wage rate that is higher, and a quantity that is lower, than would be the case in an equilibrium in which workers’ expectations about future wages were correct. From Hutt’s point of view, there is a supply failure, because the aggregate supply of labor is less than the hypothetical equilibrium supply under correct wage expectations. But there is no restriction on market pricing, just incorrect expectations of future wages. Expectations need not be rigid, but in a cumulative process, wage expectations may not adjust as fast as wages are falling. Though Keynes himself did not discuss the possibility explicitly, it is also possible that there could be multiple equilibria corresponding to different sets of expectations (e.g., optimistic or pessimistic). If the economy settles into a pessimistic equilibrium, unemployment could stabilize at levels permanently higher than those that would have prevailed under an optimistic set of expectations. Perhaps we are now stuck in (or approaching) such a pessimistic equilibrium.
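A small numerical example may make the wage-expectations point in the preceding paragraph concrete; the functional forms and numbers are mine, chosen only for illustration, and are not drawn from Hutt or Keynes. Suppose labor demand and labor supply are

L^d(w) = 100 - w, \qquad L^s(w, w^e) = 50 \, \frac{w}{w^e},

where w^e is the wage workers expect to be able to obtain. With correct expectations (w^e = w), supply is 50, so the market clears at a wage of 50 with employment of 50. If instead workers expect w^e = 75, the curves intersect where 100 - w = \tfrac{2}{3} w, i.e., at a wage of 60 and employment of 40: a higher wage and lower employment than under correct expectations, even though nothing prevents the wage from adjusting. The 10 units of employment lost relative to the correct-expectations equilibrium are, in Hutt’s terms, the supply failure.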

Be that as it may, Hutt simply assumes that allowing all prices to be determined freely in unfettered markets must result in the quick restoration of a full-employment equilibrium. This is a reasonable position to take, but there is no way of proving it logically. Proofs that free-market adjustment leads to an equilibrium are based on some sort of tâtonnement or recontracting process in which trading does not occur at disequilibrium prices. In the real world, there is no restriction on trading at disequilibrium prices, so there is no logical argument showing that the Say’s Law dynamic described by Hutt cannot go on indefinitely without reaching equilibrium. F. A. Hayek himself explained this point in his classic 1937 paper “Economics and Knowledge.”

In the light of our analysis of the meaning of a state of equilibrium it should be easy to say what is the real content of the assertion that a tendency toward equilibrium exists. It can hardly mean anything but that, under certain conditions, the knowledge and intentions of the different members of society are supposed to come more and more into agreement or, to put the same thing in less general and less exact but more concrete terms, that the expectations of the people and particularly of the entrepreneurs will become more and more correct. In this form the assertion of the existence of a tendency toward equilibrium is clearly an empirical proposition, that is, an assertion about what happens in the real world which ought, at least in principle, to be capable of verification. And it gives our somewhat abstract statement a rather plausible common-sense meaning. The only trouble is that we are still pretty much in the dark about (a) the conditions under which this tendency is supposed to exist and (b) the nature of the process by which individual knowledge is changed.

In the usual presentations of equilibrium analysis it is generally made to appear as if these questions of how the equilibrium comes about were solved. But, if we look closer, it soon becomes evident that these apparent demonstrations amount to no more than the apparent proof of what is already assumed. The device generally adopted for this purpose is the assumption of a perfect market where every event becomes known instantaneously to every member. It is necessary to remember here that the perfect market which is required to satisfy the assumptions of equilibrium analysis must not be confined to the particular markets of all the individual commodities; the whole economic system must be assumed to be one perfect market in which everybody knows everything. The assumption of a perfect market, then, means nothing less than that all the members of the community, even if they are not supposed to be strictly omniscient, are at least supposed to know automatically all that is relevant for their decisions. It seems that that skeleton in our cupboard, the “economic man,” whom we have exorcised with prayer and fasting, has returned through the back door in the form of a quasi-omniscient individual.


