
Another Complaint about Modern Macroeconomics

In discussing modern macroeconomics, I’ve often mentioned my discomfort with a narrow view of microfoundations, but I haven’t commented very much on another disturbing feature of modern macro: the requirement that theoretical models be spelled out fully in axiomatic form. The rhetoric of axiomatization has had sweeping success in economics, making axiomatization a prerequisite for almost any theoretical paper to be taken seriously, or even to be considered for publication in a reputable economics journal.

The idea that a good scientific theory must be derived from a formal axiomatic system has little if any foundation in the methodology or history of science. Nevertheless, it has become almost an article of faith in modern economics. I am not aware, but would be interested to know, whether, and if so how widely, this misunderstanding has been propagated in other (purportedly) empirical disciplines. The requirement of the axiomatic method in economics betrays a kind of snobbishness and (I use this word advisedly, see below) pedantry, resulting, it seems, from a misunderstanding of good scientific practice.

Before discussing the situation in economics, I would note that axiomatization did not become a major issue for mathematicians until late in the nineteenth century (though demands – luckily ignored for the most part – for logical precision followed immediately upon the invention of the calculus by Newton and Leibniz), and it led ultimately to the publication of the great work of Russell and Whitehead, Principia Mathematica, whose goal was to show that all of mathematics could be derived from the axioms of pure logic. This is yet another example of an unsuccessful reductionist attempt, though it seemed for a while that the Principia paved the way for the desired reduction. But 20 years after the Principia was published, Kurt Gödel proved his famous incompleteness theorem, showing that, as a matter of pure logic, not even all the true propositions of arithmetic, much less all of mathematics, could be derived from any consistent set of axioms. This doesn’t mean that trying to achieve a reduction of a higher-level discipline to another, deeper discipline is not a worthy objective, but it certainly does mean that one cannot just dismiss, out of hand, a discipline simply because all of its propositions are not deducible from some set of fundamental propositions. Insisting on reduction as a prerequisite for scientific legitimacy is not a scientific attitude; it is merely a form of obscurantism.

As far as I know, which admittedly is not all that far, the only empirical science which has been axiomatized to any significant extent is theoretical physics. In his famous list of 23 unsolved mathematical problems, the great mathematician David Hilbert included the following (number 6):

Mathematical Treatment of the Axioms of Physics. The investigations on the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which already today mathematics plays an important part, in the first rank are the theory of probabilities and mechanics.

As to the axioms of the theory of probabilities, it seems to me desirable that their logical investigation should be accompanied by a rigorous and satisfactory development of the method of mean values in mathematical physics, and in particular in the kinetic theory of gases. . . . Boltzmann’s work on the principles of mechanics suggests the problem of developing mathematically the limiting processes, there merely indicated, which lead from the atomistic view to the laws of motion of continua.

The point that I want to underscore here is that axiomatization was supposed to ensure that there was an adequate logical underpinning for theories (i.e., probability and the kinetic theory of gases) that had already been largely worked out. Thus, Hilbert proposed axiomatization not as a method of scientific discovery, but as a method of checking for hidden errors and problems. Error checking is certainly important for science, but it is clearly subordinate to the creation and empirical testing of new and improved scientific theories.

The fetish for axiomatization in economics can largely be traced to Gérard Debreu’s great work, Theory of Value: An Axiomatic Analysis of Economic Equilibrium, in which Debreu, building on his own work and that of Kenneth Arrow, presented a formal description of a decentralized competitive economy with both households and business firms, and proved that, under the standard assumptions of neoclassical theory (notably diminishing marginal rates of substitution in consumption and production and perfect competition), such an economy would have at least one, and possibly more than one, equilibrium.
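To give a rough sense of what the existence theorem asserts, here is a schematic statement in my notation, not Debreu’s. Let z(p) denote the economy’s aggregate excess-demand function over the price simplex Δ. Under the standard assumptions, z is continuous and satisfies Walras’s Law, and an equilibrium is a price vector at which no good is in excess demand:

\[
p \cdot z(p) = 0 \quad \text{for all } p \in \Delta, \qquad \exists\, p^{*} \in \Delta \ \text{such that}\ z(p^{*}) \le 0,
\]

with the price of any good in excess supply equal to zero. Proving that such a p* exists is precisely the step that requires the fixed-point machinery (Brouwer or Kakutani), and that is where the axiomatic apparatus does its real work.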

A lot of effort subsequently went into gaining a better understanding of the necessary and sufficient conditions under which an equilibrium exists, and when that equilibrium would be unique and Pareto optimal. The subsequent work was then brilliantly summarized and extended in another great work, General Competitive Analysis by Arrow and Frank Hahn. Unfortunately, those two books, paragons of the axiomatic method, set a bad example for the future development of economic theory, which embarked on a needless and counterproductive quest for increasing logical rigor instead of empirical relevance.

A few months ago, I wrote a review of Kartik Athreya’s book Big Ideas in Macroeconomics. One of the arguments of Athreya’s book that I didn’t address was his defense of modern macroeconomics against the complaint that modern macroeconomics is too mathematical. Athreya is not responsible for the reductionist and axiomatic fetishes of modern macroeconomics, but he faithfully defends them against criticism. So I want to comment on a few paragraphs in which Athreya dismisses criticism of formalism and axiomatization.

Natural science has made significant progress by proceeding axiomatically and mathematically, and whether or not we [economists] will achieve this level of precision for any unit of observation in macroeconomics, it is likely to be the only rational alternative.

First, let me observe that axiomatization is not the same as using mathematics to solve problems. Many problems in economics cannot easily be solved without using mathematics, and sometimes it is useful to solve a problem in a few different ways, each way potentially providing some further insight into the problem not provided by the others. So I am not at all opposed to the use of mathematics in economics. However, the choice of tools to solve a problem should bear some reasonable relationship to the problem at hand. A good economist will understand what tools are appropriate to the solution of a particular problem. While mathematics has clearly been enormously useful to the natural sciences and to economics in solving problems, there are very few scientific advances that can be ascribed to axiomatization. Axiomatization was vital in proving the existence of equilibrium, but substantive refutable propositions about real economies, e.g., the Heckscher-Ohlin Theorem, or the Factor-Price Equalization Theorem, or the law of comparative advantage, were not discovered or empirically tested by way of axiomatization. Athreya talks about economics achieving the “level of precision” achieved by natural science, but the concept of precision is itself hopelessly imprecise, and to set precision up as an independent goal makes no sense. Athreya continues:

In addition to these benefits from the systematic [i.e. axiomatic] approach, there is the issue of clarity. Lowering mathematical content in economics represents a retreat from unambiguous language. Once mathematized, words in any given model cannot ever mean more than one thing. The unwillingness to couch things in such narrow terms (usually for fear of “losing something more intelligible”) has, in the past, led to a great deal of essentially useless discussion.

Athreya writes as if the only source of ambiguity is imprecise language. That just isn’t so. Is unemployment voluntary or involuntary? Athreya actually discusses the question intelligently on p. 283, in the context of search models of unemployment, but I don’t think that he could have provided any insight into that question with a purely formal, symbolic treatment. Again back to Athreya:

The plaintive expressions of “fear of losing something intangible” are concessions to the forces of muddled thinking. The way modern economics gets done, you cannot possibly not know exactly what the author is assuming – and to boot, you’ll have a foolproof way of checking whether their claims of what follows from these premises is actually true or not.

So let me juxtapose this brief passage from Athreya with a rather longer passage from Karl Popper in which he effectively punctures the fallacies underlying the specious claims made on behalf of formalism and against ordinary language. The extended quotations are from an addendum titled “Critical Remarks on Meaning Analysis” (pp. 261-77) to chapter IV of Realism and the Aim of Science (volume 1 of the Postscript to the Logic of Scientific Discovery). In this addendum, Popper begins by making the following three claims:

1. What-is? questions, such as What is Justice? . . . are always pointless – without philosophical or scientific interest; and so are all answers to what-is? questions, such as definitions. It must be admitted that some definitions may sometimes be of help in answering other questions: urgent questions which cannot be dismissed: genuine difficulties which may have arisen in science or in philosophy. But what-is? questions as such do not raise this kind of difficulty.

2. It makes no difference whether a what-is question is raised in order to inquire into the essence or into the nature of a thing, or whether it is raised in order to inquire into the essential meaning or into the proper use of an expression. These kinds of what-is questions are fundamentally the same. Again, it must be admitted that an answer to a what-is question – for example, an answer pointing out distinctions between two meanings of a word which have often been confused – may not be without point, provided the confusion led to serious difficulties. But in this case, it is not the what-is question which we are trying to solve; we hope rather to resolve certain contradictions that arise from our reliance upon somewhat naïve intuitive ideas. (The . . . example discussed below – that of the ideas of a derivative and of an integral – will furnish an illustration of this case.) The solution may well be the elimination (rather than the clarification) of the naïve idea. But an answer to . . . a what-is question is never fruitful. . . .

3. The problem, more especially, of replacing an “inexact” term by an “exact” one – for example, the problem of giving a definition in “exact” or “precise” terms – is a pseudo-problem. It depends essentially upon the inexact and imprecise terms “exact” and “precise.” These are most misleading, not only because they strongly suggest that there exists what does not exist – absolute exactness or precision – but also because they are emotionally highly charged: under the guise of scientific character and of scientific objectivity, they suggest that precision or exactness is something superior, a kind of ultimate value, and that it is wrong, or unscientific, or muddle-headed, to use inexact terms (as it is indeed wrong not to speak as lucidly and simply as possible). But there is no such thing as an “exact” term, or terms made “precise” by “precise definitions.” Also, a definition must always use undefined terms in its definiens (since otherwise we should get involved in an infinite regress or in a circle); and if we have to operate with a number of undefined terms, it hardly matters whether we use a few more. Of course, if a definition helps to solve a genuine problem, the situation is different; and some problems cannot be solved without an increase of precision. Indeed, this is the only way in which we can reasonably speak of precision: the demand for precision is empty, unless it is raised relative to some requirements that arise from our attempts to solve a definite problem. (pp. 261-63)

Later in his addendum Popper provides an enlightening discussion of the historical development of calculus despite its lack of solid logical axiomatic foundation. The meaning of an infinitesimal or a derivative was anything but precise. It was, to use Athreya’s aptly chosen term, a muddle. Mathematicians even came up with a symbol for the derivative. But they literally had no precise idea of what they were talking about. When mathematicians eventually came up with a definition for the derivative, the definition did not clarify what they were talking about; it just provided a particular method of calculating what the derivative would be. However, the absence of a rigorous and precise definition of the derivative did not prevent mathematicians from solving some enormously important practical problems, thereby helping to change the world and our understanding of it.
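For readers who want the definition in front of them, the “precise” definition that eventually emerged, and that Popper discusses below, is the familiar limit of the difference quotient (standard textbook notation, not Popper’s):

\[
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.
\]

As Popper argues, this tells us how to calculate the derivative; whether it captures what a velocity or a slope really is remains exactly the point in dispute.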

The modern history of the problem of the foundations of mathematics is largely, it has been asserted, the history of the “clarification” of the fundamental ideas of the differential and integral calculus. The concept of a derivative (the slope of a curve or the rate of increase of a function) has been made “exact” or “precise” by defining it as the limit of the quotient of differences (given a differentiable function); and the concept of an integral (the area or “quadrature” of a region enclosed by a curve) has likewise been “exactly defined”. . . . Attempts to eliminate the contradictions in this field constitute not only one of the main motives of the development of mathematics during the last hundred or even two hundred years, but they have also motivated modern research into the “foundations” of the various sciences and, more particularly, the modern quest for precision or exactness. “Thus mathematicians,” Bertrand Russell says, writing about one of the most important phases of this development, “were only awakened from their ‘dogmatic slumbers’ when Weierstrass and his followers showed that many of their most cherished propositions are in general false. Macaulay, contrasting the certainty of mathematics with the uncertainty of philosophy, asks who ever heard of a reaction against Taylor’s theorem. If he had lived now, he himself might have heard of such a reaction, for this is precisely one of the theorems which modern investigations have overthrown. Such rude shocks to mathematical faith have produced that love of formalism which appears, to those who are ignorant of its motive, to be mere outrageous pedantry.”

It would perhaps be too much to read into this passage of Russell’s his agreement with a view which I hold to be true: that without “such rude shocks” – that is to say, without the urgent need to remove contradictions – the love of formalism is indeed “mere outrageous pedantry.” But I think that Russell does convey his view that without an urgent need, an urgent problem to be solved, the mere demand for precision is indefensible.

But this is only a minor point. My main point is this. Most people, including mathematicians, look upon the definition of the derivative, in terms of limits of sequences, as if it were a definition in the sense that it analyses or makes precise, or “explicates,” the intuitive meaning of the definiendum – of the derivative. But this widespread belief is mistaken. . . .

Newton and Leibniz and their successors did not deny that a derivative, or an integral, could be calculated as a limit of certain sequences . . . . But they would not have regarded these limits as possible definitions, because they do not give the meaning, the idea, of a derivative or an integral.

For the derivative is a measure of a velocity, or a slope of a curve. Now the velocity of a body at a certain instant is something real – a concrete (relational) attribute of that body at that instant. By contrast the limit of a sequence of average velocities is something highly abstract – something that exists only in our thoughts. The average velocities themselves are unreal. Their unending sequence is even more so; and the limit of this unending sequence is a purely mathematical construction out of these unreal entities. Now it is intuitively quite obvious that this limit must numerically coincide with the velocity, and that, if the limit can be calculated, we can thereby calculate the velocity. But according to the views of Newton and his contemporaries, it would be putting the cart before the horse were we to define the velocity as being identical with this limit, rather than as a real state of the body at a certain instant, or at a certain point, of its track – to be calculated by any mathematical contrivance we may be able to think of.

The same holds of course for the slope of a curve in a given point. Its measure will be equal to the limit of a sequence of measures of certain other average slopes (rather than actual slopes) of this curve. But it is not, in its proper meaning or essence, a limit of a sequence: the slope is something we can sometimes actually draw on paper, and construct with compasses and rulers, while a limit is in essence something abstract, rarely actually reached or realized, but only approached, nearer and nearer, by a sequence of numbers. . . .

Or as Berkeley put it “. . . however expedient such analogies or such expressions may be found for facilitating the modern quadratures, yet we shall not find any light given us thereby into the original real nature of fluxions considered in themselves.” Thus mere means for facilitating our calculations cannot be considered as explications or definitions.

This was the view of all mathematicians of the period, including Newton and Leibniz. If we now look at the modern point of view, then we see that we have completely given up the idea of definition in the sense in which it was understood by the founders of the calculus, as well as by Berkeley. We have given up the idea of a definition which explains the meaning (for example of the derivative). This fact is veiled by our retaining the old symbol of “definition” for some equivalences which we use, not to explain the idea or the essence of a derivative, but to eliminate it. And it is veiled by our retention of the name “differential quotient” or “derivative,” and the old symbol dy/dx which once denoted an idea which we have now discarded. For the name, and the symbol, now have no function other than to serve as labels for the definiens – the limit of a sequence.

Thus we have given up “explication” as a bad job. The intuitive idea, we found, led to contradictions. But we can solve our problems without it, retaining the bulk of the technique of calculation which originally was based upon the intuitive idea. Or more precisely we retain only this technique, as far as it was sound, and eliminate the idea with its help. The derivative and the integral are both eliminated; they are replaced, in effect, by certain standard methods of calculating limits. (pp. 266-70)

Not only have the original ideas of the founders of calculus been eliminated, because they ultimately could not withstand logical scrutiny, but a premature insistence on logical precision would have had disastrous consequences for the ultimate development of calculus.

It is fascinating to consider that this whole admirable development might have been nipped in the bud (as in the days of Archimedes) had the mathematicians of the day been more sensitive to Berkeley’s demand – in itself quite reasonable – that we should strictly adhere to the rules of logic, and to the rule of always speaking sense.

We now know that Berkeley was right when, in The Analyst, he blamed Newton . . . for obtaining . . . mathematical results in the theory of fluxions or “in the calculus differentialis” by illegitimate reasoning. And he was completely right when he indicated that [his] symbols were without meaning. “Nothing is easier,” he wrote, “than to devise expressions and notations, for fluxions and infinitesimals of the first, second, third, fourth, and subsequent orders. . . . These expressions indeed are clear and distinct, and the mind finds no difficulty in conceiving them to be continued beyond any assignable bounds. But if . . . we look underneath, if, laying aside the expressions, we set ourselves attentively to consider the things themselves which are supposed to be expressed or marked thereby, we shall discover much emptiness, darkness, and confusion . . . , direct impossibilities, and contradictions.”

But the mathematicians of his day did not listen to Berkeley. They got their results, and they were not afraid of contradictions as long as they felt that they could dodge them with a little skill. For the attempt to “analyse the meaning” or to “explicate” their concepts would, as we know now, have led to nothing. Berkeley was right: all these concepts were meaningless, in his sense and in the traditional sense of the word “meaning”: they were empty, for they denoted nothing, they stood for nothing. Had this fact been realized at the time, the development of the calculus might have been stopped again, as it had been stopped before. It was the neglect of precision, the almost instinctive neglect of all meaning analysis or explication, which made the wonderful development of the calculus possible.

The problem underlying the whole development was, of course, to retain the powerful instrument of the calculus without the contradictions which had been found in it. There is no doubt that our present methods are more exact than the earlier ones. But this is not due to the fact that they use “exactly defined” terms. Nor does it mean that they are exact: the main point of the definition by way of limits is always an existential assertion, and the meaning of the little phrase “there exists a number” has become the centre of disturbance in contemporary mathematics. . . . This illustrates my point that the attribute of exactness is not absolute, and that it is inexact and highly misleading to use the terms “exact” and “precise” as if they had any exact or precise meaning. (pp. 270-71)

Popper sums up his discussion as follows:

My examples [I quoted only the first of the four examples as it seemed most relevant to Athreya’s discussion] may help to emphasize a lesson taught by the whole history of science: that absolute exactness does not exist, not even in logic and mathematics (as illustrated by the example of the still unfinished history of the calculus); that we should never try to be more exact than is necessary for the solution of the problem in hand; and that the demand for “something more exact” cannot in itself constitute a genuine problem (except, of course, when improved exactness may improve the testability of some theory). (p. 277)

I apologize for stringing together this long series of quotes from Popper, but I think that it is important to understand that there is simply no scientific justification for the highly formalistic manner in which much modern economics is now carried out. Of course, critics far more authoritative than I, like Mark Blaug and Richard Lipsey, have complained about the insistence of modern macroeconomics on microfounded, axiomatized models regardless of whether those models generate better predictions than competing models. Their complaints have regrettably been ignored for the most part. I simply want to point out that a recent, and in many ways admirable, introduction to modern macroeconomics failed to provide a coherent justification for insisting on axiomatized models. It really wasn’t the author’s fault; a coherent justification doesn’t exist.

Armen Alchian, The Economists’ Economist

The first time that I ever heard of Armen Alchian was when I took introductory economics at UCLA as a freshman, and his book University Economics (the greatest economics textbook ever written), co-authored with his colleague William R. Allen, who was probably responsible for the macro and international chapters, was the required text. I had only just started to get interested in economics, and was still more interested in political philosophy than in economics, but I found myself captivated by what I was reading in Alchian’s textbook, even though I didn’t find the professor teaching the course very exciting. And after 10 weeks (the University of California had switched to a quarter system) of introductory micro, I changed my major to economics. So there is no doubt that I became an economist because the textbook that I was taught from was written by Alchian.

In my four years as an undergraduate at UCLA, I took three classes from Axel Leijonhufvud, two from Ben Klein, two from Bill Allen, and one each from Robert Rooney, Nicos Devletoglou, James Buchanan, Jack Hirshleifer, George Murphy, and Jean Balbach. But Alchian, who in those days was not teaching undergrads, was a looming presence. It became obvious that Alchian was the central figure in the department, the leader and the role model that everyone else looked up to. I would see him occasionally on campus, but was too shy or too much in awe of him to introduce myself. One incident that I particularly recall is from my junior year, when F. A. Hayek visited UCLA in the fall and winter quarters (in the department of philosophy!), teaching an undergraduate course in the philosophy of the social sciences and a graduate seminar on the first draft of Law, Legislation and Liberty. I took Hayek’s course on the philosophy of the social sciences, audited his graduate seminar, and occasionally visited his office to ask him questions. I once asked his advice about which graduate programs he would suggest that I apply to. He mentioned two schools, Chicago, of course, and Princeton, where his friends Fritz Machlup and Jacob Viner were still teaching, before asking, “but why would you think of going to graduate school anywhere else than UCLA? You will get the best training in economics in the world from Alchian, Hirshleifer and Leijonhufvud.” And so it was that I applied to, and was accepted at, Chicago, but stayed at UCLA.

As a first-year graduate student, I took the (three-quarter) microeconomics sequence from Jack Hirshleifer (who in the scholarly hierarchy at UCLA ranked only slightly below Alchian) and the two-quarter macroeconomics sequence from Leijonhufvud. Hirshleifer taught a great course. He was totally prepared, very organized, and his lectures were always clear and easy to follow. To do well, you had to sit back, listen, review the lecture notes, read through the reading assignments, and do the homework problems. For me at least, with the benefit of four years of UCLA undergraduate training, it was a breeze.

Great as Hirshleifer was as a teacher, I still felt that I was missing out by not having been taught by Alchian. Perhaps Alchian felt that the students who took the microeconomics sequence from Hirshleifer should get some training from him as well, so the next year he taught a graduate seminar in topics in price theory, to give us an opportunity to learn from him how to do economics. You could also see how Alchian operated if you went to a workshop or lecture by a visiting scholar, when Alchian would start to ask questions. He would smile, put his hand on his forehead, and say something like, “I just don’t understand that,” and force whoever it was to try to explain the logic by which he had arrived at some conclusion. And Alchian would just keep smiling, explain what the problem was with the answer he got, and ask more questions. Alchian didn’t shout or rant or rave, but if Alchian was questioning you, you were not in a very comfortable position.

So I was more than a bit apprehensive going into Alchian’s seminar. There were all kinds of stories told by graduate students about how tough Alchian could be on his students if they weren’t able to respond adequately when subjected to his questioning in the Socratic style. But the seminar could not have been more enjoyable. There was give and take, but I don’t remember seeing any blood spilled. Perhaps by the time I got to his seminar, Alchian, then about 57, had mellowed a bit, or, maybe, because we had all gone through the graduate microeconomics sequence, he felt that we didn’t require such an intense learning environment. At any rate, the seminar, which met twice a week for an hour and a quarter for 10 weeks, usually involved Alchian picking a story from the newspaper and asking us how to analyze the economics underlying the story. Armed with nothing but a chalkboard and a piece of chalk, Alchian would lead us relatively painlessly from confusion to clarity, from obscurity to enlightenment. The key concepts with which to approach any problem were to understand the choices available to those involved, to define the relevant costs, and to understand the constraints under which choices are made, the constraints being determined largely by the delimitation of the property rights under which resources can be used, or, to be more precise, by which the rights to use those resources can be exchanged.

Ultimately, the lesson that I learned from Alchian is that, at its best, economic theory is a tool for solving real problems, and the nature of the problem ought to dictate the way in which the theory (verbal, numerical, graphical, higher mathematical) is deployed, not the other way around. The goal is not to reach any particular conclusion, but to apply the tools in the best and most authentic way that they can be applied. Alchian did not wear his politics on his sleeve, though it wasn’t too hard to figure out that he was politically conservative with libertarian tendencies. But you never got the feeling that his politics dictated his economic analysis. In many respects, Alchian’s closest disciple was Earl Thompson, who studied under Alchian as an undergraduate, then, after playing minor-league baseball for a couple of years, went to Harvard for graduate school, and eventually came back to UCLA as an assistant professor, where he remained for his entire career. Earl, discarding his youthful libertarianism early on, developed many completely original, often eccentric, theories about the optimality of all kinds of government interventions – even protectionism – opposed by most economists, but Alchian took them all in stride. Mere policy disagreements never affected their close personal bond, and Alchian wrote the foreword to Earl’s book with Charles Hickson, Ideology and the Evolution of Vital Institutions. If Alchian was friendly with and an admirer of Milton Friedman, he was just as friendly with, and just as admiring of, Paul Samuelson and Kenneth Arrow, with whom he collaborated on several projects in the 1950s when they consulted for the Rand Corporation. Alchian cared less about the policy conclusion than he did about the quality of the underlying economic analysis.

As I have pointed out on several prior occasions, it is simply scandalous that Alchian was not awarded the Nobel Prize. His published output was not as voluminous as that of some other luminaries, but there is a remarkably high proportion of classics among his publications. So many important ideas came from him: thinking about economic competition as an evolutionary process, the distinction between how cost varies with the total volume of output and how it varies with the rate of output, the effect of incomplete information on economic action, the economics of property rights, and the effects of inflation on economic activity. (Two volumes of his Collected Works, a must for anyone really serious about economics, contain a number of previously unpublished or hard-to-find papers.) Perhaps in the future I will discuss some of my favorites among his articles.
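To illustrate the cost distinction just mentioned, here is a schematic rendering in my notation, not Alchian’s. Write total cost as a function of both the rate of output x and the total planned volume of output V:

\[
C = C(x, V), \qquad \frac{\partial C}{\partial x} > 0, \qquad \frac{\partial (C/V)}{\partial V} < 0,
\]

so that producing a given volume at a faster rate raises total cost, while planning a larger total volume lowers cost per unit. Alchian’s point was that conflating the rate and the volume of output obscures both effects.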

Although Alchian did not win the Nobel Prize, in 1990 the Nobel Prize was awarded to Harry Markowitz, Merton Miller, and William F. Sharpe for their work on financial economics. Sharpe went to UCLA, writing his Ph.D. dissertation on securities prices under Alchian, and worked at the Rand Corporation in the 1950s and 1960s with Markowitz. Here’s what Sharpe wrote about Alchian:

Armen Alchian, a professor of economics, was my role model at UCLA. He taught his students to question everything; to always begin an analysis with first principles; to concentrate on essential elements and abstract from secondary ones; and to play devil’s advocate with one’s own ideas. In his classes we were able to watch a first-rate mind work on a host of fascinating problems. I have attempted to emulate his approach to research ever since.

And if you go to the Amazon page for University Economics and look at the comments, you will see one from none other than Harry Markowitz:

I am about to order this book. I have just read its quite favorable reviews, and I am not a bit surprised at their being impressed by Armen Alchian’s writings. I was a colleague of Armen’s, at the Rand Corporation “think tank,” during the 1950s, and hold no economist in higher regard. When I sat down at my keyboard just now it was to find out what happened to Armen’s works. One Google response was someone saying that Armen should get a Nobel Prize. I concur. My own Nobel Prize in Economics was awarded in 1990 along with the prize for Wm. Sharpe. I see in Wikipedia that Armen “influenced” Bill, and that Armen is still alive and is 96 years old. I’ll see if I can contact him, but first I’ll buy this book.

I will always remember Alchian’s air of amused, philosophical detachment, occasionally bemused (though perhaps only apparently so, as he tried to guide his students and colleagues with questions to figure out a point that he already grasped), always curious, always eager for the intellectual challenge of discovery and problem solving. Has there ever been a greater teacher of economics than Alchian? Perhaps, but I don’t know who. I close with one more quotation, this one from Axel Leijonhufvud, written about Alchian 25 years ago. It still rings true.

[Alchian’s] unique brand of price theory is what gave UCLA Economics its own intellectual profile and achieved for us international recognition as an independent school of some importance—as a group of scholars who did not always take their leads from MIT, Chicago or wherever. When I came here (in 1964) the Department had Armen’s intellectual stamp on it (and he remained the obvious leader until just a couple of years ago ….). Even people outside Armen’s fields, like myself, learned to do Armen’s brand of economic analysis and a strong esprit de corps among both faculty and graduate students sprang from the consciousness that this ‘New Institutional Economics’ was one of the waves of the future and that we, at UCLA, were surfing it way ahead of the rest. But Armen’s true importance to the UCLA school did not stem just from the new ideas he taught or the outwardly recognized “brandname” that he created for us. For many of his young colleagues he embodied qualities of mind and character that seemed the more important to seek to emulate the more closely you got to know him.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
