
Another Complaint about Modern Macroeconomics

In discussing modern macroeconomics, I have often mentioned my discomfort with a narrow view of microfoundations, but I haven't commented very much on another disturbing feature of modern macro: the requirement that theoretical models be spelled out fully in axiomatic form. The rhetoric of axiomatization has had sweeping success in economics, making axiomatization a prerequisite for almost any theoretical paper to be taken seriously, or even considered for publication in a reputable economics journal.

The idea that a good scientific theory must be derived from a formal axiomatic system has little if any foundation in the methodology or history of science. Nevertheless, it has become almost an article of faith in modern economics. I am not aware, but would be interested to know, whether, and if so how widely, this misunderstanding has been propagated in other (purportedly) empirical disciplines. The requirement of the axiomatic method in economics betrays a kind of snobbishness and (I use this word advisedly, see below) pedantry, resulting, it seems, from a misunderstanding of good scientific practice.

Before discussing the situation in economics, I would note that axiomatization did not become a major issue for mathematicians until late in the nineteenth century (though demands – luckily ignored for the most part – for logical precision followed immediately upon the invention of the calculus by Newton and Leibniz) and led ultimately to the publication of the great work of Russell and Whitehead, Principia Mathematica, whose goal was to show that all of mathematics could be derived from the axioms of pure logic. This is yet another example of an unsuccessful reductionist attempt, though it seemed for a while that the Principia paved the way for the desired reduction. But some 20 years after the Principia was published, Kurt Gödel proved his famous incompleteness theorem, showing that, as a matter of pure logic, not even all the valid propositions of arithmetic, much less all of mathematics, could be derived from any consistent system of axioms. This doesn't mean that trying to achieve a reduction of a higher-level discipline to another, deeper discipline is not a worthy objective, but it certainly does mean that one cannot just dismiss, out of hand, a discipline simply because all of its propositions are not deducible from some set of fundamental propositions. Insisting on reduction as a prerequisite for scientific legitimacy is not a scientific attitude; it is merely a form of obscurantism.

As far as I know, which admittedly is not all that far, the only empirical science which has been axiomatized to any significant extent is theoretical physics. In his famous list of 23 unsolved mathematical problems, the great mathematician David Hilbert included the following (number 6).

Mathematical Treatment of the Axioms of Physics. The investigations on the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which already today mathematics plays an important part; in the first rank are the theory of probabilities and mechanics.

As to the axioms of the theory of probabilities, it seems to me desirable that their logical investigation should be accompanied by a rigorous and satisfactory development of the method of mean values in mathematical physics, and in particular in the kinetic theory of gases. . . . Boltzmann's work on the principles of mechanics suggests the problem of developing mathematically the limiting processes, there merely indicated, which lead from the atomistic view to the laws of motion of continua.

The point that I want to underscore here is that axiomatization was supposed to ensure that there was an adequate logical underpinning for theories (i.e., probability and the kinetic theory of gases) that had already been largely worked out. Thus, Hilbert proposed axiomatization not as a method of scientific discovery, but as a method of checking for hidden errors and problems. Error checking is certainly important for science, but it is clearly subordinate to the creation and empirical testing of new and improved scientific theories.

The fetish for axiomatization in economics can largely be traced to Gerard Debreu's great work, Theory of Value: An Axiomatic Analysis of Economic Equilibrium, in which Debreu, building on his own work and that of Kenneth Arrow, presented a formal description of a decentralized competitive economy with both households and business firms, and proved that, under the standard assumptions of neoclassical theory (notably diminishing marginal rates of substitution in consumption and production and perfect competition), such an economy would have at least one, and possibly more than one, equilibrium.

A lot of effort subsequently went into gaining a better understanding of the necessary and sufficient conditions under which an equilibrium exists, and of when that equilibrium would be unique and Pareto optimal. That work was then brilliantly summarized and extended in another great work, General Competitive Analysis, by Arrow and Frank Hahn. Unfortunately, those two books, paragons of the axiomatic method, set a bad example for the future development of economic theory, which embarked on a needless and counterproductive quest for increasing logical rigor instead of empirical relevance.

A few months ago, I wrote a review of Kartik Athreya’s book Big Ideas in Macroeconomics. One of the arguments of Athreya’s book that I didn’t address was his defense of modern macroeconomics against the complaint that modern macroeconomics is too mathematical. Athreya is not responsible for the reductionist and axiomatic fetishes of modern macroeconomics, but he faithfully defends them against criticism. So I want to comment on a few paragraphs in which Athreya dismisses criticism of formalism and axiomatization.

Natural science has made significant progress by proceeding axiomatically and mathematically, and whether or not we [economists] will achieve this level of precision for any unit of observation in macroeconomics, it is likely to be the only rational alternative.

First, let me observe that axiomatization is not the same as using mathematics to solve problems. Many problems in economics cannot easily be solved without using mathematics, and sometimes it is useful to solve a problem in a few different ways, each way potentially providing some further insight into the problem not provided by the others. So I am not at all opposed to the use of mathematics in economics. However, the choice of tools to solve a problem should bear some reasonable relationship to the problem at hand. A good economist will understand what tools are appropriate to the solution of a particular problem. While mathematics has clearly been enormously useful to the natural sciences and to economics in solving problems, there are very few scientific advances that can be ascribed to axiomatization. Axiomatization was vital in proving the existence of equilibrium, but substantive refutable propositions about real economies, e.g., the Heckscher-Ohlin Theorem, or the Factor-Price Equalization Theorem, or the law of comparative advantage, were not discovered or empirically tested by way of axiomatization. Athreya talks about economics achieving the "level of precision" achieved by natural science, but the concept of precision is itself hopelessly imprecise, and to set precision up as an independent goal makes no sense. Athreya continues:

In addition to these benefits from the systematic [i.e. axiomatic] approach, there is the issue of clarity. Lowering mathematical content in economics represents a retreat from unambiguous language. Once mathematized, words in any given model cannot ever mean more than one thing. The unwillingness to couch things in such narrow terms (usually for fear of “losing something more intelligible”) has, in the past, led to a great deal of essentially useless discussion.

Athreya writes as if the only source of ambiguity is imprecise language. That just isn't so. Is unemployment voluntary or involuntary? Athreya actually discusses the question intelligently on p. 283, in the context of search models of unemployment, but I don't think that he could have provided any insight into that question with a purely formal, symbolic treatment. Back again to Athreya:

The plaintive expressions of “fear of losing something intangible” are concessions to the forces of muddled thinking. The way modern economics gets done, you cannot possibly not know exactly what the author is assuming – and to boot, you’ll have a foolproof way of checking whether their claims of what follows from these premises is actually true or not.

So let me juxtapose this brief passage from Athreya with a rather longer passage from Karl Popper in which he effectively punctures the fallacies underlying the specious claims made on behalf of formalism and against ordinary language. The extended quotations are from an addendum titled "Critical Remarks on Meaning Analysis" (pp. 261-77) to chapter IV of Realism and the Aim of Science (volume 1 of the Postscript to the Logic of Scientific Discovery). In this addendum, Popper begins by making the following three claims:

1 What-is? questions, such as What is Justice? . . . are always pointless – without philosophical or scientific interest; and so are all answers to what-is? questions, such as definitions. It must be admitted that some definitions may sometimes be of help in answering other questions: urgent questions which cannot be dismissed: genuine difficulties which may have arisen in science or in philosophy. But what-is? questions as such do not raise this kind of difficulty.

2 It makes no difference whether a what-is question is raised in order to inquire into the essence or into the nature of a thing, or whether it is raised in order to inquire into the essential meaning or into the proper use of an expression. These kinds of what-is questions are fundamentally the same. Again, it must be admitted that an answer to a what-is question – for example, an answer pointing out distinctions between two meanings of a word which have often been confused – may not be without point, provided the confusion led to serious difficulties. But in this case, it is not the what-is question which we are trying to solve; we hope rather to resolve certain contradictions that arise from our reliance upon somewhat naïve intuitive ideas. (The . . . example discussed below – that of the ideas of a derivative and of an integral – will furnish an illustration of this case.) The solution may well be the elimination (rather than the clarification) of the naïve idea. But an answer to . . . a what-is question is never fruitful. . . .

3 The problem, more especially, of replacing an “inexact” term by an “exact” one – for example, the problem of giving a definition in “exact” or “precise” terms – is a pseudo-problem. It depends essentially upon the inexact and imprecise terms “exact” and “precise.” These are most misleading, not only because they strongly suggest that there exists what does not exist – absolute exactness or precision – but also because they are emotionally highly charged: under the guise of scientific character and of scientific objectivity, they suggest that precision or exactness is something superior, a kind of ultimate value, and that it is wrong, or unscientific, or muddle-headed, to use inexact terms (as it is indeed wrong not to speak as lucidly and simply as possible). But there is no such thing as an “exact” term, or terms made “precise” by “precise definitions.” Also, a definition must always use undefined terms in its definiens (since otherwise we should get involved in an infinite regress or in a circle); and if we have to operate with a number of undefined terms, it hardly matters whether we use a few more. Of course, if a definition helps to solve a genuine problem, the situation is different; and some problems cannot be solved without an increase of precision. Indeed, this is the only way in which we can reasonably speak of precision: the demand for precision is empty, unless it is raised relative to some requirements that arise from our attempts to solve a definite problem. (pp. 261-63)

Later in his addendum Popper provides an enlightening discussion of the historical development of calculus despite its lack of solid logical axiomatic foundation. The meaning of an infinitesimal or a derivative was anything but precise. It was, to use Athreya's aptly chosen term, a muddle. Mathematicians even came up with a symbol for the derivative. But they literally had no precise idea of what they were talking about. When mathematicians eventually came up with a definition for the derivative, the definition did not clarify what they were talking about; it just provided a particular method of calculating what the derivative would be. However, the absence of a rigorous and precise definition of the derivative did not prevent mathematicians from solving some enormously important practical problems, thereby helping to change the world and our understanding of it.
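
For readers who want the formula in front of them, the "precise" definition at issue (in standard modern notation, not anything peculiar to Popper's text) defines the derivative as the limit of a sequence of difference quotients:

$$ \frac{dy}{dx} \;=\; f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}. $$

As Popper goes on to argue, this tells us how to calculate the derivative; it does not capture what Newton and Leibniz thought a fluxion or an infinitesimal was.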

The modern history of the problem of the foundations of mathematics is largely, it has been asserted, the history of the "clarification" of the fundamental ideas of the differential and integral calculus. The concept of a derivative (the slope of a curve or the rate of increase of a function) has been made "exact" or "precise" by defining it as the limit of the quotient of differences (given a differentiable function); and the concept of an integral (the area or "quadrature" of a region enclosed by a curve) has likewise been "exactly defined". . . . Attempts to eliminate the contradictions in this field constitute not only one of the main motives of the development of mathematics during the last hundred or even two hundred years, but they have also motivated modern research into the "foundations" of the various sciences and, more particularly, the modern quest for precision or exactness. "Thus mathematicians," Bertrand Russell says, writing about one of the most important phases of this development, "were only awakened from their 'dogmatic slumbers' when Weierstrass and his followers showed that many of their most cherished propositions are in general false. Macaulay, contrasting the certainty of mathematics with the uncertainty of philosophy, asks who ever heard of a reaction against Taylor's theorem. If he had lived now, he himself might have heard of such a reaction, for his is precisely one of the theorems which modern investigations have overthrown. Such rude shocks to mathematical faith have produced that love of formalism which appears, to those who are ignorant of its motive, to be mere outrageous pedantry."

It would perhaps be too much to read into this passage of Russell’s his agreement with a view which I hold to be true: that without “such rude shocks” – that is to say, without the urgent need to remove contradictions – the love of formalism is indeed “mere outrageous pedantry.” But I think that Russell does convey his view that without an urgent need, an urgent problem to be solved, the mere demand for precision is indefensible.

But this is only a minor point. My main point is this. Most people, including mathematicians, look upon the definition of the derivative, in terms of limits of sequences, as if it were a definition in the sense that it analyses or makes precise, or “explicates,” the intuitive meaning of the definiendum – of the derivative. But this widespread belief is mistaken. . . .

Newton and Leibniz and their successors did not deny that a derivative, or an integral, could be calculated as a limit of certain sequences . . . . But they would not have regarded these limits as possible definitions, because they do not give the meaning, the idea, of a derivative or an integral.

For the derivative is a measure of a velocity, or a slope of a curve. Now the velocity of a body at a certain instant is something real – a concrete (relational) attribute of that body at that instant. By contrast the limit of a sequence of average velocities is something highly abstract – something that exists only in our thoughts. The average velocities themselves are unreal. Their unending sequence is even more so; and the limit of this unending sequence is a purely mathematical construction out of these unreal entities. Now it is intuitively quite obvious that this limit must numerically coincide with the velocity, and that, if the limit can be calculated, we can thereby calculate the velocity. But according to the views of Newton and his contemporaries, it would be putting the cart before the horse were we to define the velocity as being identical with this limit, rather than as a real state of the body at a certain instant, or at a certain point, of its track – to be calculated by any mathematical contrivance we may be able to think of.

The same holds of course for the slope of a curve in a given point. Its measure will be equal to the limit of a sequence of measures of certain other average slopes (rather than actual slopes) of this curve. But it is not, in its proper meaning or essence, a limit of a sequence: the slope is something we can sometimes actually draw on paper, and construct with compasses and rulers, while a limit is in essence something abstract, rarely actually reached or realized, but only approached, nearer and nearer, by a sequence of numbers. . . .

Or as Berkeley put it “. . . however expedient such analogies or such expressions may be found for facilitating the modern quadratures, yet we shall not find any light given us thereby into the original real nature of fluxions considered in themselves.” Thus mere means for facilitating our calculations cannot be considered as explications or definitions.

This was the view of all mathematicians of the period, including Newton and Leibniz. If we now look at the modern point of view, then we see that we have completely given up the idea of definition in the sense in which it was understood by the founders of the calculus, as well as by Berkeley. We have given up the idea of a definition which explains the meaning (for example of the derivative). This fact is veiled by our retaining the old symbol of "definition" for some equivalences which we use, not to explain the idea or the essence of a derivative, but to eliminate it. And it is veiled by our retention of the name "differential quotient" or "derivative," and the old symbol dy/dx which once denoted an idea which we have now discarded. For the name, and the symbol, now have no function other than to serve as labels for the definiens – the limit of a sequence.

Thus we have given up "explication" as a bad job. The intuitive idea, we found, led to contradictions. But we can solve our problems without it, retaining the bulk of the technique of calculation which originally was based upon the intuitive idea. Or more precisely we retain only this technique, as far as it was sound, and eliminate the idea with its help. The derivative and the integral are both eliminated; they are replaced, in effect, by certain standard methods of calculating limits. (pp. 266-70)

Not only have the original ideas of the founders of calculus been eliminated, because they ultimately could not withstand logical scrutiny, but a premature insistence on logical precision would have had disastrous consequences for the ultimate development of calculus.

It is fascinating to consider that this whole admirable development might have been nipped in the bud (as in the days of Archimedes) had the mathematicians of the day been more sensitive to Berkeley’s demand – in itself quite reasonable – that we should strictly adhere to the rules of logic, and to the rule of always speaking sense.

We now know that Berkeley was right when, in The Analyst, he blamed Newton . . . for obtaining . . . mathematical results in the theory of fluxions or “in the calculus differentialis” by illegitimate reasoning. And he was completely right when he indicated that [his] symbols were without meaning. “Nothing is easier,” he wrote, “than to devise expressions and notations, for fluxions and infinitesimals of the first, second, third, fourth, and subsequent orders. . . . These expressions indeed are clear and distinct, and the mind finds no difficulty in conceiving them to be continued beyond any assignable bounds. But if . . . we look underneath, if, laying aside the expressions, we set ourselves attentively to consider the things themselves which are supposed to be expressed or marked thereby, we shall discover much emptiness, darkness, and confusion . . . , direct impossibilities, and contradictions.”

But the mathematicians of his day did not listen to Berkeley. They got their results, and they were not afraid of contradictions as long as they felt that they could dodge them with a little skill. For the attempt to "analyse the meaning" or to "explicate" their concepts would, as we know now, have led to nothing. Berkeley was right: all these concepts were meaningless, in his sense and in the traditional sense of the word "meaning": they were empty, for they denoted nothing, they stood for nothing. Had this fact been realized at the time, the development of the calculus might have been stopped again, as it had been stopped before. It was the neglect of precision, the almost instinctive neglect of all meaning analysis or explication, which made the wonderful development of the calculus possible.

The problem underlying the whole development was, of course, to retain the powerful instrument of the calculus without the contradictions which had been found in it. There is no doubt that our present methods are more exact than the earlier ones. But this is not due to the fact that they use “exactly defined” terms. Nor does it mean that they are exact: the main point of the definition by way of limits is always an existential assertion, and the meaning of the little phrase “there exists a number” has become the centre of disturbance in contemporary mathematics. . . . This illustrates my point that the attribute of exactness is not absolute, and that it is inexact and highly misleading to use the terms “exact” and “precise” as if they had any exact or precise meaning. (pp. 270-71)

Popper sums up his discussion as follows:

My examples [I quoted only the first of the four examples as it seemed most relevant to Athreya's discussion] may help to emphasize a lesson taught by the whole history of science: that absolute exactness does not exist, not even in logic and mathematics (as illustrated by the example of the still unfinished history of the calculus); that we should never try to be more exact than is necessary for the solution of the problem in hand; and that the demand for "something more exact" cannot in itself constitute a genuine problem (except, of course, when improved exactness may improve the testability of some theory). (p. 277)

I apologize for stringing together this long series of quotes from Popper, but I think that it is important to understand that there is simply no scientific justification for the highly formalistic manner in which much modern economics is now carried out. Of course, other far more authoritative critics than I, like Mark Blaug and Richard Lipsey (also here), have complained about the insistence of modern macroeconomics on microfounded, axiomatized models regardless of whether those models generate better predictions than competing models. Their complaints have regrettably been ignored for the most part. I simply want to point out that a recent, and in many ways admirable, introduction to modern macroeconomics failed to provide a coherent justification for insisting on axiomatized models. It really wasn't the author's fault; a coherent justification doesn't exist.

Thomas Piketty and Joseph Schumpeter (and Gerard Debreu)

Everybody else seems to have an opinion about Thomas Piketty, so why not me? As if the last two months of Piketty-mania (reminiscent, to those of a certain age, of an earlier invasion of American shores, exactly 50 years ago, by four European rock-stars) were not enough, there has been a renewed flurry of interest this week in Piketty's blockbuster book, triggered by Chris Giles's recent criticism in the Financial Times of Piketty's use of wealth data, which mainly goes to show that, love him or hate him, people cannot get enough of Professor Piketty. Now I will admit upfront that I have not read Piketty's book, and from my superficial perusal of the recent criticisms, they seem less problematic than the missteps of Reinhart and Rogoff in claiming that, beyond a critical 90% ratio of national debt to national income, the burden of national debt begins to significantly depress economic growth. But in any event, my comments in this post are directed at Piketty's conceptual approach, not at his use of the data in his empirical work. In fact, I think that Larry Summers, in his superficially laudatory, but substantively critical, review has already made most of the essential points about Piketty's book. But I think that Summers left out a couple of important issues — issues touched upon usefully by George Cooper in a recent blog post about Piketty — which bear further emphasis.

Just to set the stage for my comments, here is my understanding of the main conceptual point of Piketty's book. Piketty believes that the essence of capitalism is that capital generates a return to the owners of capital that, on average over time, is equal to the rate of interest. Capital grows; it accumulates. And the rate of accumulation is equal to the rate of interest. However, the rate of interest is generally somewhat higher than the rate of growth of the economy. So if capital is accumulating at, say, 5% a year, while the economy is growing at only 3% a year, the share of income accruing to the owners of capital will grow over time. It is in this simple theoretical framework — the relationship between the rate of economic growth and the rate of interest — that Piketty believes he has found the explanation not only for the increase in inequality over the past few centuries of capitalist development, but for the especially rapid increase in inequality over the past 30 years.
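
To make the arithmetic concrete (the 5% and 3% figures are just the ones used above, and the formula is ordinary compounding, not anything taken from Piketty's book): if capital income grows at r = 5% a year while total income grows at g = 3% a year, the capital share of income is multiplied each year by (1+r)/(1+g), so that after t years

$$ \frac{s_t}{s_0} \;=\; \left(\frac{1+r}{1+g}\right)^{t} \;=\; \left(\frac{1.05}{1.03}\right)^{t}, $$

which is roughly 2 when t = 36. On this mechanical logic the capital share would double about every 36 years; such doubling obviously cannot continue indefinitely, since the share is bounded by one, which is one way of seeing the force of the objections discussed below.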

While praising Piketty’s scholarship, empirical research and rhetorical prowess, Summers does not handle Piketty’s main thesis gently. Summers points out that, as accumulation proceeds, the incentive to engage in further accumulation tends to weaken, so the iron law of increasing inequality posited by Piketty is not nearly as inflexible as Piketty suggests. Now one could respond that, once accumulation reaches a certain threshold, the capacity to consume weakens as well, if only, as Gary Becker liked to remind us, because of the constraint that time imposes on consumption.

Perhaps so, but the return to capital is not the only, or even the most important, source of inequality. I would interpret Summers' point to be the following: pure accumulation is unlikely to generate enough growth in wealth to outstrip the capacity to increase consumption. To generate an increase in wealth so large that consumption can't keep up, there must be not just a return to the ownership of capital; there must be profit in the Knightian or Schumpeterian sense, over and above the return on capital. Alternatively, there must be some extraordinary rent on a unique, irreproducible factor of production. Accumulation by itself, without the stimulus of entrepreneurial profit, reflecting the application of new knowledge in the broadest sense of the term, cannot go on for very long. It is entrepreneurial profits and rents to unique factors of production (or awards of government monopolies or other privileges), not plain-vanilla accumulation, that account for the accumulation of extraordinary amounts of wealth. Moreover, it seems that philanthropy (especially conspicuous philanthropy) provides an excellent outlet for the dissipation of accumulated wealth and can easily be combined with quasi-consumption activities, like art patronage or political activism, as more conventional consumption outlets become exhausted.

Summers backs up his conceptual criticism with a powerful factual argument. Comparing the Forbes list of the 400 richest individuals in 1982 with the Forbes list for 2012, Summers observes:

When Forbes compared its list of the wealthiest Americans in 1982 and 2012, it found that less than one tenth of the 1982 list was still on the list in 2012, despite the fact that a significant majority of members of the 1982 list would have qualified for the 2012 list if they had accumulated wealth at a real rate of even 4 percent a year. They did not, given pressures to spend, donate, or misinvest their wealth. In a similar vein, the data also indicate, contra Piketty, that the share of the Forbes 400 who inherited their wealth is in sharp decline.
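
Just to spell out the compounding arithmetic implicit in Summers's counterfactual: at a real return of 4 percent a year over the 30 years separating the two lists, a 1982 fortune would have grown by a factor of

$$ (1.04)^{30} \;\approx\; 3.2, $$

yet, as the passage notes, fewer than one in ten of the 1982 members actually remained on the list in 2012.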

But something else is also going on here, a misunderstanding, derived from a fundamental ambiguity, about what capital actually means. Capital can refer either to a durable physical asset or to a sum of money. When economists refer to capital as a factor of production, they are thinking of capital as a physical asset. But in most models, economists try to simplify the analysis by collapsing the diversity of the entire stock of heterogeneous capital assets into a single homogeneous substance called "capital" and then measure it not in terms of its physical units (which, given heterogeneity, is strictly impossible) but in terms of its value. This creates all kinds of problems, and it has led to some mighty arguments among economists ever since the latter part of the nineteenth century, when Carl Menger (the first Austrian economist) turned on his prize pupil Eugen von Böhm-Bawerk, who had written three dense volumes on the theory of capital and interest, and pronounced Böhm-Bawerk's theory of capital "the greatest blunder in the history of economics." I remember wanting to ask F. A. Hayek what he made of Menger's remark. Hayek, trying to restate Böhm-Bawerk's theory in a coherent form, wrote a volume about 75 years ago called The Pure Theory of Capital, which has probably been read from cover to cover by fewer than 100 living souls, and probably understood by fewer than 20 of those. But, to my eternal sorrow, I forgot to ask him that question the last time that I saw him.

At any rate, treating capital as a homogeneous substance that can be measured in terms of its value rather than in terms of physical units involves serious, perhaps intractable, problems. For certain purposes, it may be worthwhile to ignore those problems and work with a simplified model (a single output which can be consumed or used as a factor of production), but the magnitude of the simplification is rarely acknowledged. In his discussion, Piketty seems, as best as I could determine using obvious search terms on Amazon, unaware of the conceptual problems involved in speaking about capital as a homogeneous substance measured in terms of its value.

In the real world, capital is anything but homogeneous. It consists of an array of very specialized, often unique, physical embodiments. Once installed, physical capital is usually sunk, and its value is highly uncertain. In contrast to the imaginary model of a homogeneous substance that just seems to grow at a fixed natural rate, the real physical capital that is deployed in the process of producing goods and services is complex and ever-changing in its physical and economic characteristics, and the economic valuations associated with its various individual components are in perpetual flux. While the total value of all capital may be growing at a fairly steady rate over time, the values of the individual assets that constitute the total stock of capital fluctuate wildly, and few owners of physical capital have any guarantee that the value of their assets will appreciate at a steady rate over time.

Now one would have thought that an eminent scholar like Professor Piketty would, in the course of a 700-page book about capital, have had occasion to comment on the enormous diversity and ever-changing composition of the stock of physical capital. These changes are driven by a competitive process in which entrepreneurs constantly introduce new products and new methods of producing products, a competitive process that enriches some owners of new capital, and, it turns out, impoverishes others — owners of old, suddenly obsolete, capital. It is a process that Joseph Schumpeter described in his first great book, The Theory of Economic Development, and later memorably labeled "creative destruction." But neither that term nor the title of The Theory of Economic Development appears anywhere in Piketty's book, and Schumpeter's name appears only once, in connection not with the notion of creative destruction, but with his, possibly ironic, prediction in a later book, Capitalism, Socialism and Democracy, that socialism would eventually replace capitalism.

Thus, Piketty’s version of capitalist accumulation seems much too abstract and too far removed from the way in which great fortunes are amassed to provide real insight into the sources of increasing inequality. Insofar as such fortunes are associated with accumulation of capital, they are likely to be the result of the creation of new forms of capital associated with new products, or new production processes. The creation of new capital simultaneously destroys old forms of capital. New fortunes are amassed, and old ones dissipated. The model of steady accumulation that is at the heart of Piketty’s account of inexorably increasing inequality misses this essential feature of capitalism.

I don't say that Schumpeter's account of creative destruction means that increasing inequality is a trend that should be welcomed. There may well be arguments that capitalist development and creative destruction are socially inefficient. I have explained in previous posts (e.g., here, here, and here) why I think that a lot of financial-market activity is likely to be socially wasteful. Similar arguments might be made about other kinds of activities in non-financial markets where the private gain exceeds the social gain. Winner-take-all markets, which seem to be characterized by this divergence between private and social benefits and costs, and which apparently account for a growing share of economic activity, are an obvious source of inequality. But what I find most disturbing about the growth in inequality over the past 30 years is that great wealth has gained increased social status. That seems to me to be a very unfortunate change in public attitudes. I have no problem with people getting rich, even filthy rich. But don't expect me to admire them because they are rich.

Finally, you may be wondering what all of this has to do with Gerard Debreu. Well, nothing really, but I couldn't help noticing that Piketty refers in an endnote (p. 654) to "the work of Adam Smith, Friedrich Hayek, and Kenneth Arrow and Claude Debreu," apparently forgetting that the name of his famous countryman, winner of the Nobel Memorial Prize for Economics in 1983, is not Claude, but Gerard, Debreu. Perhaps Piketty confused Debreu with another eminent Frenchman, Claude Debussy, but I hope that in the next printing of his book, Piketty will correct this unfortunate error.

UPDATE (5/29 at 9:46 EDST): Thanks to Kevin Donoghue for checking with Arthur Goldhammer, who translated Piketty’s book from the original French. Goldhammer took responsibility for getting Debreu’s first name wrong in the English edition. In the French edition, only Debreu’s last name was mentioned.


