Another Complaint about Modern Macroeconomics

In discussing modern macroeconomics, I have often mentioned my discomfort with a narrow view of microfoundations, but I haven’t commented very much on another disturbing feature of modern macro: the requirement that theoretical models be spelled out fully in axiomatic form. The rhetoric of axiomatization has had sweeping success in economics, making axiomatization a prerequisite for almost any theoretical paper to be taken seriously, or even to be considered for publication in a reputable economics journal.

The idea that a good scientific theory must be derived from a formal axiomatic system has little if any foundation in the methodology or history of science. Nevertheless, it has become almost an article of faith in modern economics. I do not know, though I would be interested to find out, whether, and if so how widely, this misunderstanding has been propagated in other (purportedly) empirical disciplines. The requirement of the axiomatic method in economics betrays a kind of snobbishness and (I use this word advisedly, see below) pedantry, resulting, it seems, from a misunderstanding of good scientific practice.

Before discussing the situation in economics, I would note that axiomatization did not become a major issue for mathematicians until late in the nineteenth century (though demands for logical precision, luckily ignored for the most part, followed immediately upon the invention of the calculus by Newton and Leibniz) and led ultimately to the publication of the great work of Russell and Whitehead, Principia Mathematica, whose goal was to show that all of mathematics could be derived from the axioms of pure logic. This is yet another example of an unsuccessful reductionist attempt, though it seemed for a while that the Principia paved the way for the desired reduction. But 20 years after the Principia was published, Kurt Gödel proved his famous incompleteness theorem, showing that, as a matter of pure logic, not even all the valid propositions of arithmetic, much less all of mathematics, could be derived from any consistent system of axioms. This doesn’t mean that trying to achieve a reduction of a higher-level discipline to another, deeper discipline is not a worthy objective, but it certainly does mean that one cannot just dismiss, out of hand, a discipline simply because all of its propositions are not deducible from some set of fundamental propositions. Insisting on reduction as a prerequisite for scientific legitimacy is not a scientific attitude; it is merely a form of obscurantism.

As far as I know, which admittedly is not all that far, the only empirical science which has been axiomatized to any significant extent is theoretical physics. In his famous list of 23 unsolved mathematical problems, the great mathematician David Hilbert included the following (number 6):

Mathematical Treatment of the Axioms of Physics. The investigations on the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which already today mathematics plays an important part, in the first rank are the theory of probabilities and mechanics.

As to the axioms of the theory of probabilities, it seems to me desirable that their logical investigation should be accompanied by a rigorous and satisfactory development of the method of mean values in mathematical physics, and in particular in the kinetic theory of gases. . . . Boltzmann’s work on the principles of mechanics suggests the problem of developing mathematically the limiting processes, there merely indicated, which lead from the atomistic view to the laws of motion of continua.

The point that I want to underscore here is that axiomatization was supposed to ensure that there was an adequate logical underpinning for theories (i.e., probability and the kinetic theory of gases) that had already been largely worked out. Thus, Hilbert proposed axiomatization not as a method of scientific discovery, but as a method of checking for hidden errors and problems. Error checking is certainly important for science, but it is clearly subordinate to the creation and empirical testing of new and improved scientific theories.

The fetish for axiomatization in economics can largely be traced to Gerard Debreu’s great work, Theory of Value: An Axiomatic Analysis of Economic Equilibrium, in which Debreu, building on his own work and that of Kenneth Arrow, presented a formal description of a decentralized competitive economy with both households and business firms, and proved that, under the standard assumptions of neoclassical theory (notably diminishing marginal rates of substitution in consumption and production and perfect competition), such an economy would have at least one, and possibly more than one, equilibrium.
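
To fix ideas, the existence claim can be stated compactly (this is the standard textbook gloss, not Debreu’s own notation). Writing z(p) for the economy’s aggregate excess-demand function, an equilibrium is a price vector p* such that

\[
z(p^{*}) \le 0, \qquad p^{*} \ge 0, \qquad p^{*} \cdot z(p^{*}) = 0,
\]

that is, every market clears, with any good in excess supply carrying a zero price. The existence proof proceeds by applying a fixed-point theorem (Kakutani’s) to a mapping constructed from z.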

A lot of effort subsequently went into gaining a better understanding of the necessary and sufficient conditions under which an equilibrium exists, and when that equilibrium would be unique and Pareto optimal. The subsequent work was then brilliantly summarized and extended in another great work, General Competitive Analysis by Arrow and Frank Hahn. Unfortunately, those two books, paragons of the axiomatic method, set a bad example for the future development of economic theory, which embarked on a needless and counterproductive quest for increasing logical rigor instead of empirical relevance.

A few months ago, I wrote a review of Kartik Athreya’s book Big Ideas in Macroeconomics. One of the arguments of Athreya’s book that I didn’t address was his defense of modern macroeconomics against the complaint that modern macroeconomics is too mathematical. Athreya is not responsible for the reductionist and axiomatic fetishes of modern macroeconomics, but he faithfully defends them against criticism. So I want to comment on a few paragraphs in which Athreya dismisses criticism of formalism and axiomatization.

Natural science has made significant progress by proceeding axiomatically and mathematically, and whether or not we [economists] will achieve this level of precision for any unit of observation in macroeconomics, it is likely to be the only rational alternative.

First, let me observe that axiomatization is not the same as using mathematics to solve problems. Many problems in economics cannot easily be solved without using mathematics, and sometimes it is useful to solve a problem in a few different ways, each way potentially providing some further insight into the problem not provided by the others. So I am not at all opposed to the use of mathematics in economics. However, the choice of tools to solve a problem should bear some reasonable relationship to the problem at hand. A good economist will understand what tools are appropriate to the solution of a particular problem. While mathematics has clearly been enormously useful to the natural sciences and to economics in solving problems, there are very few scientific advances that can be ascribed to axiomatization. Axiomatization was vital in proving the existence of equilibrium, but substantive refutable propositions about real economies, e.g., the Heckscher-Ohlin Theorem, the Factor-Price Equalization Theorem, or the law of comparative advantage, were not discovered or empirically tested by way of axiomatization. Athreya talks about economics achieving the “level of precision” achieved by natural science, but the concept of precision is itself hopelessly imprecise, and to set precision up as an independent goal makes no sense. Athreya continues:

In addition to these benefits from the systematic [i.e. axiomatic] approach, there is the issue of clarity. Lowering mathematical content in economics represents a retreat from unambiguous language. Once mathematized, words in any given model cannot ever mean more than one thing. The unwillingness to couch things in such narrow terms (usually for fear of “losing something more intelligible”) has, in the past, led to a great deal of essentially useless discussion.

Athreya writes as if the only source of ambiguity is imprecise language. That just isn’t so. Is unemployment voluntary or involuntary? Athreya actually discusses the question intelligently on p. 283, in the context of search models of unemployment, but I don’t think that he could have provided any insight into that question with a purely formal, symbolic treatment. Again back to Athreya:

The plaintive expressions of “fear of losing something intangible” are concessions to the forces of muddled thinking. The way modern economics gets done, you cannot possibly not know exactly what the author is assuming – and to boot, you’ll have a foolproof way of checking whether their claims of what follows from these premises is actually true or not.

So let me juxtapose this brief passage from Athreya with a rather longer passage from Karl Popper in which he effectively punctures the fallacies underlying the specious claims made on behalf of formalism and against ordinary language. The extended quotations are from an addendum titled “Critical Remarks on Meaning Analysis” (pp. 261-77) to chapter IV of Realism and the Aim of Science (volume 1 of the Postscript to the Logic of Scientific Discovery). In this addendum, Popper begins by making the following three claims:

1 What-is? questions, such as What is Justice? . . . are always pointless – without philosophical or scientific interest; and so are all answers to what-is? questions, such as definitions. It must be admitted that some definitions may sometimes be of help in answering other questions: urgent questions which cannot be dismissed: genuine difficulties which may have arisen in science or in philosophy. But what-is? questions as such do not raise this kind of difficulty.

2 It makes no difference whether a what-is question is raised in order to inquire into the essence or into the nature of a thing, or whether it is raised in order to inquire into the essential meaning or into the proper use of an expression. These kinds of what-is questions are fundamentally the same. Again, it must be admitted that an answer to a what-is question – for example, an answer pointing out distinctions between two meanings of a word which have often been confused – may not be without point, provided the confusion led to serious difficulties. But in this case, it is not the what-is question which we are trying to solve; we hope rather to resolve certain contradictions that arise from our reliance upon somewhat naïve intuitive ideas. (The . . . example discussed below – that of the ideas of a derivative and of an integral – will furnish an illustration of this case.) The solution may well be the elimination (rather than the clarification) of the naïve idea. But an answer to . . . a what-is question is never fruitful. . . .

3 The problem, more especially, of replacing an “inexact” term by an “exact” one – for example, the problem of giving a definition in “exact” or “precise” terms – is a pseudo-problem. It depends essentially upon the inexact and imprecise terms “exact” and “precise.” These are most misleading, not only because they strongly suggest that there exists what does not exist – absolute exactness or precision – but also because they are emotionally highly charged: under the guise of scientific character and of scientific objectivity, they suggest that precision or exactness is something superior, a kind of ultimate value, and that it is wrong, or unscientific, or muddle-headed, to use inexact terms (as it is indeed wrong not to speak as lucidly and simply as possible). But there is no such thing as an “exact” term, or terms made “precise” by “precise definitions.” Also, a definition must always use undefined terms in its definiens (since otherwise we should get involved in an infinite regress or in a circle); and if we have to operate with a number of undefined terms, it hardly matters whether we use a few more. Of course, if a definition helps to solve a genuine problem, the situation is different; and some problems cannot be solved without an increase of precision. Indeed, this is the only way in which we can reasonably speak of precision: the demand for precision is empty, unless it is raised relative to some requirements that arise from our attempts to solve a definite problem. (pp. 261-63)

Later in his addendum Popper provides an enlightening discussion of the historical development of calculus despite its lack of solid logical axiomatic foundation. The meaning of an infinitesimal or a derivative was anything but precise. It was, to use Athreya’s aptly chosen term, a muddle. Mathematicians even came up with a symbol for the derivative. But they literally had no precise idea of what they were talking about. When mathematicians eventually came up with a definition for the derivative, the definition did not clarify what they were talking about; it just provided a particular method of calculating what the derivative would be. However, the absence of a rigorous and precise definition of the derivative did not prevent mathematicians from solving some enormously important practical problems, thereby helping to change the world and our understanding of it.
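
For concreteness, the definitions at issue are the familiar modern ones (standard notation, not Popper’s):

\[
f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}, \qquad
\int_a^b f(x)\,dx \;=\; \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i^{*})\,\Delta x .
\]

Each is a recipe for computing a number, not an account of what a slope or an area is; that, as we shall see, is precisely Popper’s point.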

The modern history of the problem of the foundations of mathematics is largely, it has been asserted, the history of the “clarification” of the fundamental ideas of the differential and integral calculus. The concept of a derivative (the slope of a curve or the rate of increase of a function) has been made “exact” or “precise” by defining it as the limit of the quotient of differences (given a differentiable function); and the concept of an integral (the area or “quadrature” of a region enclosed by a curve) has likewise been “exactly defined”. . . . Attempts to eliminate the contradictions in this field constitute not only one of the main motives of the development of mathematics during the last hundred or even two hundred years, but they have also motivated modern research into the “foundations” of the various sciences and, more particularly, the modern quest for precision or exactness. “Thus mathematicians,” Bertrand Russell says, writing about one of the most important phases of this development, “were only awakened from their ‘dogmatic slumbers’ when Weierstrass and his followers showed that many of their most cherished propositions are in general false. Macaulay, contrasting the certainty of mathematics with the uncertainty of philosophy, asks who ever heard of a reaction against Taylor’s theorem. If he had lived now, he himself might have heard of such a reaction, for this is precisely one of the theorems which modern investigations have overthrown. Such rude shocks to mathematical faith have produced that love of formalism which appears, to those who are ignorant of its motive, to be mere outrageous pedantry.”

It would perhaps be too much to read into this passage of Russell’s his agreement with a view which I hold to be true: that without “such rude shocks” – that is to say, without the urgent need to remove contradictions – the love of formalism is indeed “mere outrageous pedantry.” But I think that Russell does convey his view that without an urgent need, an urgent problem to be solved, the mere demand for precision is indefensible.

But this is only a minor point. My main point is this. Most people, including mathematicians, look upon the definition of the derivative, in terms of limits of sequences, as if it were a definition in the sense that it analyses or makes precise, or “explicates,” the intuitive meaning of the definiendum – of the derivative. But this widespread belief is mistaken. . . .

Newton and Leibniz and their successors did not deny that a derivative, or an integral, could be calculated as a limit of certain sequences . . . . But they would not have regarded these limits as possible definitions, because they do not give the meaning, the idea, of a derivative or an integral.

For the derivative is a measure of a velocity, or a slope of a curve. Now the velocity of a body at a certain instant is something real – a concrete (relational) attribute of that body at that instant. By contrast the limit of a sequence of average velocities is something highly abstract – something that exists only in our thoughts. The average velocities themselves are unreal. Their unending sequence is even more so; and the limit of this unending sequence is a purely mathematical construction out of these unreal entities. Now it is intuitively quite obvious that this limit must numerically coincide with the velocity, and that, if the limit can be calculated, we can thereby calculate the velocity. But according to the views of Newton and his contemporaries, it would be putting the cart before the horse were we to define the velocity as being identical with this limit, rather than as a real state of the body at a certain instant, or at a certain point, of its track – to be calculated by any mathematical contrivance we may be able to think of.

The same holds of course for the slope of a curve in a given point. Its measure will be equal to the limit of a sequence of measures of certain other average slopes (rather than actual slopes) of this curve. But it is not, in its proper meaning or essence, a limit of a sequence: the slope is something we can sometimes actually draw on paper, and construct with compasses and rulers, while a limit is in essence something abstract, rarely actually reached or realized, but only approached, nearer and nearer, by a sequence of numbers. . . .

Or as Berkeley put it “. . . however expedient such analogies or such expressions may be found for facilitating the modern quadratures, yet we shall not find any light given us thereby into the original real nature of fluxions considered in themselves.” Thus mere means for facilitating our calculations cannot be considered as explications or definitions.

This was the view of all mathematicians of the period, including Newton and Leibniz. If we now look at the modern point of view, then we see that we have completely given up the idea of definition in the sense in which it was understood by the founders of the calculus, as well as by Berkeley. We have given up the idea of a definition which explains the meaning (for example of the derivative). This fact is veiled by our retaining the old symbol of “definition” for some equivalences which we use, not to explain the idea or the essence of a derivative, but to eliminate it. And it is veiled by our retention of the name “differential quotient” or “derivative,” and the old symbol dy/dx which once denoted an idea which we have now discarded. For the name, and the symbol, now have no function other than to serve as labels for the definiens – the limit of a sequence.

Thus we have given up “explication” as a bad job. The intuitive idea, we found, led to contradictions. But we can solve our problems without it, retaining the bulk of the technique of calculation which originally was based upon the intuitive idea. Or more precisely we retain only this technique, as far as it was sound, and eliminate the idea with its help. The derivative and the integral are both eliminated; they are replaced, in effect, by certain standard methods of calculating limits. (pp. 266-70)
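
To make “standard methods of calculating limits” concrete, here is a minimal, purely illustrative sketch in Python (mine, not Popper’s). It computes the slope of x^3 at x = 2 exactly as the modern definition licenses, by grinding out difference quotients for ever smaller h, without ever invoking any “idea” of the derivative:

    # Approximate the derivative of f at x by the difference quotient
    # (f(x+h) - f(x)) / h for shrinking h: the "standard method of
    # calculating limits" that, on Popper's account, replaced the
    # intuitive idea of a derivative.
    def difference_quotient(f, x, h):
        return (f(x + h) - f(x)) / h

    f = lambda x: x ** 3   # example function; the true slope at x = 2 is 12
    x = 2.0

    for h in [1.0, 0.1, 0.01, 0.001, 0.0001]:
        print(f"h = {h:<8} quotient = {difference_quotient(f, x, h):.6f}")
    # The quotients approach 12. Nothing here "explicates" what a
    # derivative is; the definition merely licenses the calculation.

The calculation succeeds, and that is all the modern definition asks of it.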

Not only have the original ideas of the founders of calculus been eliminated, because they ultimately could not withstand logical scrutiny, but a premature insistence on logical precision would have had disastrous consequences for the ultimate development of calculus.

It is fascinating to consider that this whole admirable development might have been nipped in the bud (as in the days of Archimedes) had the mathematicians of the day been more sensitive to Berkeley’s demand – in itself quite reasonable – that we should strictly adhere to the rules of logic, and to the rule of always speaking sense.

We now know that Berkeley was right when, in The Analyst, he blamed Newton . . . for obtaining . . . mathematical results in the theory of fluxions or “in the calculus differentialis” by illegitimate reasoning. And he was completely right when he indicated that [his] symbols were without meaning. “Nothing is easier,” he wrote, “than to devise expressions and notations, for fluxions and infinitesimals of the first, second, third, fourth, and subsequent orders. . . . These expressions indeed are clear and distinct, and the mind finds no difficulty in conceiving them to be continued beyond any assignable bounds. But if . . . we look underneath, if, laying aside the expressions, we set ourselves attentively to consider the things themselves which are supposed to be expressed or marked thereby, we shall discover much emptiness, darkness, and confusion . . . , direct impossibilities, and contradictions.”

But the mathematicians of his day did not listen to Berkeley. They got their results, and they were not afraid of contradictions as long as they felt that they could dodge them with a little skill. For the attempt to “analyse the meaning” or to “explicate” their concepts would, as we know now, have led to nothing. Berkeley was right: all these concepts were meaningless, in his sense and in the traditional sense of the word “meaning”: they were empty, for they denoted nothing, they stood for nothing. Had this fact been realized at the time, the development of the calculus might have been stopped again, as it had been stopped before. It was the neglect of precision, the almost instinctive neglect of all meaning analysis or explication, which made the wonderful development of the calculus possible.

The problem underlying the whole development was, of course, to retain the powerful instrument of the calculus without the contradictions which had been found in it. There is no doubt that our present methods are more exact than the earlier ones. But this is not due to the fact that they use “exactly defined” terms. Nor does it mean that they are exact: the main point of the definition by way of limits is always an existential assertion, and the meaning of the little phrase “there exists a number” has become the centre of disturbance in contemporary mathematics. . . . This illustrates my point that the attribute of exactness is not absolute, and that it is inexact and highly misleading to use the terms “exact” and “precise” as if they had any exact or precise meaning. (pp. 270-71)

Popper sums up his discussion as follows:

My examples [I quoted only the first of the four examples as it seemed most relevant to Athreya’s discussion] may help to emphasize a lesson taught by the whole history of science: that absolute exactness does not exist, not even in logic and mathematics (as illustrated by the example of the still unfinished history of the calculus); that we should never try to be more exact than is necessary for the solution of the problem in hand; and that the demand for “something more exact” cannot in itself constitute a genuine problem (except, of course, when improved exactness may improve the testability of some theory). (p. 277)

I apologize for stringing together this long series of quotes from Popper, but I think that it is important to understand that there is simply no scientific justification for the highly formalistic manner in which much modern economics is now carried out. Of course, other far more authoritative critics than I, like Mark Blaug and Richard Lipsey (also here), have complained about the insistence of modern macroeconomics on microfounded, axiomatized models regardless of whether those models generate better predictions than competing models. Their complaints have regrettably been ignored for the most part. I simply want to point out that a recent, and in many ways admirable, introduction to modern macroeconomics failed to provide a coherent justification for insisting on axiomatized models. It really wasn’t the author’s fault; a coherent justification doesn’t exist.

37 Responses to “Another Complaint about Modern Macroeconomics”


  1. Marcus Nunes July 15, 2014 at 4:22 pm

    David
    Great post. (I feel so much more “comfortable”)

  2. Jason Smith July 15, 2014 at 4:49 pm

    Interesting post. It gives me a better sense of what is going on regarding methodology in economics.

    As a former practicing particle physicist, I can say that axiomatic quantum field theory was viewed as on a par with building elaborate model trains.

    The only reason to write axioms for e.g. derivatives dy/dx is in order to apply the concept more generally … to curved manifolds or fractal spaces. But the motivation to do so was empirical success in continuum mechanics. Basically, economics should only build axioms after it has empirical success *and* wants to move outside of economics into … I don’t know … ecology or other social sciences.

    If you invent a tool like a hammer, you can continue just fine using it for the original purpose, e.g., nails in wood. But if you want to use it for another purpose, like for a diamond chisel, only then should you figure out in detail how it works.

  3. Min July 15, 2014 at 6:11 pm

    As a non-economist, I welcome the axiomatization of economics because it lays bare wacko assumptions that economists make. Sorry to sound disparaging, but I have been following economics blogs for a few years now, and was surprised to find that much of it seems like scholasticism to me.

    I agree that in the main science does not advance by logic.

  4. Brian Romanchuk July 15, 2014 at 6:32 pm

    I did my doctorate in Control Systems engineering, which is a branch of applied mathematics. My feeling is that the mathematical modelling used in economics has a lot of parallels to that field. But control systems is not a “science”; rather, it is a study of the properties of mathematical models of a particular form (a target system and a controller). The models themselves are derived from the physical properties of the systems, based on mechanics, electronics, or chemistry.

    The advantage of mathematisation is that the results are precise, and thus should be testable. But it makes no sense to confuse a mathematical model with a real world system.

    In my view, this is where DSGE modelling breaks down. Fairly mystical properties are imputed to a mathematical model (“rationality”, “complete markets”), but the models are not properly solved. Instead a kludgy linearisation step is taken, which wipes out most of the mathematical constraints that allegedly exist. Since the true solution is unavailable, researchers just make conjectures about the properties of the solution under certain conditions (what happens when rates hit zero?). Thus the situation is even less clear than would be the case if you relied on verbal descriptions of the economy.
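
    [Editorial gloss, not Brian’s notation: generically, the linearisation step he criticizes is a first-order Taylor expansion of the model’s equilibrium conditions around the steady state x*,

    \[
    x_{t+1} \;\approx\; x^{*} + J(x^{*})\,(x_t - x^{*}),
    \]

    where J is the Jacobian evaluated at x*. By construction everything nonlinear, including occasionally binding constraints such as a zero lower bound on interest rates, is discarded.]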

  5. Blue Aurora July 15, 2014 at 7:36 pm

    David Glasner: Did you actually major in mathematics as an undergraduate?

  6. Kevin H July 15, 2014 at 8:58 pm

    Very thought-provoking post. The “Mark Blaug” hyperlink links to an Amazon page for a book by Berkeley; I assume that it shouldn’t? With regards to how far this has been carried in other disciplines, I was reading an interview with philosopher John Searle, and this bit stuck out:

    Q: What’s your view of the state of philosophy at the moment?
    A: I think it’s in terrible shape! […] Well, what has happened in the subject I started out with, the philosophy of language, is that, roughly speaking, formal modeling has replaced insight. My own conception is that the formal modeling by itself does not give us any insight into the function of language.

    No, this isn’t an empirical area of study, but replace a few words here and there and I think this could describe the state of economics reasonably well.

  7. Wei-Yang Tham July 16, 2014 at 12:14 am

    David, your target in this post was macroeconomics. Do you see this as a problem at all in other fields, say, micro theory?

  8. Kyle Chadwick July 16, 2014 at 9:59 am

    This is an appeal to authority. Didn’t Popper’s view have any detractors?

  9. David Glasner July 17, 2014 at 9:40 am

    Marcus, Thanks, glad you feel that way.

    Jason, Good to hear that physicists properly assess the contribution of axiomatic quantum field theory.

    Min, Which wacko assumptions are you referring to?

    Brian, Thanks for that helpful assessment of DSGE models.

    Blue Aurora, No I was an econ major. I took two years of calculus including linear algebra, a year of probability, and game theory and linear programming. So I am barely numerate.

    Kevin, Thanks for flagging the bad link, which I have corrected. I greatly admire John Searle, and it is nice to see that we seem to be on the same page.

    Wei-Yang Tham, I think that modern micro theory suffers from many of the same problems, but at least micro theory is better worked out than macro theory. That would be a reason to reduce macro theory to micro theory if such a reduction had been achieved, but it hasn’t. Modern macro pretends that there has been a reduction and pretends to be doing science by axiomatising its faux reduction.

    Kyle, I don’t think so. I have simply shown that Popper dismissed axiomatization on logical and scientific grounds. You, or anyone else, are free to show what is wrong with his historical facts or his philosophical argument.

  10. Ilya July 17, 2014 at 1:26 pm

    Wonderful post David!

    You mention in the comments that you have minimal training in mathematics. Do you have trouble reading mainstream macro papers, which have so much math in them?

  11. Min July 17, 2014 at 2:48 pm

    Thanks for asking. 🙂 I will be glad to hear your response, but I have made it a rule to avoid arguing on economics blogs, if possible. My purpose is to learn. So I do not expect to get into any deep discussion here.

    For starters, how about the meta-wacko-assumption assumption of Milton Friedman’s that assumptions can be wacko if they lead to good predictions. I have some sympathy for that claim, as it can be difficult to come up with reasons for empirical results. For instance, how does gravity work? Is there action at a distance (a wacko assumption on its face)? But if your assumptions are testable, they are rightly as open to testing as your predictions. Not that much of economics is tested empirically, anyway.

    Along those lines, take the claim that competing theories cannot be tested against data because the theories can be tweaked to fit the data. That hasn’t stopped the machine learning people, who fit theories to part of the data and test them against the rest (a minimal sketch of this hold-out idea follows this comment). You can even do it in several ways.

    How about the idea of utility maximization, or profit maximization? Seasoned poker players know that that’s a good way to go bust. Business people know that it’s a way to go bankrupt. Greed is not good. Besides, there is good math, going back to Bernoulli, that shows that you need to consider your bankroll before taking on risk.

    How about Rational Expectations or Ricardian Equivalence? I think it was in 2009 (maybe 2010) that I first encountered the argument on economic blogs that gov’t stimulus would not do any good because it would mean future taxes and people would save the stimulus money against those taxes instead of spending it. That was so obviously wacko that I could scarcely believe it, but nobody was pointing out the absurdity. So I did, despite my general ignorance of economics. First, there is no day of reckoning, except perhaps when there is a social upheaval that dwarfs the question of taxes. (I later found out that Adam Smith, no less, pointed out that gov’ts roll over their debt. And that was during gold standard days.) Second, even if you consider it to be rational to set the stimulus money aside to pay future taxes (even if they are paid for by your distant heirs), people do not act that way. You do not have to be a psychologist to see that. But maybe you have to be an economist not to see it. Third, even if you grant the argument, there are people who will simply not be able to save any stimulus money that you put in their hands, because they are too poor. The Rational Expectations argument is an argument for giving money to the poor. What economist pointed that out? None that I saw or heard of at that time. Perhaps that was because of the not so wacko assumption of representative agents. That is an assumption that I have sympathy with, because I know that it is not easy to build good models. But it is an assumption of convenience. Let’s get real.

    Enough. You get the idea. 🙂
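
    [A minimal sketch, in Python, of the hold-out idea Min describes above; the data and the fitted “theory” here are invented purely for illustration:]

        # Fit a model on part of the data and test it on the rest, so that a
        # theory "tweaked to fit" is still disciplined by data it never saw.
        import numpy as np

        np.random.seed(0)
        x = np.random.uniform(0, 10, 200)
        y = 2.0 * x + 1.0 + np.random.normal(0, 1, 200)  # "true" relation plus noise

        x_fit, y_fit = x[:150], y[:150]      # 150 points to fit the theory
        x_test, y_test = x[150:], y[150:]    # 50 held-out points to test it

        slope, intercept = np.polyfit(x_fit, y_fit, 1)   # fit a line to the fit set
        pred = slope * x_test + intercept                # predict the held-out set
        print("out-of-sample RMSE:", np.sqrt(np.mean((pred - y_test) ** 2)))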

  12. Krzysztof Wolyniec July 17, 2014 at 10:32 pm

    All those quotes from Popper are a bit long-winded, but we might just jump to pretty well known examples from (quantum) physics that neatly illustrate the point:

    Dirac’s concept of the state (the ket) had no clear mathematical foundations until much later, when von Neumann managed to axiomatize the concepts with all the Hilbert space entertainment.

    Same thing with Dirac’s delta, which at the time of its introduction was seen as mathematically contradictory till the late 40’s/early 50’s when distribution spaces were introduced by Schwartz to clarify the concept.

    The Hamiltonian of the hydrogen atom was not proved to be self-adjoint till the 1950s!

    And so on and so forth. None of those mathematical traps stopped anybody from using the concepts for decades with a lot of success before the language got clarified.

    It’s especially laughable to insist on axiomatization in econ where the data sources are so sparse and complexity cannot resolve the noise at all.

  13. Tom Brown July 17, 2014 at 11:28 pm

    Min, I’m curious, was this a typo?

    “For starters, how about the meta-wacko-assumption assumption of Milton Friedman’s that assumptions can be wacko if they lead to good predictions.”

    Did you mean to write “can’t be wacko?”

    Actually I don’t follow the logic of that paragraph, typo or not. First I’ll assume a typo:

    You state MF makes a wacko assumption that good predictions imply non-wacko assumptions. You say you have sympathy for MF’s claim, and give an example in support (gravity). Gravity appears at first to have a wacko assumption, but good predictions imply that the assumption is not wacko: thus supporting MF’s claim (because we all know and love gravity). Now the “but” I was waiting for: but testable assumptions imply testable predictions [again, I’m a bit confused: if we can test assumptions directly, then why test predictions? This makes more sense the other way around doesn’t it?: testable predictions imply testable assumptions (via indirectly testing the predictions)]. Then you state that not much of economics is tested empirically. I never figured out how MF’s assumptions were wacko here. Can you elaborate?

    Assuming you didn’t make a typo, MF’s statement seems very odd: good predictions imply the possibility of wacko assumptions? But surely bad predictions also imply this, so MF’s statement appears meaningless. Is that the wacko part? Why the rest of the paragraph then?

  14. Min July 18, 2014 at 7:00 am

    Tom, no typo. Friedman said that wacko assumptions can be OK if they lead to good predictions. See “The methodology of positive economics” (1966) to see what I am talking about. For an absurd example, suppose that I predicted that Obama would beat Romney because Obama is richer than Romney. Obama won, but the assumption that he is richer than Romney is testable, and wrong. Just because the prediction was right is not enough to justify the assumption. Any number of people have ridiculed Friedman’s assertion.

  15. Alek July 18, 2014 at 8:02 am

    I feel there is a major justification for the axiomatization of economic theory in that the axioms we use seem to play the role of introducing moral content into our equations.

    One of the greatest achievements of game theory (especially cooperative game theory and voting theory) and equilibrium theory (as in Debreu, or earlier in Hicks) is that positive results are tied to normative principles.

    Now we are able to debate the morality and know where to introduce it, so that our equations are known to stray from morality when they violate our axioms.

    It is often tricky for the economist to remember that his equations are not morally neutral, even when including something in a model just to make it more logically consistent (Nash bargaining in monetarist models, or sticky wages in New Keynesian DSGE models).

    in conclusion i think there is too much math and axioms in economics, but not for any of the reasons you talk about 😉

  16. Tom Brown July 18, 2014 at 9:30 am

    Min, thanks. That all makes sense now.

  17. David Glasner July 18, 2014 at 9:41 am

    Ilya, Thanks. I wouldn’t call it exactly minimal; I was being facetious when I said that I’m barely numerate, but many papers are either over my head or would require too much time for me to work through carefully, so I try to see if I can understand the basic intuition without worrying about the derivations.

    Min, As was often the case, Friedman had a point when he warned against being overly concerned about whether assumptions are strictly accurate, but went on to exaggerate its significance. Sometimes a false assumption will work. This is another version of Popper’s point about precision. Some physics problems can be solved well by assuming (falsely) that there is no friction. Some physics problems can’t be solved unless you assume that there is friction. It depends.

    There seems to be a paradox lurking in your paragraph about profit maximization. If profit maximization leads to bankruptcy, something has gone awry and I can’t figure out exactly where, but I think it’s that you have misunderstood what profit maximization entails.

    I have criticized rational expectations many times on this blog, and I don’t think that Ricardian equivalence is that widely accepted. At any rate, Ricardian equivalence is a theorem not an assumption.

    Alek, Most economists still at least give lip-service to Max Weber’s principle of value-neutrality in the social sciences.

  18. Alek July 18, 2014 at 12:39 pm

    hmm, value-neutrality never came up once in grad school, and definitely never in any workshops.

    but had someone ever claimed to be objectively value-neutral in their models, i hope i would have told them to think harder about what pareto optimality means.

    any social objective function by definition takes a moral stance, and i posit that stance is mostly encapsulated in the axioms they satisfy.

  19. Min July 18, 2014 at 3:59 pm

    David Glasner: “There seems to be a paradox lurking in your paragraph about profit maximization. If profit maximization leads to bankruptcy, something has gone awry and I can’t figure out exactly where, but I think it’s that you have misunderstood what profit maximization entails.”

    Sorry, I was writing too quickly. First, I am talking about expected utility and profit maximization. I remember when, as a young card player, I started playing for real money. Whereas before I had pressed for the maximum on every hand, I realized that that was not a good strategy over the long run. I had to learn money management. The right idea, I now think, is to maximize return on investment, which is not the same thing. The idea of not trying to maximize profits is something that I ran across in passing not long ago. It fit in with what I already thought about managing risk and with how real people actually behave, so I did not make special note of it. It also fit in with what Keynes wrote about business risk in his “Treatise on Probability”.
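
    [One standard formalization of Min’s bankroll point, going back to Bernoulli and, in modern form, to Kelly (a textbook gloss, not Min’s own math): maximize the expected logarithm of wealth rather than expected profit. For a bet that pays b per unit staked with probability p and loses the stake with probability 1 − p, the wealth fraction f that solves

    \[
    \max_{f}\; p \ln(1 + f b) + (1 - p) \ln(1 - f)
    \]

    is f* = (pb − (1 − p))/b, typically far below the all-in stake that naive profit maximization would suggest.]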

  20. Min July 18, 2014 at 4:07 pm

    As for a lurking paradox, how about the St. Petersburg Paradox? 🙂
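
    [For readers who don’t know it: the St. Petersburg game pays 2^n if the first head appears on toss n, so its expected payoff diverges,

    \[
    \mathbb{E}[X] \;=\; \sum_{n=1}^{\infty} \frac{1}{2^{n}} \cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty ,
    \]

    even though hardly anyone would pay more than a few ducats to play. Bernoulli’s resolution, maximizing log wealth rather than expected payoff, is exactly the move sketched after Min’s comment 19 above.]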

  21. Tom Brown July 18, 2014 at 6:29 pm

    Alek, here’s the “hard core” (just two very simple items) of an economic theory:

    http://informationtransfereconomics.blogspot.com/2014/06/hard-core-information-transfer-economics.html

    Can you tell me where the morality comes into play there, for example?

  22. Kevin H July 18, 2014 at 8:18 pm

    I agree with the thrust of Popper’s historical notes about mathematics, but I don’t understand why he believes that ‘what-is’ questions are useless. From a philosophical point of view, were Socrates and Plato really just wasting their time wondering what justice or love is? The vast majority of historians of philosophy (and myself, though I’m hardly a historian of philosophy) would consider these discussions to be very important, I think. From an economic view, surely there’s at least some value to discussing questions such as ‘what is money’ and ‘what is a bank’, for example. I also note that Gödel’s incompleteness theorems were an absolutely foundational advance in logic that could only have happened after mathematics was axiomatized; although it took the foundations of mathematics in a different direction than logicists and formalists like Russell and Hilbert respectively had expected, it by no means disproved the value of their programs. The foundations of mathematics continues to be an important research area today.

    Also, Mr. Glasner, a theme on your blog is that economists today place too much emphasis on mathematization and axiomatization; what is it that you feel they most neglect in pursuit of this? History of thought and economic history would be the most obvious suggestions, I think.

  23. Min July 19, 2014 at 11:05 am

    There is a problem with “what is” questions in science, particularly in social sciences. What is anxiety, for instance? In terms of philosophy or clinical psychology, anxiety very much has to do with lived human experience. Science prefers operational definitions. So anxiety is reduced to a score on a test or questionnaire, or to the answer to the question, “On a scale of 1 to 10, how much anxiety are you feeling right now?” Such reductions are unsatisfying in a way, because they leave out the essence of the experience. OTOH, they have a certain usefulness, which is why we make them.

  24. Tom Brown July 19, 2014 at 1:07 pm

    Min, if we are to believe folks like Michio Kaku, then our brains are state machines, the states and structure of which can be recorded and even played back on other media. If that’s true, then “anxiety” might someday be identified as a class of brain states, observable by using a medical imaging device. Kaku describes recent progress which is pretty astounding: uploading false memories into mouse brains (to allow them to more easily navigate a maze they’ve never been in before), determining which number someone is thinking of by imaging their brain, etc.

    If that’s all true, and a copy of someone’s brain running on a computer is indistinguishable in its responses from the original human to all outside observers, wouldn’t that be strong evidence for Kaku’s assertion? I.e. that there’s nothing more to it than the state machine?

    Assuming (for the sake of argument) that Kaku is correct, then why can’t we, in principle, someday answer the question “What is anxiety?” I’m pretty sure you have an answer, but I’m curious what it is.

  25. Tom Brown July 19, 2014 at 2:43 pm

    David & Min, regarding my question to Alek above, and in light of Min’s comments about “wacko assumptions,” I think Jason’s theory can serve as a good concrete example to apply some of these ideas to.

    Now perhaps Jason turns out to be a crackpot (sorry Jason, I had to cover that possibility), and his theory is false, his logic is flawed, his interpretation of evidence is off or his assumptions are “wacko.” But still, this gives us a great opportunity to apply what you’re saying here in an actual case, don’t you think? Especially since the “hard core” of Jason’s theory is so brief and simple (see my question to Alek above), and it’s very different from any other economic theory (AFAIK), and he’s spent the last year (from its inception) comparing it against empirical data and existing mainstream theories (it’s important to Jason that his theory have a high degree of correspondence with existing mainstream theories).

    Now what can Jason’s theory add? Is it just a curiosity? Jason covers that here in a response to Sumner asking these same questions.

    Here’s a series of three brief recent posts which serve as good examples of him taking a look at the empirical data. In particular he compares a model of inflation resulting from his theory to the data and to forecasts based on TIPS spreads (I’ll just link to #3; there are links at the top of #3 and #2 to the preceding related posts):

    http://informationtransfereconomics.blogspot.com/2014/07/us-inflation-predictions.html

    And this is a particularly bold claim he makes in post #2:

    “That is a pretty startling piece of information. It means you likely can’t do any better than the information transfer model in predicting inflation in the medium term (5-15 years out).”

    In post #1, he claims he does about 5x better at forecasting inflation than one could do using TIPS spreads (that analysis was prompted by another question from Sumner).

    So Min, what do you make of Jason’s assumptions in light of your above comments? What do you make of his claims? I’m certainly not qualified to evaluate them: I’m neither an economist nor a scientist, but I’m a fan of Jason’s approach because it *seems* scientific to me (plus it’s a little bit easier for a layman such as myself to get up to speed on it). Jason claims that he’s not interested in “adding epi-cycles” to the theory, and he’s put himself out there in terms of providing a path to falsification (e.g. he makes a prediction for Canada in late 2015). Of course, it’s not his day job, so that helps! 😀

    Even if his theory is ultimately falsified, I wonder if this process he’s going through serves as a good model for how economic theories should progress and be evaluated. I’m ignorant of how this process has gone in the past in economics, so perhaps it’s not a particularly good example. I don’t know. As far as I can tell, it doesn’t happen much on econ blogs. Is that what’s going on in the econ journals I haven’t read?

  26. Blue Aurora July 20, 2014 at 1:41 am

    David Glasner: I see. Well, hasn’t it been the case since the early 1980s that you couldn’t go to any top Ph.D. programme in Economics without at least half an undergraduate major in Mathematics? J.M. Keynes and Alfred Marshall both read Mathematics as undergraduates at the University of Cambridge. I don’t mean to meander off-topic, sir, but…did you also get my last e-mail?

  27. Min July 20, 2014 at 3:38 am

    Tom, I am also unqualified to say much about Jason’s ideas. Kaku is a bit of a showman, which is not a criticism. Can we model the brain by finite state machines? I am pretty sure that we can. IIUC, finite state machines are Turing complete, so we can use them to model anything that performs what we call a computation. And we do know a good bit about the chemistry and neurophysiology of anxiety. I have seen a film where, back in the 1960s, Dr. Jose Delgado implanted an electrode in the brain of a bull to induce fear. Then he got into the bull ring with the bull and when the bull came at him — not really charging, but sort of trotting — Delgado pointed his sword at the bull and stopped him in his tracks. By pressing a button on the hilt of the sword Delgado sent a radio signal that activated the electrode. An impressive demonstration, eh?

    But anyway, I don’t think you ever get rid of the question of reductionism.

  28. Min July 20, 2014 at 4:38 am

    About reductionism, I thought of an analogy. Fourier analysis tells us that we can use tuning forks with amplification to reproduce the sound of the New York Philharmonic. That does not mean that the New York Philharmonic is a set of tuning forks. 😉
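
    [The theorem behind Min’s analogy: any sufficiently well-behaved periodic signal decomposes into pure tones,

    \[
    f(t) \;=\; \tfrac{a_0}{2} + \sum_{n=1}^{\infty} \big( a_n \cos n\omega t + b_n \sin n\omega t \big),
    \]

    each term being playable, in principle, by a single tuning fork. The decomposition is exact, yet no one mistakes the list of tones for the orchestra.]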

  29. Tom Brown July 20, 2014 at 4:09 pm

    Min, I like the bullfighting story. Reductionism: the interesting thing here is that it’s the opposite of Jason’s approach: he’s treating the macro economy in his theory as an emergent system:

    http://en.wikipedia.org/wiki/Emergence

    Probably “weak” rather than “strong” in that I doubt he’d claim that no micro properties remain at the macro level.

  30. Benjamin Cole July 21, 2014 at 12:05 am

    Excellent blogging.

    I first modeled the economy using…Fortran cards. If you are old enough to remember, then you are wearing reading glasses right now.

    Models, where to start? Cherry-picking is a big problem. Start and stop dates, etc. Ignore some nations. Always ignore Japan.

    Then what is inflation? Unemployment? Accurately measured?

    But the biggest clue that models are not really trustworthy: Models always yield results that mirror the political biases of the modelers.

    Sad to say, macroeconomics has become politics in drag.

    Why pretend?

    It is simply not PC for any right-wing economist to say tight money caused a depression/recession anymore. Milton Friedman did, three times (Great Depression, 1990s Japan, and 1958 USA recession).

    Ergo, no right-winger today will develop a model showing that tight money caused the 2008 recession.

  31. David Glasner July 23, 2014 at 2:55 pm

    Alek, I think value-neutrality corresponds roughly to the distinction between positive and normative economics. Even in normative economics, economists prefer what seem to them to be the weakest assumptions about values, namely that interpersonal utility comparisons are beyond the scope of economics, so that the only unambiguous policy recommendations that economists make are in favor of policies that result in Pareto-improvements over the status quo. There may be hidden value assumptions in the preference for Pareto-improving policies, but the motivation is to avoid making value judgments.

    Min, I think your conception of profit maximization has to be suitably adjusted to take into account the possibility of bankruptcy. In other words, I think your objection is merely a quibble about how to implement the appropriate optimization criterion, not an objection in principle.

    Kevin, I think that there is a certain ambiguity in Popper’s criticism of what-is questions. Obviously, when we talk about something we want to have some idea of what we are talking about. However, it is a mistake to think that there is any uniquely correct or valid definition of justice that we can discover by philosophical analysis. That idea was behind the Platonic notion of ideal forms, which Popper argued against. Yes, we want to have some idea of what money is and especially what money does, but it is certainly a mistake to think that money has some essential character that can be discovered by philosophizing about its ultimate essence.

    I am not against axiomatization, and I don’t say that it is not worthwhile, but I am against the current prejudice that only axiomatized models are legitimate and that economic theories must be axiomatized before they can be published in a decent economics journal. And I am not against mathematics; models can be mathematical without being axiomatized. But I certainly think that economics would benefit greatly if economists had a greater interest in historical problems and in the history of their discipline.

    Tom, Jason seems awfully smart, but I must admit that I haven’t been able to figure out what his model is all about.

    Blue Aurora, I’m not sure how much math is required by most Ph.D. programs, but there is no doubt that the amount of math that new economics PhDs are supposed to know has been steadily increasing since the 1960s. Keynes was an accomplished mathematician, as is shown by his great work on the theory of probability. I did get your email and apologize for not responding. I will try to get to it soon. Feel free to send me another one to remind me.

    Benjamin, One of the reasons that I started this blog was because I was upset at the extent to which economic commentary by professional economists was being driven by a narrow partisan political agenda. See my first posting on this blog July 5, 2011.

  32. Min August 6, 2014 at 9:17 am

    David Glasner:
    “Min, I think your conception of profit maximization has to be suitably adjusted to take into account the possibility of bankruptcy.”

    Thanks, David. I missed your reply.

    To take bankruptcy into account you should include a variable for what is at stake. Then instead of profit maximization you are going to end up with maximizing the return on investment or the like. No?


