The Microfoundations Wars Continue

I see belatedly that the battle over microfoundations continues on the blogosphere, with Paul Krugman, Noah Smith, Adam Posen, and Nick Rowe all challenging the microfoundations position, while Tony Yates and Stephen Williamson defend it, with Simon Wren-Lewis trying to serve as a peacemaker of sorts. I agree with most of the criticisms, but what I found most striking was the defense of microfoundations offered by Tony Yates, who expresses the mentality of the microfoundations school so well that I thought that some further commentary on his post would be worthwhile.

Yates’s post was prompted by a Twitter exchange between Yates and Adam Posen after Posen tweeted that microfoundations have no merit, an exaggeration no doubt, but not an unreasonable one. Noah Smith chimed in with a challenge to Yates to defend the proposition that microfoundations do have merit. Hence the title (“Why Microfoundations Have Merit”) of Yates’s post. What really caught my attention in Yates’s post is that, in trying to defend the proposition that microfounded models do have merit, Yates offers the following methodological, or perhaps aesthetic, pronouncement.

The merit in any economic thinking or knowledge must lie in it at some point producing an insight, a prediction, a prediction of the consequence of a policy action, that helps someone, or a government, or a society to make their lives better.

Microfounded models are models which tell an explicit story about what the people, firms, and large agents in a model do, and why.  What do they want to achieve, what constraints do they face in going about it?  My own position is that these are the ONLY models that have anything genuinely economic to say about anything.  It’s contestable whether they have any merit or not.

Paraphrasing, I would say that Yates defines merit as a useful insight or prediction into the way the world works. Fair enough. He then defines microfounded models as those models that tell an explicit story about what the agents populating the model are trying to do and the resulting outcomes of their efforts. This strikes me as a definition that includes more than just microfounded models, but let that pass, at least for the moment. Then comes the key point. These models “are the ONLY models that have anything genuinely economic to say about anything.” A breathtaking claim.

In other words, Yates believes that unless an insight, a proposition, or a conjecture can be logically deduced from microfoundations, it is not economics. So whatever the merits of microfounded models, a non-microfounded model is not, as a matter of principle, an economic model. Talk about methodological authoritarianism.

Having established, to his own satisfaction at any rate, that only microfounded models have a legitimate claim to be considered economic, Yates defends the claim that microfounded models have merit by citing the Lucas critique as an early example of a meritorious insight derived from the “microfoundations project.” Now there is something a bit odd about this claim, because Yates neglects to mention that the Lucas critique, as Lucas himself acknowledged, had been anticipated by earlier economists, including both Keynes and Tinbergen. So if the microfoundations project does indeed have merit, the example chosen to illustrate that merit does nothing to show that the merit is in any way peculiar to the microfoundations project. It also bears repeating (see my earlier post on the Lucas critique) that the Lucas critique only tells us about steady states, so it provides no useful information, insight, prediction or guidance about using monetary policy to speed up the recovery to a new steady state. So we should be careful not to attribute more merit to the Lucas critique than it actually possesses.
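
The mechanics of the Lucas critique are, in any case, easy to state without the surrounding apparatus: coefficients estimated from a reduced form are mixtures of structural parameters and the policy rule, so they shift when the rule shifts. A toy simulation makes the point (the structural equation, parameter values, and code are my own hypothetical illustration, not drawn from Lucas or from any of the posts under discussion):

```python
# Toy Lucas-critique illustration (hypothetical structural model).
# Agents set y_t = a * E_t[x_{t+1}] + noise, and policy follows x_{t+1} = rho * x_t,
# so under rational expectations the reduced form is y_t = (a * rho) * x_t + noise.
# The estimated slope of y on x then shifts when the policy rule (rho) changes,
# even though the structural parameter a is policy-invariant.
import numpy as np

rng = np.random.default_rng(0)
a = 2.0  # structural parameter, invariant to policy

def reduced_form_slope(rho, n=100_000):
    x = rng.normal(size=n)                       # observed state variable
    y = a * rho * x + 0.1 * rng.normal(size=n)   # y_t = a * E_t[x_{t+1}] + noise
    return float(np.polyfit(x, y, 1)[0])         # OLS slope of y on x

slope_old = reduced_form_slope(rho=0.5)  # old policy regime: slope near a*0.5
slope_new = reduced_form_slope(rho=0.9)  # new policy regime: same a, new slope
```

An econometrician who estimated the slope under the old rule and used it to forecast the effects of the new rule would be off by a factor of nearly two, which is the critique in miniature. Notice, though, that nothing in this toy requires the full DSGE apparatus, only the distinction between structural and reduced-form parameters.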

To be sure, in his Twitter exchange with Adam Posen, Yates mentioned several other meritorious contributions from the microfoundations project, each of which Posen rejected because the merit of those contributions lies in the intuition behind the one-line idea. To which Yates responded:

This statement is highly perplexing to me.  Economic ideas are claims about what people and firms and governments do, and why, and what unfolds as a consequence.  The models are the ideas.  ‘Intuition’, the verbal counterpart to the models, are not separate things, the origins of the models.  They are utterances to ourselves that arise from us comprehending the logical object of the model, in the same way that our account to ourselves of an equation arises from the model.  One could make an argument for the separateness of ‘intuition’ at best, I think, as classifying it in some cases to be a conjecture about what a possible economic world [a microfounded model] would look like.  Intuition as story-telling to oneself can sometimes be a good check on whether what we have done is nonsense.  But not always.  Lots of results are not immediately intuitive.  That’s not a reason to dismiss it.  (Just like most of modern physics is not intuitive.)  Just a reason to have another think and read through your code carefully.

And Yates’s response is highly perplexing to me. An economic model is usually the product of some thought process intended to construct a coherent model from some mental raw materials (ideas) and resources (knowledge and techniques). The thought process is an attempt to embody some idea or ideas about a posited causal mechanism or about a posited mutual interdependency among variables of interest. The intuition is the idea or insight that some such causal mechanism or mutual interdependency exists. A model is one particular implementation (out of many other possible implementations) of the idea in a way that allows further implications of the idea to be deduced, thereby achieving an enhanced and deeper understanding of the original insight. The “microfoundations project” does not directly determine what kinds of ideas can be modeled, but it does require that models have certain properties to be considered acceptable implementations of any idea. In particular the model must incorporate a dynamic stochastic general equilibrium system with rational expectations and a unique equilibrium. Ideas not tractable given those modeling constraints are excluded. Posen’s point, it seems to me, is not that no worthwhile, meritorious ideas have been modeled within the modeling constraints imposed by the microfoundations project, but that the microfoundations project has done nothing to create or propagate those ideas; it has just forced those ideas to be implemented within the template of the microfoundations project.

None of the characteristic properties of the microfoundations project are assumptions for which there is compelling empirical or theoretical justification. We know how to prove the existence of a general equilibrium for economic models populated by agents satisfying certain rationality assumptions (assumptions for which there is no compelling a priori argument and whose primary justifications are tractability and the accuracy of the empirical implications deduced from them), but the conditions for a unique general equilibrium are far more stringent than the standard convexity assumptions required to prove existence. Moreover, even given the existence of a unique general equilibrium, there is no proof that an economy not in general equilibrium will reach the general equilibrium under the standard rules of price adjustment. Nor is there any empirical evidence to suggest that actual economies are in any sense in a general equilibrium, though one might reasonably suppose that actual economies are from time to time in the neighborhood of a general equilibrium. The rationality of expectations is in one sense an entirely ad hoc assumption, though an inconsistency between the predictions of a model, under the assumption of rational expectations, and the rational expectations of the agents in the model is surely a sign that there is a problem in the structure of the model. But just because rational expectations can be used to check for latent design flaws in a model, it does not follow that assuming rational expectations leads to empirical implications that are generally, or even occasionally, empirically valid.
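
The gap between existence and uniqueness, and the further gap between equilibrium and convergence, can be seen in a deliberately artificial example (the excess-demand function below is invented purely for illustration, not taken from any model in the literature):

```python
# Hypothetical excess-demand function z(p) = -(p - 1)(p - 2)(p - 3).
# Existence of an equilibrium (some p with z(p) = 0) does not imply uniqueness:
# this toy market clears at three different prices.
import numpy as np

coeffs = [-1.0, 6.0, -11.0, 6.0]            # -(p-1)(p-2)(p-3), expanded
z = np.poly1d(coeffs)
equilibria = sorted(np.roots(coeffs).real)  # three market-clearing prices: 1, 2, 3

# Under naive tatonnement (raise the price when excess demand is positive),
# only equilibria with z'(p) < 0 are locally stable; the middle one is not.
stable = [p for p in equilibria if z.deriv()(p) < 0]
```

A tatonnement process started near the middle equilibrium is pushed away from it toward one of the outer ones, so even in this well-behaved toy market, which equilibrium (if any) the price-adjustment process finds depends on where prices start. That is one way of seeing why existence theorems alone license no claim that an actual economy is at, or heading toward, a general equilibrium.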

Thus, the key assumptions of microfounded models are not logically entailed by any deep axioms; they are imposed by methodological fiat, a philosophically and pragmatically unfounded insistence that certain modeling conventions be adhered to in order to count as “scientific.” Now it would be one thing if these modeling conventions were generating new, previously unknown, empirical relationships or generating more accurate predictions than those generated by non-microfounded models, but evidence that the predictions of microfounded models are better than the predictions of non-microfounded models is notably lacking. Indeed, Carlaw and Lipsey have shown that microfounded models generate predictions that are less accurate than those generated by non-microfounded models. If microfounded theories represent scientific progress, they ought to be producing an increase, not a decrease, in explanatory power.

The microfoundations project is predicated on a gigantic leap of faith that the existing economy has an underlying structure that corresponds closely enough to the assumptions of the Arrow-Debreu model, suitably adjusted for stochastic elements and a variety of frictions (e.g., Calvo pricing) that may be introduced into the models depending on the modeler’s judgment about what constitutes an allowable friction. This is classic question-begging with a vengeance: arriving at a conclusion by assuming what needs to be proved. Such question-begging is not necessarily illegitimate; every research program is based on some degree of faith or optimism that results not yet in hand will justify the effort required to generate those results. What is not legitimate is the claim that ONLY the models based on such question-begging assumptions are genuinely scientific.

This question-begging mentality masquerading as science is actually not unique to the microfoundations school. It is not uncommon among those with an exaggerated belief in the powers of science, a mentality that Hayek called scientism. It is akin to physicalism, the philosophical doctrine that all phenomena are physical. According to physicalism, there are no mental phenomena. What we perceive as mental phenomena, e.g., consciousness, is not real, but an illusion. Our mental states are really nothing but physical states. I do not say that physicalism is false, just that it is a philosophical position, not a proposition derived from science, and certainly not a fact that is, or can be, established by the currently available tools of science. It is a faith that some day — some day probably very, very far off into the future — science will demonstrate that our mental processes can be reduced to, and derived from, the laws of physics. Similarly, given the inability to account for observed fluctuations of output and employment in terms of microfoundations, the assertion that only microfounded models are scientific is simply an expression of faith in some, as yet unknown, future discovery, not a claim supported by any available scientific proof or evidence.

26 Responses to “The Microfoundations Wars Continue”

  1. 1 Blue Aurora January 2, 2014 at 8:09 pm

    Although you are not deeply familiar with the literature on decision theory, David Glasner…

    I personally think that the micro-foundations issue being discussed by academic economists (particularly, the ones that largely deal with macroeconomics) can be better resolved by replacing the underlying decision-theoretic foundation for their economic models. Perhaps this might bring about a better and more realistic descriptive theory, but one that might also allow for better guidance in policy prescription.


  2. 2 Tony Yates January 3, 2014 at 6:31 am

    An interesting reply, thanks. A few comments.

    First, I don’t at any point say that microfounded models are the only ‘scientific’ way of doing economics. You say this a couple of times towards the end of your post and it’s not true.

    Actually, I tried to avoid the question of whether any of the different ways of doing macro are ‘scientific’ or not, as that means different things to different people, and that wasn’t the question. I was simply trying to establish whether microfoundations had merit. Having merit doesn’t mean being scientific. And as I said, other ways of doing macro have merit too, and are useful in different circumstances, some of which I described. You give the impression that I argue that there is no use in other ways of doing macro, which is misleading.

    I don’t think it’s relevant whether Lucas was the first person to establish the Lucas Critique or not when one is simply establishing in what sense non-microfounded models fall down.

    You are incorrect to claim that the Lucas Critique ‘only applies to steady states’. Several papers investigate the Lucas Critique (some explicitly and pointedly, some in a roundabout way), by deploying models defined in terms of deviations from a single steady state.



  3. 3 David Glasner January 3, 2014 at 9:20 am

    Blue Aurora, You are right; I am not well-versed in decision theory, so you will have to spell out what you are thinking a little bit more explicitly before I can respond.

    Tony, Thanks for your response.

    It was this statement of yours that I was referring to:

    “My own position is that these are the ONLY models that have anything genuinely economic to say about anything.”

    That sounds like a pretty categorical dismissal of non-microfounded models. But you are right that I failed to acknowledge that you do concede that sometimes non-microfounded models may be useful, but the exceptions that you listed, atheoretical VAR models, modest policy interventions, etc., seemed to me quite limited, so I thought that I was not really misrepresenting the spirit of your position. But you are right that I was exaggerating and should have been more careful not to put words in your mouth. I think the statement that I quoted was a pretty strong one, though, so perhaps you might want to restate your position in a less categorical way.

    About the Lucas critique, I think the question is whether taking the Lucas critique into account is possible without adopting the entire microfoundations agenda. Would you mind identifying for me a couple of the papers you refer to that investigate the Lucas critique by deploying models defined in terms of deviations from a single steady state?


  4. 4 Tony Yates January 3, 2014 at 10:49 am

    I think the uses of empirical models are not limited; they are valuable. E.g., Valerie Ramey’s survey of the VAR literature on the fiscal multiplier, which has been a vital part of the debate about the legitimacy of fiscal consolidation recently.

    When I say that other approaches are not ‘economic’, I’m not saying anything that surprising; just that you enter the realm of the simply statistical. You can’t any more talk about the relationships as corresponding to a notion you have about what firms or consumers are doing in response to things [with any confidence, because of the Lucas Critique]. The simply statistical still leaves you able to do stuff. And in some circumstances might be the dominant strategy. [If you are pricing bonds and just need to forecast the central bank rate, perhaps a statistical forecast model is all that you need].

    As to some examples. Well, you can get quite a few by googling. But one that wouldn’t pop up right away is by Sargent and Surico. And it’s about how the relationship between money growth and inflation is a function of the policy rule chosen. [In which case an econometric model that didn’t incorporate the required structural relationships would have time varying coefficients]. Another paper by Cogley and Sargent [‘Drifts and volatilities’]… explains by assertion only – merely because this is considered self-evident to macro modellers now – how time varying parameter VARs could result from policymakers devising policy and updating their econometric models, and using rolling windows.



  5. 5 Jason January 3, 2014 at 11:06 am

    Very thoughtful post.

    I believe the Yates program, implemented in analogy in thermodynamics, would not have allowed for the ideal gas law (1834) to be considered “physics” until its derivation from kinetic theory in the 1850s, because the microfoundations (atoms) had already been introduced in the early 1800s (by e.g. Dalton).

    But in general, the Yates program misses a major point of why people study economics at all: many of the details of your agents (parameters, mechanisms) must be subsumed into a few parameters in a macro theory in order for the whole endeavor to be tractable.

    I like to point to another example in physics. Protons are made of quarks but all of the quarks, anti-quarks and gluons and their complex interactions in the underlying theory (QCD) end up as the mass and charge of the proton — just two numbers. For low energy physics, the fact that protons are made of quarks almost never arises and the theory at that scale is an entirely different theory (e.g. Chiral Perturbation Theory) that has almost no direct relation to QCD besides being consistent with one of its symmetries and could exist independently from a philosophical point of view.


  6. 6 Tom Brown January 3, 2014 at 11:34 am

    David, nice … I wish I could follow the whole post, but you bring up some interesting questions for a layman such as myself. I’m neither a scientist nor an economist, but I especially liked your last paragraph. I don’t know that I agree with it, but I like it nonetheless. This gets at something I find deeply troubling about economics in general.

    First let me say that I’m fully on board with the reductionist/scientism/physicalism way of looking at things. Not because I want to be, but because experience has taught me it’s foolish to fight it. Science wins every time. The “God of the Gaps” has smaller and smaller living quarters all the time. It sure wasn’t metaphysics that allowed mankind to make an iPhone.

    Right off the bat you have to say that the “science” of economics is at least as complex as meteorology (and thus suffers from all the same problems: butterfly effect, etc), but now you have to add in the multitude of unpredictable feedback loops which meteorology lacks. Say you could model all the human brains and an artificial world for them to inhabit in a huge computer: the predictions such a simulation would make would be very limited at best (model uncertainties, inaccuracies, and erroneous initial conditions just some of the problems) and that’s before you consider that the existence of such a simulation would cause a billion new non-linear feedback loops to be instantly created (people using the results of the model to try to gain an advantage) thus quickly invalidating the model! You can’t have a complete model of “The Matrix” inside “The Matrix.” …

    In light of these issues, don’t you think that you economists would have better luck taking a step back and studying the “economy” of chimps in a zoo? Fewer feedback loops to deal with! … In what possible sense could chimps in a zoo be a more complex system to study scientifically?


  7. 7 Tom Brown January 3, 2014 at 11:40 am

    O/T: David, what do you think? An easy $100 prize?

    “Reward for historical example of 4x on monetary base and low-inflation
    I will pay $100 US by paypal to the first person to comment below about a case where a central bank increased the monetary base by a factor of 4 or more in 5 years or less and did not get at least double digit inflation sometime during the 5 years after. Must provide links to reliable info on central bank balance sheet expansion and subsequent inflation.”


  8. 8 merijnknibbe January 3, 2014 at 2:33 pm

    @Tom Brown The German Central Bank after the hyperinflation of the twenties might come quite close. Didn’t check it exactly. But when hyperinflation was banished and new money was introduced (sorry, other way around) people started to rebuild the cash and ‘almost cash’ positions on their balance sheets which enabled the German central bank to multiply the amount of money without inflationary consequences. Same thing in the thirties, with the “MEFO-bills”.


  9. 9 merijnknibbe January 3, 2014 at 11:57 pm

    My take on some of the conceptual and empirical flaws of so-called micro-founded models:


  10. 10 W. Peden January 4, 2014 at 9:49 am

    A very interesting post!

    Mancur Olson, in “The Rise and Decline of Nations”, says that one wants macroeconomic theories to be (a) consistent with microeconomics and (b) based on reasonable assumptions about human behaviour. Of course, ‘consistency’ is an extremely weak logical relation; it’s much weaker than ‘deducibility’. For example, “Hawtrey was a man” and “Hawtrey was mortal” are propositions that are consistent, in that they obviously can both be true, but you can’t directly deduce the one from the other (you would need some other proposition like “All men are mortal”).

    He makes the further point that, largely due to the relative number of households/firms to investigate vs. the number of countries to investigate, microeconomic theories will almost certainly be better confirmed than macroeconomic theories, and thus if there is an inconsistency, it’s the macroeconomic theory we should change.

    That seems to me to be a reasonable way to think about microfoundations. In particular, it implies that we shouldn’t worry if a macroeconomic theory isn’t reducible (in the sense of being deducible from) to microeconomic assumptions. To carry on some analogies that people have been making with the natural sciences, it would be a worry if chemistry and biology were inconsistent (and a worry for biology if chemistry was better confirmed) but the fact that most of biology has not (yet?) been reduced to chemistry doesn’t mean that we should throw out biology.


  11. 11 David Glasner January 5, 2014 at 8:27 am

    Tony, Thanks for the clarification. I think that to say that unless you can be certain that a model is immune to the Lucas critique it is not economic is, well, extreme. I think the proper interpretation of the Lucas critique is not just to dismiss models whose coefficients may be time varying but to be aware of the possibility and to take that uncertainty into account. The models that are supposedly not subject to the Lucas critique are subject to their own shortcomings, but we don’t necessarily dismiss them (though I would never take seriously any representative agent model in the context of macroeconomic analysis). Every model is wrong and we have to choose among models based on some assessment of their relative strengths and weaknesses.

    Jason, Thanks for providing two very nice physical analogies, though I am afraid that the second one is over my head.

    Tom, I think that your argument from experience is fallacious as was pointed out by that famous nineteenth century philosopher Samuel Clemens.

    “We should be careful to get out of an experience only the wisdom that is in it and stop there lest we be like the cat that sits down on a hot stove lid. She will never sit down on a hot stove lid again and that is well but also she will never sit down on a cold one anymore.”

    About studying chimps in a zoo, do you think chimps engage in credit transactions?

    Nonetheless, I agree with your point about the complexity of human economies. That complexity makes me very dubious about how much progress is possible in economic theory.

    I have no idea whether there is any such historical example. So what? There’s a first time for everything.

    merijnknibbe, Thanks for the link.

    W. Peden, Thanks for reminding me of Olson’s book, which I read over 30 years ago. I think the inconsistency point is really the key. When people started talking about microfoundations early on in the 1960s, what they were trying to do was find some microeconomic foundation for the price rigidity assumption that, in the neoclassical synthesis version, seemed to be necessary to deduce the Keynesian macroeconomic results. That led to a variety of theories of how labor markets work, but it didn’t necessarily have to result in the kind of microfounded macroeconomic models that we have today.


  12. 12 W. Peden January 5, 2014 at 9:21 am


    I agree, and I’d add that, since microeconomics is itself subject to change, one generation’s microfoundations need not be another generation’s microfoundations. And there is no reason why an observational regularity discovered in macroeconomics can’t be a stimulus for microeconomic hypotheses.

    Olson is the first to point out the limitations of his theory in his book, but from the point of view of a philosopher it’s fascinating because Olson takes methodology VERY seriously. It’s one of the few books in economics to talk about colligation of predictions! Olson discussed the points of methodology with Mark Blaug (among others) and the result gives the book a very interesting flavour.

    His summaries of the flaws in various macroeconomic theories (from Keynesianism to monetarism to New Classical Macro) are very interesting as well.


  13. 13 Tom Brown January 5, 2014 at 5:53 pm

    David, thanks for your reply. I’m not sure I fully take the point of your Samuel Clemens example… I see what you’re saying, but it seems to me the situation is more along these lines: a world full of cats (both dumb and smart) and stoves, both hot and cold, and the stoves are changing states on a regular basis. If there’s any advantage to be had in leaping up on a stove when it’s cold… the cats as a whole will figure out how to tell the difference eventually (through natural selection, perhaps). That’s my intuition anyway. I don’t think it’s a one cat one stove world.

    I’m in no position to make a strong argument. My argument is one of practicality: I have limited intelligence, and limited time and resources and patience, etc. I have to go with what looks best to me, and I think seriously considering anything other than science as a way to really figure out the mechanics of how something works is a waste of time. Sometimes the science isn’t there and we’re forced to do with less (e.g economics). But to seriously think that something outside of a mechanical universe is happening here just seems like a sure loser to me. Of course I have no way to prove that! But I think there’s a LOT of circumstantial evidence.

    Also, regarding my feedback loops… though I’m neither an economist nor a scientist, I do have some experience with feedback loops from engineering. They aren’t always a show stopper, but for econ I think they well might be.

    So if you’re combining estimation with control, in the very best of circumstances you can separate those two problems w/o consequence, and solve them (for optimal solutions) independently, and have an optimal system as a result. The econ problem doesn’t come close to those circumstances though.

    Regarding the $100 contest, JP Koning beat you to it, but he’s sharing the prize money with Mark A. Sadowski (and Vincent was nice enough to throw in a $20 “finder’s fee” for me too). 😀


  14. 14 David Glasner January 6, 2014 at 9:33 am

    W. Peden, Very well stated. That’s the point I was trying to make. Microfoundations, if they are worth anything, should be generating new predictions and better predictions than we were making without them. Microfoundations as now practiced are simply telling us that we have to think about business cycles and fluctuations in GDP as optimal adjustments to changes in underlying conditions. That’s not progress; that’s just imposing your metaphysical view of reality on everyone else.

    Tom, The point is this. You are free to think that science will eventually come up with the answer to everything if you like. But if you are basing that belief on its past success, you are indeed reasoning just like Clemens’s cat. Actually Clemens was anticipated in his insight by David Hume, who pointed out that inductive inference is not based on logical necessity. Sometimes it works; sometimes it doesn’t. Sometimes the stove is hot; sometimes it’s not. There’s no cat that knows for sure whether the stove is a hot stove or a cold stove.

    Thanks for the reference to the separation principle. And congratulations on your finder’s fee. Koning and Sadowski sure do know their stuff.


  15. 15 Philo January 6, 2014 at 11:37 am

    “According to physicalism, there are no mental phenomena.” I believe the thesis of physicalism is that there are no *non-physical* phenomena. There may be mental phenomena, but if so they *are physical*. (Of course the terms ‘mental’ and ‘physical’, and perhaps also ‘phenomenon’, call for definition.)


  16. 16 teageegeepea January 6, 2014 at 4:43 pm

    You can add David Andolfatto as a belated addition to the list, and relatively favorable toward microfoundations.


  17. 17 Tom Brown January 7, 2014 at 11:42 am

    David thanks, this has been fun. You write:

    “You are free to think that science will eventually come up with the answer to everything if you like.”

    I don’t actually think that. I just believe (dispassionately) that the best explanation for this universe (which matches the evidence I’ve seen) seems to be that it’s a machine. No one part of that universe (e.g. humans) can hope to build a model of the whole thing, so we definitely won’t figure it all out.

    Ethical issues aside though, we ought to be able to build an extremely good model of a small insignificant thing like a human brain. And beyond that, the interaction of a bunch of humans (i.e. “The Matrix”). It’s probably just a few decades away. That should revolutionize psychology, political science, and probably economics too. But it won’t “solve” them. I’m sure we’ll continue to have trouble predicting the weather and earthquakes too!

    I need to do more reading to understand the philosophical problems of my view that you bring up. But you’re a very smart and well educated guy, so I’ll take your word for it for now! 😀

    It’s difficult for someone like me to take philosophy too seriously though: science gets results, and philosophy doesn’t. Anyone can look around them and see this is true.


  18. 18 Tom Brown January 7, 2014 at 12:35 pm

    Sorry to keep on this, but I just read this at Sumner’s:

    and Sumner’s response:

    So that plays into what I’m saying a bit. I underwent my own Bayesian shift from “it looks like part machine and part mysterious funny business” to “nope, it just seems to be a straight up machine.” I guess my priors on the “funny business” component weren’t that strong: I must have suspected that some sources were charlatans, or were emotionally invested (thus giving them a strong confirmation bias)!… or at least had nothing concrete to show for all their efforts and heartfelt beliefs.


  19. 19 David Glasner January 8, 2014 at 5:04 pm

    Philo, You may be right on the technical definition. I hope my definition was not misleading. My view on definitions (I think taken from Karl Popper) is that you only need to define terms when the lack of a definition could lead to misunderstanding or ambiguity.

    teageegeepea, Thanks for the link. From what little I know about Andolfatto, I can’t say that I’m surprised, though he seems like a very pleasant sort of guy.

    Tom, Thank you. Isn’t that what I’m here for?

    Do you think that you are a machine? I don’t. And if I find out that you are, I won’t let you comment any more on this blog. Besides, do you think a machine would really know what fun is?

    Obviously I don’t think that we will ever be able to build a model of the human brain. If I knew more philosophy, I would explain to you why Gödel’s Theorem means that it is impossible for a human being to build a model of his/her own brain. But don’t take my word for it.

    As for results, sometimes negative results are also important.


  20. Tom Brown January 10, 2014 at 12:38 pm

    David, I agree with Gödel’s theorem (I think… I’m no expert). That’s kind of what I mean by saying The Matrix can’t contain a complete model of The Matrix (I’m referring to the movie, of course).

    However, that doesn’t mean that we humans can never build a functionally equivalent human brain from non-brain material. No one human (without the help of other machines) may be able to understand fully how the brain operates, but he can build it and study its constituent parts. And who knows: machines we build may be able to fully understand one of our puny human brains. But how could we build such machines? I believe that’s just another avenue for evolution.

    Here’s a short story to illustrate what I mean: say your inner-ear nerve cells die and you lose your hearing. Well, fortunately for you, you can replace those nerves with functionally equivalent synthetic ones (a cochlear implant). Now you can hear again! All your other nerve cells are happy and completely fooled by the sensation (I realize that’s not quite true: cochlear implants are not perfect).

    Theoretically, what’s to stop that process? Now replace the next nerve cells up the chain with functionally equivalent ones, and so on. Eventually your entire organic brain is gone, completely replaced by synthetic nerves, and, if done carefully enough, you’re still there! But now copies can be made of you. You can be frozen in time. You can be “upgraded,” etc.

    Where’s the problem in that continuum? How have you not become a machine? Doesn’t that imply you were a machine to start off with?

    It’s not necessary for us to understand how the full brain works: just all of its constituent parts and their interconnections.


  21. Tom Brown January 10, 2014 at 12:42 pm

    … I guess what I’m saying is:

    Since we’ve already proven that some nerves can be replaced and you’re still you… then why not all of them?


  22. David Glasner January 17, 2014 at 9:03 am

    Tom, I don’t say that it’s not possible to build something resembling the human brain, though I wouldn’t call an organ capable of creating, to offer just a single example, Beethoven’s Third Symphony, puny. Nothing puny about that composition or the brain that composed it.

    You propose an interesting thought experiment. Do you think that the memories stored in the original brain would be retained by the synthetic brain? What do you think would happen when the synthetic brain went to sleep? Would it dream?


  1. Input-output analysis as an alternative to ‘micro-founded (not!)’ models | Real-World Economics Review Blog Trackback on January 3, 2014 at 11:55 pm
  2. Methodological Arrogance | Uneasy Money Trackback on February 26, 2014 at 8:35 pm
  3. Explaining the Hegemony of New Classical Economics | Uneasy Money Trackback on September 30, 2014 at 9:24 pm
  4. Dr. Popper: Or How I Learned to Stop Worrying and Love Metaphysics | Uneasy Money Trackback on May 19, 2019 at 9:15 pm

