Archive for the 'intertemporal budget constraint' Category

Roy Radner and the Equilibrium of Plans, Prices and Price Expectations

In this post I want to discuss Roy Radner’s treatment of an equilibrium of plans, prices, and price expectations (EPPPE) and its relationship to Hayek’s conception of intertemporal equilibrium, of which Radner’s treatment is a technically more sophisticated version. Although I have seen no evidence that Radner was directly influenced by Hayek’s work, I consider Radner’s conception of EPPPE to be a version of Hayek’s conception of intertemporal equilibrium, because it captures essential properties of Hayek’s conception of intertemporal equilibrium as a situation in which agents independently formulate their own optimizing plans based on the prices that they actually observe – their common knowledge – and on the future prices that they expect to observe over the course of their planning horizons. While currently observed prices are common knowledge – not necessarily a factual description of economic reality but not an entirely unreasonable simplifying assumption – the prices that individual agents expect to observe in the future are subjective knowledge based on whatever common or private knowledge individuals may have and whatever methods they may be using to form their expectations of the prices that will be observed in the future. An intertemporal equilibrium refers to a set of decentralized plans that are both a) optimal from the standpoint of every agent’s own objectives given their common knowledge of current prices and their subjective expectations of future prices and b) mutually consistent.

If an agent has chosen an optimal plan given current and expected future prices, that plan will not be changed unless the agent acquires new information that renders the existing plan sub-optimal relative to the new information. Otherwise, there would be no reason for the agent to deviate from an optimal plan. The new information that could cause an agent to change a formerly optimal plan would either affect the preferences of the agent, the technology available to the agent, or would somehow be reflected in current prices or in expected future prices. But it seems improbable that a change in preferences or technology would not also be reflected in current or expected future prices. So absent a change in current or expected future prices, there would seem to be almost no likelihood that an agent would deviate from a plan that was optimal given current prices and the future prices expected by the agent.

The mutual consistency of the optimizing plans of independent agents therefore turns out to be equivalent to the condition that all agents observe the same current prices – their common knowledge – and have exactly the same forecasts of the future prices upon which they have relied in choosing their optimal plans. Even should their forecasts of future prices turn out to be wrong, at the moment before their forecasts of future prices were changed or disproved by observation, their plans were still mutually consistent relative to the information on which their plans had been chosen. The failure of the equilibrium to be maintained could be attributed to a change in information that meant that the formerly optimal plans were no longer optimal given the newly acquired information. But until the new information became available, the mutual consistency of optimal plans at that (fleeting) moment signified an equilibrium state. Thus, the defining characteristic of an intertemporal equilibrium in which current prices are common knowledge is that all agents share the same expectations of the future prices on which their optimal plans have been based.

There are fundamental differences between the Arrow-Debreu-McKenzie (ADM) equilibrium and the EPPPE. One difference worth mentioning is that, under the standard assumptions of the ADM model, the equilibrium is Pareto-optimal, and any Pareto-optimum allocation, by a suitable redistribution of initial endowments, could be achieved as a general equilibrium (two welfare theorems). These results do not generally hold for EPPPE, because, in contrast to the ADM model, it is possible for agents in EPPPE to acquire additional information over time, not only passively, but by investing resources in the production of information. Investing resources in the production of information can cause inefficiency in two ways: first, by creating non-convexities (owing to start-up costs in information gathering activities) that are inconsistent with the uniform competitive prices characteristic of the ADM equilibrium, and second, by creating incentives to devote resources to produce information whose value is derived from profits in trading with less well-informed agents. The latter source of inefficiency was discovered by Jack Hirshleifer in his classic 1971 paper, which I have written about in several previous posts (here, here, here, and here).

But the important feature of Radner’s EPPPE that I want to emphasize here — and what radically distinguishes it from the ADM equilibrium — is its fragility. Unlike the ADM equilibrium, which is established once and for all at time zero of a model in which all production and consumption starts in period one, the EPPPE, even if it ever exists, is momentary, and is subject to unraveling whenever there is a change in the underlying information upon which current prices and expected future prices depend, and upon which agents, in choosing their optimal plans, rely. Time is not, as it is in the ADM model, a mere appendage, and, as a result, the EPPPE can account for many phenomena, practices, and institutions that are left out of the ADM model.

The two differences that are most relevant in this context are the existence of stock markets in which shares of firms are traded based on expectations of the future net income streams associated with those firms, and the existence of a medium of exchange supplied by private financial intermediaries known as banks. In the ADM model in which all transactions are executed in time zero, in advance of all the actual consumption and production activities determined by those transactions, there would be no reason to hold, or to supply, a medium of exchange. The ADM equilibrium allows for agents to borrow or lend at equilibrium interest rates to optimize the time profiles of their consumption relative to their endowments and the time profiles of their earnings. Since all such transactions are consummated in time zero, and since, through some undefined process, the complete solvency and the integrity of all parties to all transactions is ascertained in time zero, the probability of a default on any loan contracted at time zero is zero. As a result, each agent faces a single intertemporal budget constraint at time zero over all periods from 1 to n. Walras’s Law therefore holds across all time periods for this intertemporal budget constraint, each agent transacting at the same prices in each period as every other agent does.
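The single time-zero budget constraint just described can be written out explicitly; the notation below is mine, introduced purely for illustration:

```latex
% Agent i's single intertemporal budget constraint at time zero:
% p_t is the period-t price vector (common knowledge to all agents),
% x_t^i the planned net purchases, e_t^i the endowment.
\[
  \sum_{t=1}^{n} p_t \cdot \bigl( x_t^i - e_t^i \bigr) \le 0
\]
% Summing the binding constraints over all agents yields Walras's Law
% holding across all n periods at once:
\[
  \sum_{i} \sum_{t=1}^{n} p_t \cdot \bigl( x_t^i - e_t^i \bigr) = 0
\]
```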

Once an equilibrium price vector is established in time zero, each agent knows that his optimal plan based on that price vector (which is the common knowledge of all agents) will be executed over time exactly as determined in time zero. There is no reason for any exchange of ownership shares in firms, the future income streams from each firm being known in advance.

The ADM equilibrium is a model of an economic process very different from Radner’s EPPPE, because in the EPPPE, agents have no reason to assume that their current plans, even if they are momentarily both optimal and mutually consistent with the plans of all other agents, will remain optimal and consistent with the plans of all other agents. New information can arrive or be produced that will necessitate a revision in plans. Because even equilibrium plans are subject to revision, agents must take into account the solvency and creditworthiness of counterparties with whom they enter into transactions. The potentially imperfect creditworthiness of at least some agents enables certain financial intermediaries (aka banks) to provide a service by offering to exchange their debt, which is widely considered to be more creditworthy than the debt of ordinary agents, to agents seeking to borrow to finance purchases of either consumption or investment goods. Many agents seeking to borrow therefore prefer exchanging their debt for bank debt, bank debt being acceptable to other agents at face value. In addition, because the acquisition of new information is possible, there is a reason for agents to engage in speculative trades of commodities or assets. Such assets include ownership shares of firms, and agents may revise their valuations of those firms as they revise their expectations about future prices and their expectations about the revised plans of those firms in response to newly acquired information.

I will discuss the special role of banks at greater length in my next post on temporary equilibrium. But for now, I just want to underscore a key point: in the EPPPE, unless all agents have the same expectations of future prices, Walras’s Law need not hold. The proof that Walras’s Law holds depends on the assumption that every agent buys or sells each commodity at the same price at which every other transactor buys or sells that commodity. But in the intertemporal context, in which only current, not future, prices are observed, plans for current and future purchases and sales are made based on expectations about future prices. If agents don’t share the same expectations about future prices, agents making plans based on overly optimistic expectations about the prices at which they will be able to sell may make commitments to buy in the future (or commitments to repay loans financing purchases in the present) that they will be unable to discharge. Reneging on commitments to buy in the future or to repay obligations incurred in the present may rule out the existence of even a temporary equilibrium in the future.
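To see concretely why divergent expectations undermine the standard argument, note that each agent’s plan is budget-feasible only at that agent’s own expected prices (again, my notation, not Radner’s):

```latex
% Agent i plans against subjectively expected future prices \hat{p}_t^{\,i}:
\[
  \sum_{t} \hat{p}_t^{\,i} \cdot \bigl( x_t^i - e_t^i \bigr) \le 0
\]
% Summed over agents at the prices p_t that are actually realized,
% nothing forces
\[
  \sum_{i} \sum_{t} p_t \cdot \bigl( x_t^i - e_t^i \bigr) = 0
\]
% unless \hat{p}_t^{\,i} = p_t for every agent i and period t -- that is,
% unless all agents share the same (correct) price expectations.
```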

Finally, let me add a word about Radner’s terminology. In his 1987 entry on “Uncertainty and General Equilibrium” for the New Palgrave Dictionary of Economics (here is a link to the revised online version), Radner writes:

A trader’s expectations concern both future environmental events and future prices. Regarding expectations about future environmental events, there is no conceptual problem. According to the Expected Utility Hypothesis, each trader is characterized by a subjective probability measure on the set of complete histories of the environment. Since, by definition, the evolution of the environment is exogenous, a trader’s conditional probability of a future event, given the information to date, is well defined.

It is not so obvious how to proceed with regard to trader’s expectations about future prices. I shall contrast two possible approaches. In the first, which I shall call the perfect foresight approach, let us assume that the behaviour of traders is such as to determine, for each complete history of the environment, a unique corresponding sequence of price system[s]. . .

Thus, the perfect foresight approach implies that, in equilibrium, traders have common price expectation functions. These price expectation functions indicate, for each date-event pair, what the equilibrium price system would be in the corresponding market at that date event pair. . . . [I]t follows that, in equilibrium the traders would have strategies (plans) such that if these strategies were carried out, the markets would be cleared at each date-event pair. Call such plans consistent. A set of common price expectations and corresponding consistent plans is called an equilibrium of plans, prices, and price expectations.

My only problem with Radner’s formulation here is that he defines his equilibrium concept in terms of the intrinsic capacity of the traders to predict prices rather than in terms of the simple fact that traders form correct expectations. For purposes of the formal definition of EPPPE, it is irrelevant whether traders’ predictions of future prices are correct because they are endowed with the correct model of the economy or because they are all lucky and have randomly happened simultaneously to form the same expectations of future prices. Radner also formulates an alternative version of his perfect-foresight approach in which agents don’t all share the same information. In such cases, it becomes possible for traders to make inferences about the environment by observing that prices differ from what they had expected.

The situation in which traders enter the market with different non-price information presents an opportunity for agents to learn about the environment from prices, since current prices reflect, in a possibly complicated manner, the non-price information signals received by the various agents. To take an extreme example, the “inside information” of a trader in a securities market may lead him to bid up the price to a level higher than it otherwise would have been. . . . [A]n astute market observer might be able to infer that an insider has obtained some favourable information, just by careful observation of the price movement.

The ability to infer non-price information from otherwise inexplicable movements in prices leads Radner to define a concept of rational expectations equilibrium.

[E]conomic agents have the opportunity to revise their individual models in the light of observations and published data. Hence, there is a feedback from the true relationship to the individual models. An equilibrium of this system, in which the individual models are identical with the true model, is called a rational expectations equilibrium. This concept of equilibrium is more subtle, of course, than the ordinary concept of equilibrium of supply and demand. In a rational expectations equilibrium, not only are prices determined so as to equate supply and demand, but individual economic agents correctly perceive the true relationship between the non-price information received by the market participants and the resulting equilibrium market prices.

Though this discussion is very interesting from several theoretical angles, as an explanation of what is entailed by an economic equilibrium, it misses the key point, which is the one that Hayek identified in his 1928 and (especially) 1937 articles mentioned in my previous posts. An equilibrium corresponds to a situation in which all agents have identical expectations of the future prices upon which they are making optimal plans given the commonly observed current prices and the expected future prices. If all agents are indeed formulating optimal plans based on the information that they have at that moment, their plans will be mutually consistent and will be executable simultaneously without revision as long as the state of their knowledge at that instant does not change. How it happened that they arrived at identical expectations — by luck, chance, or supernatural powers of foresight — is irrelevant to that definition of equilibrium. Radner does acknowledge that, under the perfect-foresight approach, he is endowing economic agents with wildly unrealistic powers of imagination and computational capacity, but from his exposition, I am unable to decide whether he grasped the subtle but crucial point about the irrelevance of an assumption about the capacities of agents to the definition of EPPPE.

Although it is capable of describing a richer set of institutions and behavior than is the Arrow-Debreu model, the perfect-foresight approach is contrary to the spirit of much of competitive market theory in that it postulates that individual traders must be able to forecast, in some sense, the equilibrium prices that will prevail in the future under all alternative states of the environment. . . .[T]his approach . . . seems to require of the traders a capacity for imagination and computation far beyond what is realistic. . . .

These last considerations lead us in a different direction, which I shall call the bounded rationality approach. . . . An example of the bounded-rationality approach is the theory of temporary equilibrium.

By eschewing any claims about the rationality of the agents or their computational powers, one can simply talk about whether agents do or do not have identical expectations of future prices and what the implications of those assumptions are. When expectations do agree, there is at least a momentary equilibrium of plans, prices and price expectations. When they don’t agree, the question becomes whether even a temporary equilibrium exists and what kind of dynamic process is implied by the divergence of expectations. That, it seems to me, would be a fruitful way forward for macroeconomics to follow. In my next post, I will discuss some of the characteristics and implications of a temporary-equilibrium approach to macroeconomics.

 

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive despite the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to survive.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article: the presumably pseudonymous Avon Barksdale and blogger Jason Smith about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics is to mimic how physicists do physics. This methodological scolding is typically charming New Classical arrogance. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph.D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

“… quantum field theory itself has no content beyond analyticity, unitarity, cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.
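As a purely illustrative aside, the kind of textbook IS-LM equilibrium being discussed can be sketched numerically. The linear functional forms and every parameter value below are my own arbitrary assumptions, chosen only to show the mechanics; they come from none of the models or papers discussed above.

```python
# Illustrative linear IS-LM equilibrium (all parameter values assumed).
# IS:  Y = a + b*(Y - T) + I0 - d*r + G   (goods-market clearing)
# LM:  M/P = k*Y - h*r                    (money-market clearing)

a, b, T = 200.0, 0.75, 100.0   # consumption: autonomous part, MPC, taxes
I0, d = 300.0, 25.0            # investment: autonomous part, interest sensitivity
G = 250.0                      # government spending
k, h = 0.5, 50.0               # money-demand parameters
M_over_P = 400.0               # real money supply

# Write IS as Y(r) = A - B*r, then substitute into LM and solve for r:
A = (a - b * T + I0 + G) / (1 - b)   # IS intercept in Y
B = d / (1 - b)                      # IS slope magnitude (dY/dr = -B)
r_star = (k * A - M_over_P) / (k * B + h)
Y_star = A - B * r_star

print(f"equilibrium interest rate r* = {r_star:.3f}")   # 9.500
print(f"equilibrium output        Y* = {Y_star:.1f}")   # 1750.0

# Verify both markets clear at (Y*, r*):
assert abs(Y_star - (a + b * (Y_star - T) + I0 - d * r_star + G)) < 1e-9
assert abs(M_over_P - (k * Y_star - h * r_star)) < 1e-9
```

The closed-form solution exists here only because everything is linear; the point of the sketch is just that IS-LM determines output and the interest rate jointly from the two market-clearing conditions.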

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense that Avon and New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question begging, aka circular reasoning or petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
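The comparative-statics logic that Samuelson relied on can be stated compactly (my notation, for illustration):

```latex
% Equilibrium condition for an endogenous variable x given a parameter \alpha:
\[
  f(x^{*}, \alpha) = 0
\]
% By the implicit function theorem (assuming f_x \ne 0 at the equilibrium),
\[
  \frac{dx^{*}}{d\alpha}
    = -\,\frac{f_{\alpha}(x^{*}, \alpha)}{f_{x}(x^{*}, \alpha)}
\]
% The sign of this derivative -- the "meaningful theorem" -- is well defined
% only if the equilibrium exists, is locally stable, and the change in
% \alpha is small enough that the system settles at a nearby equilibrium.
```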

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which he can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which consumers can only guess. Their budget constraints are thus not real but totally conjectural based on their expectations of future prices. The optimizing Euler equations are therefore entirely conjectural as well, and subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem in a setting in which consumers don’t know what future prices will be and agents’ expectations of future prices are a) likely to be very different from each other and thus b) likely to be different from their ultimate realizations is a huge stretch. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.
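The contrast can be made explicit; the notation is mine and purely illustrative. In the static problem the constraint involves only known prices, while in the intertemporal problem it involves the agent’s subjective price expectations:

```latex
% Static consumer choice: endowment e, known prices p.
\[
  \max_{x} \; U(x) \quad \text{s.t.} \quad p \cdot x \le p \cdot e
\]
% Intertemporal choice: only p_1 is observed; \hat{p}_2, \dots, \hat{p}_n
% are the agent's subjective expectations of future prices.
\[
  \max_{x_1, \dots, x_n} U(x_1, \dots, x_n)
  \quad \text{s.t.} \quad
  p_1 \cdot (x_1 - e_1) + \sum_{t=2}^{n} \hat{p}_t \cdot (x_t - e_t) \le 0
\]
% The associated Euler equations, e.g.
\[
  u'(c_t) = \beta \, (1 + \hat{r}_t) \, \mathbb{E}_t\!\bigl[ u'(c_{t+1}) \bigr],
\]
% hold only relative to the conjectured prices \hat{p}_t and must be
% revised whenever expectations are revised.
```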

If I expect that the demand for my services will be such that my disposable income next year would be $500k, my consumption choices would be very different from what they would have been if I were expecting a disposable income of $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.
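The arithmetic of that example can be made concrete with a toy calculation; the commitment share and credit line below are assumed figures added for illustration, following the post’s $500k/$100k numbers:

```python
# Toy illustration of the post's example: plans made against expected
# income can leave an agent insolvent when income is realized lower.
expected_income = 500_000.0   # disposable income the agent expects next year
realized_income = 100_000.0   # what actually materializes

# Suppose (illustratively) the agent commits this year to expenditures and
# debt repayments equal to 80% of *expected* future income:
committed_outlays = 0.8 * expected_income   # 400_000
borrowing_capacity = 50_000.0               # assumed credit line

shortfall = committed_outlays - (realized_income + borrowing_capacity)
print(f"committed outlays:         {committed_outlays:,.0f}")   # 400,000
print(f"shortfall after borrowing: {shortfall:,.0f}")           # 250,000

# A positive shortfall means the commitments cannot be discharged -- the
# kind of default that, multiplied across many over-optimistic agents,
# the post argues can reverberate through the economy.
assert shortfall > 0
```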

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – say, like now, for instance — that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real empirically relevant physics. He also expands on and puts into a broader meta-theoretical context my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth on Nick Rowe’s blog between me and a commenter (with the pseudonym “Avon Barksdale,” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However, physicists do not require that every pion scattering amplitude be calculated from QCD. Some people do perform so-called lattice calculations, but many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about – an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD – confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory).

But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard-core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics – the most hard-core reductionist natural science!
In his post, Glasner repeated something that he had said before, and – probably because it was in the context of a bunch of quotes about physics – I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.


This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.
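The point about comparative statics applying only near a stable equilibrium can be sketched numerically with the textbook cobweb model, in which suppliers respond to last period’s price. This is a minimal illustration, not anything from the post, and all parameter values below are my own assumptions:

```python
# Cobweb model: comparative statics is only informative near a stable equilibrium.
# Demand: q = a - b*p; supply: q = c + d*p_prev (suppliers use last period's price).
# All parameter values here are illustrative assumptions.

def cobweb(a, b, c, d, p0, steps=50):
    """Iterate p_{t+1} = (a - c - d*p_t) / b and return the price after `steps` periods."""
    p = p0
    for _ in range(steps):
        p = (a - c - d * p) / b
    return p

def p_star(a, b, c, d):
    """Market-clearing price, from a - b*p = c + d*p."""
    return (a - c) / (b + d)

# Stable case (d/b < 1): a small displacement reverts toward p*, so comparing
# the old and new equilibrium (comparative statics) tells you where prices go.
print(round(cobweb(10, 2, 1, 1, p0=4.0), 6), p_star(10, 2, 1, 1))  # 3.0 3.0

# Unstable case (d/b > 1): the same small displacement explodes, and the
# comparative-statics prediction says nothing about where prices end up.
print(abs(cobweb(10, 1, 1, 2, p0=3.1) - p_star(10, 1, 1, 2)) > 1e6)  # True
```

In the stable case each period halves the deviation from equilibrium; in the unstable case each period doubles it, which is the formal content of the claim that the microeconomic laws are valid only in the neighborhood of a locally stable equilibrium.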

Go to Jason’s blog to read the rest of his important and insightful post.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
