Journal of Post Keynesian Economics, Fall 1994, Vol. 17, Issue 1, p. 45, 24p.
Montgomery, Michael R.
FULLY INARTICULATE MODEL ECONOMICS: OR, DOES MATH EQUAL MACRO?
[P]rogress in economic thinking means getting better and better abstract, analogue economic models, not better verbal observations about the world. [Lucas, 1980]
[We consider an infinite-horizon, overlapping-generations model of agents who live for two periods. A new technology appears in every period. . . . A given set of technologies . . . is available in period 0 . . . [T]he production function f has constant returns to scale. . . . A constant population of agents is born every period. These agents live for two periods. . . . The old generation in period 0 cares only about current consumption. We allow individuals of a given generation to borrow and lend among themselves. . . . Agents can work in only one vintage in each period of their lives. . . . The distribution of old agents in period 0 is given. . . . Only experienced old workers in a particular vintage can supply the skilled labor input in that vintage. We allow experienced workers to move freely. [Chari and Hopenhayn, 1991]
The above quotations neatly capture the warp and woof of a methodological movement rapidly coming to dominate the macroeconomic mainstream: a mushrooming monoculture emphasizing the "Fully Articulated Model Economy" (FAME)[1] as the essential tool of macroeconomic science.[2] "Model economies" have six defining features: (1) they are "analogue economies," to be "distinguished as sharply as possible in discussion from actual economies" (Lucas, 1980, p. 271); (2) they are based on a new conception of theory, "not a collection of assertions about the behavior of the actual economy but rather an explicit set of instructions for building a parallel or analogue system--a mechanical, imitation economy" (Lucas, 1980, p. 272); (3) their worth is determined by their ability to "simulate" or "mimic" the behavior of the actual economy: "A 'good' model . . . will not be exactly more 'real' than a poor one, but will provide better imitations" (Lucas, 1980, p. 272); (4) the language of science is mathematics,[3] and the "mechanical, imitation economy" is to be stated in mathematical form[4]; (5) models must be mathematically solvable to be useful--assumptions are justified primarily (if indeed not solely) from the standpoint of whether or not they allow the model to be solved mathematically[5]; (6) as a corollary, there is a pronounced hostility to, and denigration of, so-called "verbal" (i.e., conceptually oriented) analysis of economic problems.[6]
These assertions constitute a dramatic break from the history of economic thought. They represent a marked escalation of a makeover of economics into a branch of engineering[7] (at least in terms of methods). The very radicalness of the self-assured FAME movement strongly suggests that FAME rests on firm methodological foundations. One might presume that a carefully laid out, intricately argued methodological tract supporting FAME had been written, systematically laying down the ("no doubt overwhelming") case for such a remaking--not just of macroeconomic theory--but, indeed, a remaking of what the very phrase "macroeconomic theory" means. In fact, no such tracts exist.[8] And here is where an examination of the FAME movement must begin, rather than with an attempt to assess specific aspects of that movement's perspective on what constitutes macroeconomic "science." It is not that the case for FAMEs is poorly argued; it is that there is no argument, if by "argument" is meant a carefully developed, systematically reasoned justification, and not mere "opinion or belief" (Lucas, 1980; see note 6, above).[9] This is a peculiar state of affairs in any case, but it is particularly so for a school of thought that values "explicitness" and "rigor" seemingly above all else.[10] It is doubly peculiar behavior for a school of thought that has engaged in so much criticism of Keynesianism on methodological grounds (e.g., Lucas, 1980; Lucas and Sargent, 1981; Plosser, 1989). If FAME advocates believe that their movement is at long last making macroeconomics truly "scientific," then where is the carefully detailed methodological justification for their definition, and practice, of "science?"[11] The rationale is far from immediately obvious. How are the assumptions built into FAME models (typified by the second quote introducing this article) justified?
Until reasoned, systematic argument replaces assertion by authority, why should the profession accept what are essentially the opinions of FAME enthusiasts as the newly established methodological norm (assuming, of course, that the existence of an academic pecking order is not to be taken as sufficient reason in itself[12])?
The (nearly) full inarticulateness of the "model economists" on methodology makes it necessary to try to state what their position seems to be before one can even begin to evaluate it. Section 1 develops the methodology implicit in the FAME approach. Contrary to the likely opinions of many FAME advocates, it is not true that "it's all in Friedman (1953)" (section 2). Section 3 focuses on the trivialization of macroeconomic theory implicit in a movement that seeks to justify assumptions mainly on grounds of mathematical expediency.
This article is not an attack on mathematical formalism per se. Rather, it seeks to lay bare the weakness of the FAME case for making such formalism a necessary (and, increasingly, it almost seems, a sufficient) condition for something to be called "good (macro)economic science."
1. Filling a vacuum: the methodology of FAME economists(?)
There is no systematic statement of the FAME methodology. There are a number of scattered epigrams by FAME advocates from which, with some effort (and some risk of error), the perspective of the FAME movement can be inferred.[13] If the school's leaders applied their standards of "explicitness" to articulating their fundamental principles, the core of their argument would be summarized in the following seven statements[14]:
1. Primacy of model building to science: Science happens when people make predictions based on a theory and test them against the real world. The first step of science is theory construction. There is no such thing as a "realistic theory"; the phrase is a contradiction in terms.[15] The first, essential step to evaluating (perhaps even to constructing) a theory is to "build" a model within which the theory can be assessed. But assessed how?
2. Prediction the only test of a theory: There is only one reliable way to assess a theory: to compare its "fruits"--its predictions--with reality. Science is essentially a "back-and-forth" dialog between theory (model), prediction, comparison with reality, revised theory (model) and prediction, return to reality, and the like. This is how knowledge grows--it is an iterative process.[16]
3. Clarity of statement the key to useful prediction: Given this iterative process, a key necessary condition for science to progress is that useful, testable (contradictable) propositions emerge from theory (models). This process is immensely aided if, first, the theory is stated with maximum precision so that implications are as transparent as possible; and, second, resulting predictions are stated with similar clarity so that how to test the predictions is as clear as possible.[17] Irrelevant issues may distract the investigator if a theory is stated "fuzzily." Confusion about a theory's implications may cause the investigator to perform the wrong test. The logical chain leading from theory (model) to testable implication to critical test requires an explicit and clearly stated model.
4. Mathematics the key to clarity of statement: A mathematical statement is the most precise, explicit, and rigorous statement possible. An overwhelming majority of economists believe the precision and economy of mathematical expression is unrivaled. Mathematical "language" performs the great service to science of linguistic cleansing and streamlining.[18] By eliminating ambiguity, mathematical forms of expression promote and speed the scientific process.[19] The "elegance" of mathematics that mathematically inclined economists praise is a virtue not for its own sake but because of its utility.[20] In science, mistakes occur regularly. Mathematical expression offers strong safeguards against misstatement and error.[21]
5. Lack of clarity fatal flaw of "verbal" economics: The history of "literary" or "verbal" economics demonstrates the grave limitations of a nonmathematical approach. While it cannot be denied that advancements have been made, the long history of "verbal" economics is mainly one of equivocation. People slip in hidden assumptions by not being "explicit" and "rigorous" about their starting points.[22] For example, the first attempts to "carefully" model expectations quickly led to a revolution in macroeconomics that is still playing itself out today.[23] Precise, mathematical expression of the problem of expectations formation forced economists to discover hidden, irrational assumptions in then-standard theory, which in turn led to new theories, new discoveries, and major advances in the field. Mathematical-modeling technologies have advanced to where economists no longer need to rely on wispy, inconclusive, and essentially obsolete nonmathematical modes of investigation to do macroeconomics.[24]
6. Standards of useful (mathematical) model building: Not all mathematical models are equally useful to the scientific process. What are the standards of "useful" model building? How do we know when a model has been constructed so as to be potentially useful?
6a. Not "realism": all models "unreal": Usefulness certainly does not depend on the badly overworked concept of "reality." All models are abstractions, "patently unreal."[25] A model cannot be "tested" on the basis of the "reality" of its assumptions: a model's assumptions are "unrealistic" by definition. Far from being a flaw, such "unreality" is an essential trait of a useful model. Nothing contributes more to the worth of a model than the degree of its "unreality" (abstractness).
6b. Choosing useful abstractions: sound "microfoundations"? How do we distinguish between "good" and "bad" abstractions? Economists prefer assumptions with clear choice-theoretic foundations involving rationality, optimization, and the like.
6c. But prediction still sole fundamental criterion: Strictly speaking, however, "sound microfoundations" is not a methodologically valid criterion.[26] The only way to evaluate a theory (model) is to derive its implications and compare them with the real world. The only proof of the pudding is in the eating. "The more dimensions on which the model mimics the answers actual economies give to simple questions, the more we trust its answers to harder questions" (Lucas, 1980, p. 272).
6d. A "useful" abstraction is defined as one yielding "useful" predictions: Thus, there is a single fundamental standard for evaluating a model ex ante--does it speed the scientific process? That is, does it yield precise, testable, interesting predictions? Nothing else matters: in particular, the (non)issue of the "realism" of assumptions is to be rejected as a criticism without content (since all assumptions are "false" or "unreal" by definition).
7. All of which leads to FAME: In sum, "precise, testable predictions" are both essential to science and most fruitfully generated through mathematical methods. The "realism" of a model's assumptions is a useless, silly notion. It therefore follows that assumptions should be chosen with the idea of arriving at precise, mathematically stated predictions that can then be evaluated via comparison with real-world events. This is precisely the research strategy underlying the "Fully Articulated Model Economy."
Thus Articulateth Minnesota (or thus mighteth Minnesota articulate if someone could only talk them into it). Once "explicitly" stated, the reader probably notices a few loose ends. What are the tradeoffs imposed when all theories must be developed and stated "rigorously?" Is the "efficiency" of the mathematical method a pure gain? Are there no hidden assumptions implicit in mathematical statements? Do the advantages of mathematical analysis in some microeconomic problems carry over wholesale into macroeconomic theory? Examination of these points (and many others)--so worthy of careful examination and so thoroughly unexamined in the economic "mainstream"--must await another day.[27]
The more modest objective of this article is to focus on two key questions: (1) Is there an essential premise of the FAME movement, without which that movement cannot sustain itself? (2) How persuasive are the rationales for such a premise? A focus on these issues leads not to mathematical chauvinism per se but instead to the precondition that makes mathematical chauvinism plausible: specifically, to the view that, since "all assumptions are 'unreal,' " we might as well choose them so as to make problems amenable to a "superbly efficient" formal (mathematical) treatment. This view--and its adverse impact on present-day macroeconomics--is examined below.
2. FAME and Friedman
Where in the broader literature on economics methodology is one to find arguments from which one might deduce a supportive case for FAME? Milton Friedman's "The Methodology of Positive Economics" (1953) is a natural starting point. While a cynical observer might plausibly proclaim the complete demise of methodological inquiry within the modern mainstream, an optimist would merely argue that Friedman's article is the only methodological piece read anymore by mainstream economists. For the ambitious young practitioner, Friedman's article contains all the answers to all the questions on methodology that any budding FAME economist will ever need.[28]
Or does it? Economists with FAME leanings are prone to use Friedman's methodological writings aphoristically, "as Dennis Robertson did Lewis Carroll's" (Lucas, 1980, p. 276).[29] Kim (1988) is representative of the now-standard view of Friedman's methodological position:
[Tobin's] criticisms are leveled mainly at the EBCT's assumptions. . . . If one follows Friedman's (1953) famous instrumentalistic methodology, what matters is not assumptions but predictions. Thus, assuming that equilibrium business cycle theorists accept the Friedman methodology, it would not be a direct criticism of the EBCT that its assumptions are unrealistic. [Kim, 1988, pp. 8-9, emphasis added]
Is there an argument for FAME in Friedman's methodology paper? While a cursory reading of Friedman supports such an interpretation, a jarring note to the contrary is Professor Friedman's own research record.[30] In his theoretical works (e.g., Friedman, 1969, 1971), Friedman goes to extraordinary lengths to justify his assumptions, not by reference to whether these generate "useful" predictions, but rather by whether they capture the essential aspects of a problem while abstracting from as many nonessential aspects of it as possible. Does Friedman contradict his own paper's methodological advice in practice, or are we getting only a portion--the conveniently FAME-affirming portion--of his argument?
"The Methodology of Positive Economics" reveals that the latter proposition is closer to the truth.[31] Friedman does make strong statements--especially early in his essay--that clearly can be taken as offering support for a FAME approach to macroeconomics. For example:
[A] theory cannot be tested by comparing its "assumptions" directly with "reality." Indeed, there is no meaningful way in which this can be done. Complete "realism" is clearly unattainable, and the question whether a theory is realistic "enough" can be settled only by seeing whether it yields predictions that are good enough for the purpose in hand or that are better than predictions from alternative theories. [Friedman, 1953, pp. 40-41]
This suggests that a theory can be "tested" only by running a "critical experiment,"[32] an ex post standard for evaluating a theory. There is (according to this standard) no fully satisfactory way to assess a theory before the key test is run.
The problem is that elsewhere in his paper Friedman introduces a second, competing ex ante standard by which assumptions can be evaluated (although Friedman does not develop his argument to the point where he creates a clear-cut competitor to his ex post standard for assessing a theory's predictive power). This ex ante standard furnishes the means for evaluating a theory prior to testing its predictive power. It involves what Maki has labeled an "essentialist" criterion, a concern with whether a theory captures the "important element[s]" of the problem at hand.[33] The ex ante standard is introduced early (Friedman, 1953, p. 7), but the critical passage[34] fleshing out the idea is in a section about the "ideal types" of perfect competition and monopoly:
In analyzing the world as it is, Marshall constructed the hypothesis that, for many problems, firms could be grouped into "industries" such that the similarities among the firms in each group were more important than the differences among them. These are problems in which the important element is that a group of firms is affected alike by some stimulus. . . .
But this will not do for all problems: the important element for these may be the differential effect on particular firms.
The abstract model corresponding to this hypothesis contains two "ideal" types of firms: atomistically competitive firms, grouped into industries, and monopolistic firms. . . .
. . . The ideal types are not intended to be descriptive; they are designed to isolate the features that are crucial for a particular problem. . . . Everything depends on the problem; there is no inconsistency in regarding the same firm as if it were a perfect competitor for one problem, and a monopolist for another. [Friedman, 1953, pp. 35-36]
Does a standard judging theories (models) by whether they "isolate the features that are crucial for a particular problem" conflict with a standard saying that theories are to be evaluated solely by whether they yield "predictions that are good enough for the purpose in hand?" Friedman does not think so:
What is the criterion by which to judge whether a particular departure from realism is or is not acceptable? Why is it more "unrealistic" in analyzing business behavior to neglect the magnitude of businessmen's costs than the color of their eyes? The obvious answer is because the first makes more difference to business behavior than the second; but there is no way of knowing that this is so simply by observing that businessmen do have costs of different magnitudes and eyes of different color. Clearly it can only be known by comparing the effect on the discrepancy between actual and predicted behavior of taking the one factor or the other into account. [Friedman, 1953, pp. 32-33]
Do economists, however, really approach the "theorizing" process completely tabula rasa? Or do we begin with some prior facts about the problem under investigation, facts that can guide us in the selection of the body of assumptions making up a particular theory (model)?[35] Is it true that "the proof of the pudding is in the eating," even if we know the pudding is made of mud, bat dung, strychnine, and arsenic? Or is it permitted for culinary scientists to draw some inferences based on what we already know about these substances? Clearly there are circumstances where the question of what the ingredients "should be" is in dispute; for example, if the proposed pudding is to be made out of ginger and self-rising flour, milk but no eggs, we might reasonably await the verdict of the oven. But should the fact that there are some circumstances where a critical test needs to be performed before conclusions are drawn lead one to conclude that all circumstances fall into this category?
Suppose I construct an economic theory that assumes that firms strive to maximize their total costs, or that people form expectations about the future exclusively by consulting their Ouija boards. How--that is, by what standard--are we to evaluate such a theory? Do we, acting as methodologically principled economists and "careful social scientists," defer judgment while awaiting the running of the "critical experiment?" Of course not, but why not? Because we know in advance that any implications coaxed from such a model, while perhaps fascinating and ingeniously derived, apply only to a world where people systematically throw their money away and where people don't care about (or can't even begin to figure out) the future. Implications pertaining to such a world are not particularly interesting to social scientists (if they seek to understand the world we live in).
It is not that these assumptions are "unrealistic"--rather, it is the kind of "unrealism" exhibited by the model that is the problem. The model systematically "abstracts" from characteristics that are essential to economics as such--strictly speaking, there is no "economics" consistent with these assumptions. We require no "critical experiment" in order to decisively reject such a framework: It is hopelessly "unreal," utterly inappropriate for analyzing the actual consequences of human action in the real economy.
Friedman would respond that such an evaluation process is not inconsistent with his methodology:
[W]hat are called the assumptions of a hypothesis can be used to get some indirect evidence on the acceptability of the hypothesis in so far as the assumptions can themselves be regarded as implications of the hypothesis, and hence their conformity with reality as a failure of some implications to be contradicted, or in so far as the assumptions may call to mind other implications of the hypothesis susceptible to casual empirical observation. [Friedman, 1953, p. 28]
But to conjure up a connection between so-called "indirect evidence" and the body of the theory itself ("the implications of the hypothesis") is purely arbitrary; what we really have here is an attempt to finesse the thorny fact that prior knowledge of a general type can be used, under some conditions, to pass judgment on a theory and its assumptions before the critical test is run. Friedman hedges in this direction, but ultimately he backs away and assigns such means of evaluating theories to a definitely inferior status. In a passage[36] concerning how prior experience regarding the success of the self-interest assumption causes economists and sociologists to differ, Friedman argues:
Of course, neither the evidence of the economist nor that of the sociologist is conclusive. The decisive test is whether the hypothesis works for the phenomena it purports to explain. But a judgment may be required before any satisfactory test of this kind has been made, and, perhaps, when it cannot be made in the near future, in which case, the judgment will have to be based on the inadequate evidence available. [Friedman 1953, p. 30]
This attempt to reconcile things is not successful. Friedman's standard, stripped of the protective camouflage of this particular example and applied generally, would imply that, before passing final ("scientific") judgment on any theory, we must first perform a critical test on it. But the same thing holds for its billions of cousins that also can be strung together by making various arbitrary assumptions (e.g., "human beings seek to maximize their marble collections," "human beings seek to maximize the number of miles they travel in their lifetimes," etc.). Only in a world where time was literally unlimited could such a standard be seriously contemplated.[37]
The point is not to debate Friedman, but rather to show that FAME methods cannot be strictly supported from Friedman's essay. Friedman does believe that, ultimately, the proper methodological standard is whether "the hypothesis works" and that "a theory cannot be tested by its assumptions." But it is equally clear that Friedman believes it is admissible, and indeed of great importance, to take seriously the question of the "limits of validity" of a theoretical construct. Indeed, this point is stressed in the conclusion to his paper:
[U]ndue emphasis on the descriptive realism of "assumptions" has contributed to neglect of the critical problem of determining the limits of validity of the various hypotheses that together constitute the existing economic theory in these areas. The abstract models corresponding to these hypotheses have been elaborated in considerable detail and greatly improved in rigor and precision. . . . But, if we are to use effectively these abstract models . . . , we must have a comparable exploration of the criteria for determining what abstract model it is best to use for particular kinds of problems, what entities in the abstract model are to be identified with what observable entities, and what features of the problem or of the circumstances have the greatest effect on the accuracy of the predictions yielded by a particular model or theory. [Friedman, 1953, p. 42]
Friedman does not state it explicitly, but it is transparent that no serious attempt to identify "entities in the abstract model" with "observable entities" in the actual economy can be carried out without raising questions pertaining to the appropriateness of assumptions--which aspects of reality they are capturing, which aspects they are assuming away, and whether a framework is a conceptually appropriate vehicle for addressing a given problem.[38] This methodological perspective, however, is anathema to the FAME movement, where models typically are introduced by a lengthy, disjointed series of undefended (and usually undiscussed) assumptions (e.g., the second quote opening this article), with minimal attempt to link them to attributes of the actual economy. The defense of such procedures on grounds that "what matters is not assumptions but predictions" may reflect the opinions of FAME leaders. But no justification for such methods can be found in Friedman's "Methodology of Positive Economics."[39]
3. FAME's fruits
The lack of a methodological foundation for the FAME approach does not prevent advocates from making extravagant claims about the value of their revolution to the cause of "scientific macroeconomics." Indeed, it is very clear that FAME practitioners regard their methods as doing for macroeconomics approximately what the discovery of calculus did for physics; that is, they regard them as the essential prerequisites for the proper pursuit of macroeconomic truth. Nowhere is this more clear than in several pieces designed to popularize the FAME research strategy. For example, Prescott begins a prominent article of this genre as follows:
Economists have long been puzzled by the observations that during peacetime industrial market economies display recurrent, large fluctuations in output and employment over relatively short time periods. . . . These observations are considered puzzling because the associated movements in labor's marginal product are small.
These observations should not be puzzling, for they are what standard economic theory predicts. . . . Moreover, standard theory also correctly predicts the amplitude of these fluctuations, their serial correlation properties, and the fact that the investment component of output is about six times as volatile as the consumption component. [Prescott, 1986, p. 9]
This is fine rhetoric, well pitched for maximum impact on its target audience: economists who think of themselves as scientists, in the same sense as do physicists, chemists, biologists, and astronomers. The promise is to explain the unexplained, and indeed even "the unexplainable" (for so the author suggests has been the view of many regarding the business cycle). Prescott confidently declares that little more than "standard economic theory" is required to do the job. "All this value from just a standard package--Now that's efficiency!" one can almost hear the crowd exclaim. But exactly what is it that the author considers to be "standard economic theory"? This "standard theory" is interesting not so much in what it does but rather in what it assumes away. It is interesting indeed to see, straight from a leader of the movement, what FAME advocates feel stands alone and requires no explanation or justification.
Prescott's "model economy" is built on the Solow-style growth theory models of the 1960s. There is a single aggregate production function producing a single type of output under constant returns to scale, using two inputs--a homogeneous labor unit and a homogeneous capital unit. There is also a single type of shock--a technology shock.[40]
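In standard notation, the skeleton of such a setup might be sketched as follows. (This is a stylized reconstruction for illustration only; the particular symbols $z_t$, $\theta$, $\delta$, and $\rho$ are generic choices, not necessarily Prescott's exact specification.)

```latex
\begin{align*}
Y_t &= z_t\,K_t^{\theta} H_t^{1-\theta}
  && \text{(single CRS production function; homogeneous capital $K$, labor $H$)}\\
K_{t+1} &= (1-\delta)K_t + I_t
  && \text{(accumulation of the single capital good)}\\
\ln z_{t+1} &= \rho \ln z_t + \varepsilon_{t+1}
  && \text{(the single, serially correlated technology shock)}
\end{align*}
```

Everything that follows in the "model economy"--the representative household's problem, the simulated moments--operates within this three-equation skeleton.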
The problem is how to model in a dynamic context and get an interpretable result. The solution is to strip the model down to a heretofore unprecedented degree (at least in macroeconomics). A single commodity is consumed by a homogeneous "agent," who typically proxies for all other such "agents" in the economy.[41] The extent to which coordination problems--which some might see as being central to macroeconomics--can be handled in this framework is an issue thrust aside in the rush to obtain an "explicit" mathematical solution. While the typical firm's problem is to maximize profits, "[t]he household's problem is more complicated, for it must form expectations of future prices" (Prescott, 1986, p. 13). Apparently firms are not to be burdened with the difficulties of forming expectations of future prices (a neat solution to Keynes' "animal spirits" problem).
Things continue on in the same surrealistic vein. In forming its expectations,
a household knows the relation between the economy's state . . . and prices . . . Further, it knows the process governing the evolution of the per capita capital stock, a variable which, like prices, is taken as given. [Prescott, 1986, p. 13]
Households just "know" these things: one will search in vain for an explanation of how or why. Does a "household" armed with such knowledge constitute the kind of "realism" necessary to profitably analyze macroeconomic events? What aspects of macroeconomics are being suppressed via such an assumption? Are these aspects central enough to macroeconomics so that anything that now follows is to be treated as conceptually highly suspect? If the world is uncertain--that is, nonergodic (see Davidson, 1991)--can households possess such knowledge? No one feels obliged to address these questions, since "what matters is not assumptions but predictions" (Kim, 1988, pp. 8-9). But the "Friedman card" cannot be played--at least not in this context (see the previous section).
After traveling down these paths, it seems almost anticlimactic that "Expectations are rational" (Prescott, 1986, p. 13), but still it is worth restating a point made (by critics) and ignored (by FAME advocates and their predecessors) many times since the early 1980s: In a world where information about the future exists but is costly to obtain, the assumption of rational expectations acquires a decidedly problematic tinge. And it is not enough to state that "this doesn't change things in principle" since "it's just another optimization problem"[42]--in a macroeconomic context, the consequences are potentially as explosive as the "rational" expectations assumption itself.[43] The real kernel, of course, is that adding another layer of optimization to the already mathematically complex problem tilts the model outside the range of "tractability," ending the quest for an "explicit" solution. There will then be no predictions--a serious problem if "what matters is not assumptions but predictions."[44]
Since "what matters is not assumptions but predictions," the next step in the process--simulation of the solved, parameterized model--is an interesting exercise. If, however, assumptions actually do matter for their meaning, then the exercise is meaningless in principle, since no evidence is furnished of a connection between the model and "reality." Nevertheless, the exercise is carried out, and, indeed, the model-plus-real-world-parameters do "mimic" (somewhat) the cyclical behavior of the actual economy (would we have heard about the result otherwise?). This--a "successful" simulation--is considered a crowning achievement (on camp-Friedman grounds) of the FAME movement (the reasons have been explored above).[45]
Prescott ends his piece with a triumphant flourish, eloquently boasting of the FAME movement's accomplishments in furnishing explanations of real-world patterns:[46]
Economic theory implies that, given the nature of the shocks to technology and people's willingness and ability to intertemporally and intratemporally substitute, the economy will display fluctuations like those the U.S. economy displays. Theory predicts fluctuations in output of 5 percent and more from trend. . . . Theory predicts investment will be three or more times as volatile as output and consumption half as volatile. Theory predicts that deviations will display high serial correlation. In other words, theory predicts what is observed. Indeed, if the economy did not display the business cycle phenomena, there would be a puzzle. [Prescott, 1986, p. 21]
But in truth what "theory predicts" is actually more like the following[47]: In an ergodic world with a single production function with constant returns to scale, homogeneous labor and capital, identical "households" and a single "firm"; where "households" form expectations of future prices but "the firm" doesn't; where "households" know the relation between the "state" of the economy and future prices; where they also know the "process governing the evolution of the per capita capital stock"; where expectations are "rational" and all "available" information is costless; where uncertainty is not that big a deal--then, in this world, and in no other world, we can explain why cyclical movements occur.
That is all that we know. What else are we to conclude until the assumptions of such a model have been carefully and thoroughly explored and examined, with the focus on a compare-and-contrast exercise that switches back and forth between the actual economy and the model, zeroing in on which aspects of the economy are (hypothesized to be) central to macroeconomic questions, and which aspects are (hypothesized to be) peripheral?[48] In the previous section we have seen that, when it comes to modeling assumptions, "the proof of the pudding" is most definitely not "in the eating," at least not if Friedman's methodology is to be the standard. Without Friedman, where is the justification for the approach?
Mathematical methods are here to stay in economics, and (in my view) this is, on balance, a development to the good. There are classes of problems--compact, well-defined problems for which the crucial issues are well understood and easily quantified--where mathematical analysis can quickly and neatly yield a definitive solution to an issue that otherwise might have remained controversial for years.[49] But few problems in economics--and especially in macroeconomics--are so tightly defined. It is not enough to model with one's eye on mathematical tractability and await the inevitable simulation experiment to determine the worth of one's project. As Friedman points out in the famous methodology paper that so many cite and (apparently) so few read:
[I]f we are to use effectively these abstract models . . . , we must
have a comparable exploration of the criteria for determining what
abstract model it is best to use for particular kinds of problems, what
entities in the abstract model are to be identified with what
observable entities, and what features of the problem or of the circumstances
have the greatest effect on the accuracy of the predictions yielded by
a particular model or theory. [Friedman, 1953, p. 42]
Until Friedman's advice is truly taken to heart, we will continue to become more "fully articulate" with our "model economies," and more fully inarticulate in our ability to explain, and comprehend, the behavior of the real economy--the one we live in, and, increasingly, the one we no longer work in, or with.
NOTES
1 Lucas's pioneering phrase actually was "fully articulated, artificial economic systems" (Lucas, 1980, p. 271). Since that time, the phrase "model economy" has become standard.
2 The movement is centered at the University of Minnesota and the Federal Reserve Bank of Minneapolis, while Lucas at the University of Chicago was the first leading advocate.
3 The famous quotation from the preface to Samuelson (1983) that "Mathematics is a language" no doubt is much admired by FAME advocates. That the same claim could be made about music (as Roger Garrison has pointed out) and shorthand seems for some reason to receive less attention (perhaps because it quickly leads the questioner into the kind of "compare and contrast" analysis that is most decidedly not amenable to mathematical treatment).
4 "[I]f one does not view the [Keynesian] revolution [as a revolution in method], it is impossible to account for some of its most important features: the evolution of macroeconomics into a quantitative, scientific discipline" (Lucas and Sargent, 1981, p. 296). And: "The Kydland and Prescott model . . . reopens a debate that played an important role in pre-Keynesian theory. . . . But this time around, the terms of the discussion are explicit and quantitative, and the relationship between theory and evidence can be [and is being] argued at an entirely different level. I would like to call this progress" (Lucas, 1987, p. 47).
5 The skeptical reader should consult Prescott (1986, pp. 11-14). While it is difficult to quote effectively outside of the fuller context, some of the flavor of the approach is captured by the following:
The theorems of Bewley . . . could be applied to establish existence of a competitive equilibrium for this . . . economy. That existence argument, however, does not provide an algorithm for computing the equilibria. An alternative approach is to use the competitive welfare theorems of Debreu (1954). Given local nonsaturation and no externalities, competitive equilibria are Pareto optima and, with some additional conditions that are satisfied for this economy, any Pareto optimum can be supported as a competitive equilibrium. Given a single agent and convexity, there is a unique optimum and that optimum is the unique competitive equilibrium allocation. The advantage of this approach is that algorithms for computing solutions to concave programming problems can be used to find the competitive equilibrium allocation for this economy. [Prescott, 1986, p. 12, emphasis added]
6 "Dynamic economic theory--I mean theory in the sense of models that one can write down and do something with, not in the sense of 'opinion' or 'belief' " (Lucas, 1987, p. 2). Any reader of this literature will (in my view) agree that the "something" one is to "do" is a mathematical something, and that any less "explicit" formulation simply lacks the "rigor" necessary to achieve a level of "sharpness" consistent with truly scientific expression (there is no point in assigning specific sources to the words in quotes--they are everywhere in this literature).
7 For an uncompromising statement that the social sciences are essentially at one with the engineering sciences (both being basically schools of "design"), see Simon (1969). It is noteworthy that Lucas cites Simon's piece as an "immediate ancestor of my condensed statement" (Lucas, 1980, p. 292).
8 I cannot, of course, vouch for every journal and every working paper floating around the leading departments. If there is such a tract, it has escaped citation in major FAME papers.
9 One might argue that there are methodological works from which one might in principle successfully defend FAME methodology, but this hardly establishes a connection. To cite the best-known case: I will argue below that it is impossible to get a case for FAME out of Friedman's (1953) celebrated methodological defense of "positive economics."
10 Presumably it comes from the view that true scientists are too busy doing science to bother thinking much about how it should be done. There is substantial cynicism in the profession concerning the usefulness of almost any methodological inquiry.
11 Or alternatively--where is the detailed argument that no such statement is necessary (perhaps because "true economists" are a special, separate species who instinctively just "know" what true social science is)?
12 I know of one graduate student from a leading FAME-oriented program who, when pressed at a seminar, defended a critical assumption with the profound methodological position that "economic research is path-dependent." No doubt he will want to question my assumption (but will he do so openly?).
13 While FAME advocates typically shun mere "verbal" methods of "articulation," occasionally they stoop to some shoulder rubbing with their wordy Luddite-like colleagues. Lucas (1980, 1987), Lucas and Sargent (1981), Prescott (1986), and Plosser (1989) are the primary sources for the reconstruction attempted here.
I have also quoted from Debreu (1984), whose work is an essential element of the FAME approach (Lucas, 1980, pp. 284-287, 1987, p. 2; Lucas and Sargent, 1981, p. 305; Prescott, 1986, p. 12; Plosser, 1989, pp. 55, 72). The link between Debreu and the FAME movement, which is explicitly identified by the above sources, may be a little surprising (as a referee indicates) in light of Debreu's reputation as a "pure" abstract general equilibrium theorist. However, FAME economists regard as fundamental the "rigorous" demonstration of the "existence" of a "competitive equilibrium" in their model economies, which is made possible by choosing assumptions so that Debreu's theorems can be applied to the problem at hand (see the Prescott [note 5, above] and Plosser references).
14 I am painfully aware of the dangers of "putting words into people's mouths," and then criticizing this position as their own. On the other hand, the sketch of FAME methodology outlined below is a good-faith estimate, based on whatever scanty information the movement has chosen to make public. It is not a critic's responsibility to assert what a school's position logically might be if only someone had gotten around to actually defining it. But how else is one to respond to a movement that is sweeping the field without once being challenged simply to name--let alone defend--their fundamental views on the nature of science in their profession? If silence on methodological issues is sufficient to enforce silence on one's methodological critics, then little more than (a strategic) silence is needed to win the day.
15 "Any model that is well enough articulated to give clear answers to the questions we put to it will necessarily be artificial, abstract, patently 'unreal' " (Lucas, 1980, p. 271). This view, equating the "unreal" with the "abstract," comes from Friedman (1953), and is critiqued in section 2.
16 "Kydland and Prescott have taken macroeconomic modeling into new territory. . . . Exactly because their model carries predictions for so wide a range of evidence, it has been subjected to an unusually wide range of empirically-based criticism. . . . The chances that the model will survive this criticism unscathed are negligible, but this seems to me exactly what explicit theory is for, that is, to lay bare the assumptions about behavior on which the model rests, to bring evidence to bear on these assumptions, to revise them when needed, and so on" (Lucas, 1987, pp. 46-47).
17 "This structure [of the Solow growth model] is far from adequate for the study of the business cycle because in it neither employment nor the savings rate varies, when in fact they do. Being explicit about the [artificial] economy, however, naturally leads to the question of what determines these variables, which are central to the cycle" (Prescott, 1986, pp. 11-12).
18 "[T]he axiomatization of economic theory has helped its practitioners by making available to them the superbly efficient language of mathematics. It has permitted them to communicate with each other, and to think, with a great economy of means" (Debreu, 1984, p. 275).
19 "The benefits of the axiomatization of economic theory have been numerous. . . . Making the assumptions of a theory entirely explicit permits a sounder judgment about the extent to which it applies to a particular situation. . . . Axiomatization, by insisting on mathematical rigor, has repeatedly led economists to a deeper understanding of the problems they were studying" (Debreu, 1984, p. 275).
20 "Rigor undoubtedly fulfills an intellectual need of many contemporary economic theorists, who therefore seek it for its own sake, but it is also an attribute of a theory that is an effective thinking tool" (Debreu, 1984, p. 275).
21 There is a deep-rooted belief among FAME advocates that there can be little systematic, careful thought about economic issues outside the context of a mathematical treatment (that this is also Debreu's view is evident from the previous note). For example, if one cannot model a dynamic process "rigorously," then apparently one cannot usefully think about such processes:
Comparison of [the Treatise on Money (Keynes 1930)] with the General Theory is useful . . . in illustrating the way that limits on our technical ability to construct explicit theory limit our ability to think productively about phenomena. . . .
The problem [in the Treatise] is not that the underlying ideas are trivial. . . .
The difficulty is that Keynes has no apparatus for dealing with these problems. [N]either he nor anyone else [of his day] was well enough equipped technically to move the discussion to a sharper or more productive level. [Lucas, 1980, p. 275]
This perspective--one might label it mathematical a priori-ism--is nicely summed up by Lucas elsewhere: "But technique is interesting to technicians (which is what we are, if we are to be of any use to anyone)" (Lucas, 1987, p. 35). A succinct way to characterize the FAME approach would be as one of an extreme mathematical "rigor" and of an even more extreme methodological "ad hoc"-ery.
22 "The problem of identifying a structural model from a collection of economic time series is one that must be solved by anyone who claims the ability to give quantitative economic advice. The simplest Keynesian models are attempted solutions to this problem, as are the large-scale versions. . . . So, too, are the monetarist models. . . . So, for that matter, is the armchair advice given by economists who claim to be outside the econometric tradition, though in this case the implicit, underlying structure is not exposed to professional criticism" (Lucas and Sargent, 1981, pp. 298-299).
23 "In attempts to formalize the Friedman-Phelps natural rate hypothesis, it was soon discovered that then-conventional ways of modeling expectations-formation were both central to the issues involved and fundamentally defective. John Muth's hypothesis of rational expectations . . . turned out to be a natural way to formalize the Friedman-Phelps arguments. Subsequent research in macroeconomics has revealed the sweeping implications of this hypothesis" (Lucas, 1980, p. 283). Also see n. 19.
24 "Dynamic economic theory . . . has simply been reinvented in the last 40 years. . . . While Keynes and the other founders of what we now call macroeconomics were obliged to rely on Marshallian ingenuity to tease some useful dynamics out of purely static theory, the modern theorist is much better equipped to state exactly the problem he wants to study and then to study it" (Lucas, 1987, p. 2).
25 "The models constructed within this theoretical framework are necessarily highly abstract. Consequently, they are necessarily false, and statistical hypothesis testing will reject them. This does not imply, however, that nothing can be learned from such quantitative theoretical exercises. I think much has already been learned" (Prescott, 1986, p. 10, emphasis added).
26 Milton Friedman denies that an a priori criterion can be the fundamental test of a hypothesis (Friedman, 1953, pp. 28-30; also see section 2, below). In contrast, FAME advocates come closer to rejecting some theories (models) on what are essentially a priori grounds; specifically, where there is a lack of optimization and other "sound microfoundations." Ultimately, however, FAME advocates (at least Lucas and Sargent) back the Friedman line:
There are, therefore, a number of theoretical reasons for believing that the parameters identified as structural by (Keynesian-type) macroeconomic methods are not in fact structural. . . . Yet the question of whether a particular model is structural is an empirical, not a theoretical, one. If the macroeconometric models had compiled a record of parameter stability . . . one would be skeptical as to the importance of prior theoretical objections of the sort we have raised. [Lucas and Sargent, 1981, pp. 302-303]
(These "prior theoretical objections" charge precisely that Keynesian-type models are not based on "sound microfoundations.")
27 Thorough reviews of works criticizing the growing mathematicization of economics can be found in Beed and Kane (1991) and Quddus and Rashid (1990).
28 Which is not to say that anyone actually needs to know the answers to such questions: It is enough to "know" that the answers are definitely there and that Friedman has crushed for all time all methodological opposition. Anyway, time is too precious to use reading things in a world where Hamiltonians are plentiful, and an occasional citation of the Master is all that's required to get one's models published ("The Methodology of Positive Economics" is second only to The General Theory in the category of great economics works everybody cites and nobody reads).
29 Lucas is of course talking about those who use Keynes' works aphoristically, not Friedman's. One wonders whether FAME standards for acceptable quoting depend mainly on the quotee and whose side he is on.
30 This discrepancy has been noted by others: see Caldwell (1980, p. 372; 1982), Helm (1984, p. 132), and Hammond (1988, p. 393); also see Blaug (1992, p. 99).
31 There is now widespread recognition within the methodological community that to take "what matters is not assumptions but predictions" as an accurate summary of Friedman's essay is, at best, to advance a highly misleading and simplistic summary of a complex melange of methodological themes. Rotwein (1959, pp. 558-568, 1980, p. 1554), Nagel (1963, p. 218), Melitz (1965, p. 44), Helm (1984, p. 120), Maki (1986, 1992), Hausman (1989, p. 121), Hammond (1990, p. 170), and Blaug (1992, pp. 96-97), among others, all have pointed out that one cannot logically get from Friedman's methodology paper to the fashionable popular caricature.
There is a minority opposing view. The best known defender of Friedman as a consistent instrumentalist is Boland (1979, 1980, 1984). Maki (1986, 1992, and in Caldwell, 1987) offers a rebuttal.
32 I hope the reader will permit me the use of this phrase, broadly interpreted. See Friedman (1953, pp. 10-11) for his views.
33 The tension between conflicting criteria by which assumptions are to be evaluated is a defining feature of Friedman's essay. The most thoroughgoing treatment of these ambiguities is Maki (1986, 1988, 1989, 1990, 1992), who argues that "Friedman can be read so as to make him endorse several kinds of (mutually incompatible) realism" (1992, p. 173; also see Musgrave, 1981, pp. 379-382 and Nagel, 1963). I am grateful to an anonymous second referee for bringing Maki's work to my attention. On the one hand, there is the instrumentalist Friedman, for whom "[n]othing follows from acceptance of a theory about its truth and about the existence of its objects" (Maki, 1992, p. 183) (see Boland, 1979, pp. 508-509, and Caldwell, 1982, pp. 178-184 for detailed discussion of the [non]relation between instrumentalism and "truth"). "Truth," "realism," etc. are terms reserved for describing concretes--with all their many characteristics--in reality (see Friedman's discussion of a hypothetical "completely 'realistic' theory of the wheat market" [1953, p. 32]). Unlike physics with its quarks and black holes, direct ("common-sense") knowledge about objects important to economics (like "firms") is available independently of theories about such objects. From this "commonsense realism" point of view, assumptions found in theories about, say, "firms" are false, since they are clearly not the actual, true-to-life business units portrayed in, for example, the pages of The Wall Street Journal.
On the other hand, there is the Friedman for whom a theory is, in part, "designed to abstract essential features of complex reality" (Friedman, 1953, p. 7). As Maki points out, such a view implicitly endorses a theoretical framework in which, at least in part, truth--the "essential features" of a problem--matters. Such "essentialist realism" implies a very different methodological position than what is commonly called "Friedman's methodology."
But as is argued in the text, it is a short, and irresistible, step from such a framework to the evaluation of assumptions by means independent of instrumentalist ("predictivist") premises.
34 Also see his discussion of "Supply and Demand" analysis and conditions necessary for its useful employment (Friedman, 1953, pp. 7-8).
35 Friedman (1953, pp. 12-14) recognizes--as anyone would--that prior knowledge guides the formation of theories, but (in my view) he doesn't seem to see its clash with his case for "prediction" as the critical test of a theory. Presumably based on this reference, there is some talk by defenders of Friedman (Boland, 1979, p. 511; Hoover, 1984, p. 790) of a "pre-filter" (using Hoover's term) through which all theories are to pass before being subjected to the decisive test of predictive success ("Having narrowed the universe of theories down to those whose known consequences are true . . ." [Hoover, 1984, p. 790]). But whether there is a meaningful "pre-filter" in Friedman's essay is at best highly debatable. What is the practical value of such a filter if it is unable to distinguish beforehand between a ball and a feather in a gravity experiment (Friedman, 1953, pp. 16-17)? Or between the relevance of a businessman's costs and the color of his eyes (Friedman, 1953, pp. 32-33; see the above quote)? Such a "pre-filter" really would seem to be no filter at all (as is argued in the text below). See Rotwein (1959, pp. 558-562) for a similar argument on these points.
36 Compare with Lucas and Sargent (note 26, above).
37 Others with like objections include Hoover (1984, p. 790) and Hausman (1989). The latter writes: "Without assessments of realism (approximate truth) of assumptions, the process of theory modification would be hopelessly inefficient and the application of theories to new circumstances nothing but arbitrary guesswork" (Hausman, 1989, p. 121).
38 Rotwein (1959, pp. 565-566) makes virtually this same point.
39 Another Nobel Prize winner whom FAME economists cite as having helped point the way to their movement is Hicks (see, for example, Lucas, 1980, p. 284; and Plosser, 1989, pp. 52-53). It is clear that Hicks was a pioneering advocate of what later became some of the essential points of the FAME approach; for example, the idea of organizing macroeconomics around an economy evolving through time according to some stochastic growth path. However, the reader of Hicks's writings (e.g., Hicks, 1965) cannot help noticing the excruciating care with which Hicks seeks to justify every assumption when building a model--examining which aspects of the actual economy are being highlighted and which suppressed by each assumption. Hicks may be a precursor of FAME economics, but he offers little or no support for FAME methodology. (For additional evidence that Hicks would be uncomfortable with FAME methodology, see Davidson, 1991, pp. 132-133.)
A referee suggests that the FAME movement follows, not (the popular version of) Friedman's methodology, but rather the views of Stigler and Becker (see Stigler and Becker, 1977; Becker, 1976, 1987; Stigler, 1984). Here the referee raises the important issue--a broader issue than that addressed in this essay--of the full intellectual origins of the FAME movement, and he or she is very right in this broader context to focus on Stigler and Becker (although, I believe, not in lieu of a focus on Friedman). Certainly the architects of the FAME movement agree with Becker that "[t]he combined assumptions of maximizing behavior, market equilibrium, and stable preferences, used relentlessly and unflinchingly, form the heart of the economic approach" (Becker, 1976, p. 5). But as Becker puts it: "The critical question is whether a system is completed in a useful way. . . . [T]he assumption of stable preferences provides a foundation for predicting the responses to various changes" (Becker, 1987, p. 7). For additional evidence of the FAME-Friedman link, see Lucas and Sargent (note 26, above). (It is due to Stigler and Becker to state here that, by "preferences," they mean "underlying preferences defined over fundamental aspects of life--such as health, prestige, sensual pleasure, benevolence, or envy" [Becker, 1976, p. 5].)
40 Such assumptions might once have raised eyebrows and quite properly so (as many "Post Keynesian" and "Austrian" [see Garrison 1991] economists--among others--have pointed out in depth). But the assumptions are quite close to those of straight IS-LM macroeconomics, and as such hardly represent a major escalation in the thrust toward surrealism in macroeconomics.
41 Many FAME models go no farther. Others ("Overlapping Generations Models") postulate two kinds of "agents": the "old" and the "young." There are also "multisector variants of this model" (Prescott, 1986, p. 13) that are subject to different versions of these same criticisms.
42 "Benjamin Friedman and others have criticized rational expectations models apparently on the grounds that much theoretical and almost all empirical work has assumed that agents . . . have discovered the probability laws of the variables they want to forecast. . . .
But it has been only a matter of analytical convenience and not of necessity that equilibrium models have used the assumption . . . that agents have already learned the probability distributions they face. [The assumption] can be abandoned, albeit at a cost in terms of the simplicity of the model" (Lucas and Sargent, 1981, p. 315). The authors then cite two papers. However, looking at the FAME literature since the early 1980s makes it clear that "the cost in terms of the simplicity of the model" has been prohibitively expensive.
43 To cite just one case: if information is costly, then convergence to a "rational expectations" equilibrium could be much slower (for other problems with the FAME approach to modeling expectations formation, see Davidson, 1991). The problem of costly information is invariably hedged by assuming that agents make optimal use of all "available" information, leaving aside the fact that the amount of information at one's disposal is an endogenous, not exogenous, variable. Such hedging reflects what seems the usual presumption among FAME practitioners: specifically, that a loss in "realism" inevitably is more than made up for by the subsequent gain in mathematical tractability.
44 Uncertainty is introduced (in Prescott's piece and, usually, elsewhere) via the variance-ignoring risk-neutrality assumption; again, keeping out ugly variance terms is essential to creating a tractable model.
45 This is not the place to critique such simulations (for one devastating critique, see Summers, 1986). Personally, I would like to see the "lab-books" of the modelers to see how much "fine-tuning" was required to make the models behave so nicely. I would also like to know by what standard we are to judge a simulation as "good"--clearly, since classical statistical tests have been abandoned (another victim of "tractability"), it is reasonable to ask how many different models built on how many different assumptions could produce "mimicry" of similar quality.
46 Indeed, the reader who is concerned about policy will be delighted to discover that "[t]he policy implication of this research is that costly efforts at stabilization are likely to be counterproductive" (Prescott, 1986, p. 21). How to get from models built on conceptually arbitrary assumptions to important policy implications about the actual economy is yet another methodological miracle FAME advocates have yet to explain. It is particularly remarkable since FAME "economies" are to be "distinguished as sharply as possible in discussion from actual economies" (Lucas, 1980, p. 271).
47 The following applies strictly only to Prescott's "core" model. Analogous versions would apply to the various "extensions" discussed in the latter portion of Prescott's paper.
48 This is ugly, messy work, decidedly "inelegant." And it is wholly unnecessary if one's objective is to get a model that "one can write down and do something with" (Lucas, 1987, p. 2) (unless of course one cares particularly about the logical argument underlying whatever the "something" is).
49 One thinks for example of the debate over utility theory in the early twentieth century, or the debate over long-run cost curves several decades later.
REFERENCES
Becker, Gary S. "The Economic Approach to Human Behavior." In The Economic Approach to Human Behavior. Chicago: University of Chicago Press, 1976.
-----. "Economic Analysis and Human Behavior." In Leonard Green and John H. Kagel (eds.), Advances in Behavioral Economics, vol. 1. Norwood, NJ: Ablex, 1987.
Beed, Clive, and Kane, Owen. "What Is the Critique of the Mathematization of Economics?" KYKLOS, 1991, 44 (4), 581-612.
Blaug, Mark. The Methodology of Economics, Or, How Economists Explain, 2d ed. Cambridge: Cambridge University Press, 1992.
Boland, Lawrence A. "A Critique of Friedman's Critics." Journal of Economic Literature, June 1979, 503-522.
-----. "Friedman's Methodology vs. Conventional Empiricism: A Reply to Rotwein." Journal of Economic Literature, December 1980, 1555-1557.
-----. "Methodology: Reply" [to Hoover]. American Economic Review, September 1984, 795-797.
Caldwell, Bruce. "A Critique of Friedman's Methodological Instrumentalism." Southern Economic Journal, July 1980, 366-374.
-----. "Friedman's Methodological Instrumentalism." In Bruce Caldwell, Beyond Positivism. London: Allen and Unwin, 1982, ch. 8.
-----, ed. "Methodological Diversity in Economics" [transcript of a session held at the 1985 History of Economics Society meetings]. Research in the History of Economic Thought and Methodology, vol. 5. London: JAI Press, 1987, pp. 207-239.
Chari, V.V., and Hopenhayn, Hugo. "Vintage Human Capital, Growth, and the Diffusion of New Technology." Journal of Political Economy, December 1991, 1142-1165.
Davidson, Paul. "Is Probability Theory Relevant for Uncertainty? A Post Keynesian Perspective." Journal of Economic Perspectives, 1991, 5, 129-143.
Debreu, Gerard. "Valuation Equilibrium and Pareto Optimum." Proceedings of the National Academy of Sciences, 1954, 40, 588-592.
-----. "Economic Theory in the Mathematical Mode." American Economic Review, June 1984, 267-278.
Friedman, Milton. "The Methodology of Positive Economics." In Essays in Positive Economics. Chicago: University of Chicago Press, 1953, pp. 3-43.
-----. "The Optimum Quantity of Money." In The Optimum Quantity of Money and Other Essays. Chicago: Aldine, 1969, pp. 1-50.
-----. A Theoretical Framework for Monetary Analysis. New York: National Bureau of Economic Research, Occasional Paper 112, 1971.
Garrison, Roger W. "New Classical and Old Austrian Economics: Equilibrium Business Cycle Theory in Perspective." The Review of Austrian Economics, 1991, 5, 91-103.
Hammond, J. Daniel. "How Different Are Hicks and Friedman on Method?" Oxford Economic Papers, 1988, 40, 392-394.
-----. "McCloskey's Modernism and Friedman's Methodology: A Case Study with New Evidence." Review of Social Economy, Summer 1990, 158-171.
Hausman, Daniel M. "Economic Methodology in a Nutshell." Journal of Economic Perspectives, Spring 1989, 115-127.
Helm, Dieter. "Predictions and Causes: A Comparison of Friedman and Hicks on Method." Oxford Economic Papers, November 1984 (Supplement), 118-134.
Hicks, John. Capital and Growth. New York: Oxford University Press, 1965.
Hoover, Kevin D. "Methodology: A Comment on Frazer and Boland, II." American Economic Review, September 1984, 789-792.
Keynes, John Maynard. A Treatise on Money. London: Macmillan, 1930.
-----. The General Theory of Employment, Interest, and Money. New York: Harbinger, 1965 (original 1936).
Kim, Kyun. Equilibrium Business Cycle Theory in Historical Perspective. Cambridge: Cambridge University Press, 1988.
Lucas, Robert E. "Methods and Problems in Business Cycle Theory." In Studies in Business Cycle Theory. Cambridge, MA: MIT Press, 1980, pp. 271-296.
-----. Models of Business Cycles. New York: Basil Blackwell, 1987.
Lucas, Robert E., and Sargent, Thomas. "After Keynesian Macroeconomics." In Rational Expectations and Econometric Practice. Minneapolis: University of Minnesota Press, 1981, pp. 295-319.
Maki, Uskali. "Rhetoric at the Expense of Coherence: A Reinterpretation of Milton Friedman's Methodology." Research in the History of Economic Thought and Methodology, vol. 4. London: JAI Press, 1986, pp. 127-143.
-----. "How to Combine Rhetoric and Realism in the Methodology of Economics." Economics and Philosophy, April 1988, 89-109.
-----. "On the Problem of Realism in Economics." Ricerche Economiche, 1989, 43, 176-198.
-----. "Methodology of Economics: Complaints and Guidelines." Finnish Economic Papers, Spring 1990, 77-84.
-----. "Friedman and Realism." Research in the History of Economic Thought and Methodology, vol. 10. London: JAI Press, 1992, pp. 171-195.
Melitz, Jack. "Friedman and Machlup on the Significance of Testing Economic Assumptions." Journal of Political Economy, February 1965, 37-60.
Musgrave, Alan. "'Unreal Assumptions' in Economic Theory: The F-Twist Un-twisted." Kyklos, 1981, 34 (3), 377-387.
Nagel, Ernest. "Assumptions in Economic Theory." American Economic Review, May 1963, 211-219.
Plosser, Charles I. "Understanding Real Business Cycles." Journal of Economic Perspectives, Summer 1989, 3 (3), 51-77.
Prescott, Edward C. "Theory Ahead of Business Cycle Measurement." Federal Reserve Bank of Minneapolis Quarterly Review, Fall 1986, 9-22.
Quddus, Munir, and Rashid, Salim. "Resistance to Mathematical Methods in Economics--Past and Present." Mimeo, 1990.
Rotwein, Eugene. "On 'The Methodology of Positive Economics'." Quarterly Journal of Economics, November 1959, 554-575.
-----. "Friedman's Critics: A Critic's Reply to Boland." Journal of Economic Literature, June 1979, 1953-1955.
Samuelson, Paul A. Foundations of Economic Analysis. Cambridge, MA: Harvard University Press, 1983 (original 1947).
Simon, Herbert A. The Sciences of the Artificial. Cambridge, MA: MIT Press, 1969.
Stigler, George J. "Economics--The Imperial Science?" Scandinavian Journal of Economics, 1984, 86, 301-313.
Stigler, George J., and Becker, Gary S. "De Gustibus Non Est Disputandum." American Economic Review, March 1977, 76-90.
Summers, Lawrence H. "Some Skeptical Observations on Real Business Cycle Theory." Federal Reserve Bank of Minneapolis Quarterly Review, Fall 1986, 23-26.