Measurement, Quantification and Economic Analysis

Numeracy in Economics
About This Book

Most economists assume that the mathematical and quantitative sides of their science are relatively recent developments. Measurement, Quantification and Economic Analysis shows that this is a misconception. Its authors argue that economists have long relied on measurement and quantification as essential tools.
However, problems have arisen in adapting these tools from other fields. Ultimately, the authors are sceptical about the role which measurement and quantification tools now play in contemporary economic theory.


Information

Publisher: Routledge
Year: 2002
ISBN: 9781134879236
Edition: 1

Chapter 1
From political arithmetic to game theory

An introduction to measurement and quantification in economics


Ingrid H. Rima

Contemporary economists have only limited appreciation of the extent to which early practitioners recognized that, by their very nature, the problems in which they were interested required them to measure, quantify and enumerate. From the seventeenth century onwards inquiring minds had already learned to distrust information and ideas that derived from the then traditional qualitative approach to science, which described the sensations associated with objects and events. Much like Galileo, who saw the universe as “written in mathematical characters” (Galileo 1628:232), so William Petty, responding to the directives of nationalism and commerce, undertook “to express myself in Terms of Number, Weight or Measure, to use only arguments of sense and to consider only such causes as have visible foundations in Nature; leaving those that depend upon the mutable Minds, Opinions, Appetites and Passions of particular Men, to the considerations of others” (Petty 1690). He called his method “political arithmetick” and aimed not simply to record and describe reality in terms of numerical variables, but to use them as mental abstractions for the purpose of understanding whether there is a relationship between them and to use the information to accomplish some purpose.
Petty’s pioneering work is by no means unknown, even by some who are not specialists in the history of economic thought.1 There are also numerous more or less well known studies relating to the work of individual contributors, Cournot, Walras, Jevons, Edgeworth and Fisher, among others, who relied heavily on mathematics and/or statistics long before economics reached its present stage of mathematical formalism and model building.2 Yet concern with measurement and quantification tools per se and their development in tandem with economic description and analysis is a relatively new area of economic research. Historians of economics, historians of science and philosophers whose interests extend to issues of methodology are finding new connections among their research interests. The conjoining of academic disciplines which have, in the past, been separate and distinct fields of investigation is, in part, a reflection of the discomfort which many economists feel about the methodology of their discipline and its status as a science. It lends perspective to recognize that econometrics is simply the most recent (though no doubt the most powerful) quantitative tool to have been pressed into service by those who study economic phenomena. Political economists of the seventeenth and eighteenth centuries, and the analytical economists who followed them, borrowed or refined whatever measurement and quantification tools they found useful, and supplemented them with some of their own invention. Thus the myopia underlying the popular view among many contemporary economists that their predecessors were literate but not numerate is curious to those who have greater historical perspective.3
It is also relevant, as a matter of perspective, to recognize that criticisms of the uses of measurement tools are quite as old as their applications. Specifically, what sort of empiricism is appropriate to the study of economics? Every writer who has sought to provide methodological guidance for scientific practice in economics—Adam Smith, W. N. Senior, J. S. Mill, J. E. Cairnes, W. S. Jevons, J. N. Keynes, Lionel Robbins, J. M. Keynes and Terence Hutchison—struggled with this still unresolved question. Contemporary methodologists are revisiting their arguments, especially as they have resurfaced in the context of present-day formalism, econometric practices and gaming experiments. The “measurement without theory” controversy that pitted early twentieth-century empiricists against one another, along with the “value judgments” controversy that came to the fore as a byproduct of the measurability of utility issue, and the empirical verification arguments that arose in the context of logical positivism, of which the Lester-Machlup controversy about maximizing assumptions was a prime example, have reappeared in new contexts and in modern garb. Their writings suggest that some of the issues that are central to contemporary theoretical and methodological controversy are embedded in contemporary measurement and quantification tools in ways which historians (Epstein 1987; Mirowski 1990; Morgan 1990; Schabas 1990) and philosophers (Hausman 1992) have only begun to discern. To continue and extend their research agendas in a broader context is essentially to give further momentum to interdisciplinary inquiry into the epistemology of measurement and quantification in economics.
Because the research that is the focus of this anthology cuts across a whole range of historical and contemporary topics of interest to economists, it exceeds by far the agenda of any single researcher. Thus it became the product of intellectual networking with many researchers who, in some cases, were unaware of elements of commonality amongst their individual efforts. Many though by no means all of the contributors to this volume share an interest in the history of economic analysis. Others have research interests that relate chiefly to contemporary microeconomics and/or macroeconomics and their applications. Yet all have thought deeply about the purported capability of mathematics as a more precise means of expression than language (Samuelson 1947). They have also reflected on the conventional wisdom that hypothesis testing to confirm theories (and their implied policies) and infer new theoretical relationships from them is the optimal vehicle for establishing linkages between economic theory and empirical reality.
From Petty’s political arithmetic to Lucas’s rational expectations models and von Neumann’s growth theory, empiricism and deduction have vied for pride of place as the critical instrument for discovering and evaluating knowledge, and the question of their relationship has been central to the methodological literature of the last half century.4 Yet general methodological discussions do not offer the same contextual perspective as inquiries that are more specifically concerned with measurement and quantitative techniques per se. That is the advantage of the papers in this volume. Their first concern is to offer an analytical and technical account of particular measurement and quantification tools, and to examine them within the theoretical framework of their development. Each paper stands on its own; yet each also contributes to a broad-based epistemological inquiry. Three broad stages can be identified as having taken place in tandem with the development of economic theory.5 During the first relatively brief stage, which roughly coincides with latter-day mercantilism, the essential role of measurement and quantification tools was to serve as a policy instrument. During the second stage the chief concern of measurement and quantification techniques was the discovery of static economic laws. In the present third and still ongoing stage, economics is perceived to have emerged, at last, as a true science; mathematical formalism is relied on to create economic models which are joined to gaming experiments and econometric tests to evaluate whether the outcomes they generate from data are consistent with the model’s predictions. Much of the tension about methodology today derives from questions relating to the effectiveness of these instruments for gaining knowledge about economic behavior and outcomes.

STAGE ONE: MEASUREMENT AS A POLICY INSTRUMENT

As early as the seventeenth century, political arithmeticians identified with the empiricist view that the natural world exists separately from human beliefs, philosophies and preferences. Although the first stage in the development of measurement and quantification techniques was relatively brief, its practitioners left a legacy to economics that is important both for its own sake and because it captures so clearly the essentially quantitative nature of early political economy in the service of the policy aspects of English mercantilism. Together with John Graunt and Gregory King, William Petty established much of what is now known about the behavior of crops, livestock and human population during the eighteenth century. Their best remembered French counterparts include Pierre de Boisguilbert and Marshal Vauban, who were particularly concerned with the practical problems of France’s agricultural sector, and collected much of the statistical information about the French economy that later served as a basis for the Physiocratic single tax proposal. Their concern was to focus on formulating the kinds of economic and social reforms that would contribute toward making France’s economy more efficient, just as England’s political arithmeticians were concerned with making their nation wealthier and politically more powerful.
It is not without relevance for contemporary practitioners of economics that the political objectives of mercantilism and Colbertism became the source of their undoing as an intellectual tradition. As Robert Dimand argues in his chapter “‘I have no great faith in political arithmetick’: Adam Smith and quantitative political economy”, political arithmetic is the first major example of a tradition that was criticized and rejected chiefly because of the political objectives its findings were used to support. Smith’s disenchantment with the political arithmetic of his contemporaries was less a reflection of his critical assessment of their methods or findings (which he did not hesitate to use in support of some of his arguments) than it was a reflection of the changing political environment and methodological perspective of the eighteenth century. In keeping with the natural order philosophy of the Enlightenment, political economists from Smith onward came to rely on deductive logic to articulate a vision that the economy is composed of self-interested individuals whose actions will bring about beneficial results for all market participants. Smith’s commitment to deduction was critical in bringing about an early end to the first stage of the development of measurement and quantification techniques. It also set the pattern of ongoing methodological conflict about the role of measurement and quantification tools in relation to economic method and the search for an understanding of the functioning of the economy.

STAGE TWO: QUANTIFICATION TECHNIQUES AND ECONOMIC LAWS

Data collection and induction

It was not until the Industrial Revolution transformed the relatively bucolic social order of the Smithian era and imposed new hardships on the working classes that the practical necessity for data collection and measurement tools arose again. Two factors were critical: one was the concern of inductivists to counter what they perceived as the failure of Ricardian deduction to develop hypotheses consistent with the real world and the need to provide a factual basis for policy. The second was the need for information, especially by businesses. The establishment of “The Statistical Section” of the British Association for the Advancement of Science (later Section F) and the Statistical Society of London (later the Royal Statistical Society), and the development of techniques to represent and analyze empirical data relating to virtually all areas of social existence, mark the onset of the second stage in the development of measurement and quantification tools. The Association’s hierarchical classification of the sciences became central to establishing the scientific authenticity of inductive political economy by virtue of the connections that the then maturing discipline of statistics was expected to establish between economic theory and the real world. Leading nineteenth-century British thinkers were cognizant that the classification of branches of knowledge lays the foundation for power to arrive at truth. They envisioned this classification as having the potential for even greater impact if the moral, political and metaphysical as well as the physical portions of our knowledge could become encapsulated in it (Whewell 1847: V.II, p. 113). James Henderson’s chapter “Ordering society: the early uses of classification in the British statistical organizations” examines the relationship between statistics and economics as the first of the nonphysical “portions of our knowledge” to gain admittance among the categories of science.
Henderson’s particular focus is on the role of data collection and statistical science not only to establish “correct views” about the moral sciences and their relationship to the physical sciences, but also to substantiate the impoverished nature of Ricardo’s system of deduction vis-à-vis the process by which science is established. The methodological attack against Ricardian deductive economics was thus accompanied by both a renewed urgency for fact-gathering (reminiscent of the political arithmeticians) and a concern about establishing a basis for deriving economic laws and for mounting a social policy.6
Jeremy Bentham, along with other philosophical radicals, confronted the problem of defining and measuring the “greatest good” in order to provide a basis for reforms intended to increase the “sum of happiness” by means of social policy. Of necessity they concerned themselves with the practical possibility of interpersonal utility comparisons and the related problem of evaluating the “quality” of pleasures so that a hedonic balance sheet might be drawn up for the guidance of policy makers. It was largely on the stumbling block of evaluating the subjective gains and losses that would be associated with any policy measures that might be adopted that utilitarians foundered. These difficulties are the focus of Sandra Peart’s chapter “Measurement in utility calculations: the utilitarian perspective”. While some, F. Y. Edgeworth among them, envisioned the possibility of a “hedonometer” to lend precision to the measurement of pleasure, nineteenth-century British economists ultimately disassociated themselves from the notion that utility is measurable, and from the kinds of reform programs that depended on it.7
As the impracticality of the hedonic calculation in economics became apparent, the potential for appropriating the measurement and quantification tools that had already been used successfully in other fields became challenging. Analytical geometry and graphic representations had long been in use in physics, ballistics and meteorology, but, as Judy L. Klein points out in “The method of diagrams and the black arts of inductive economics”, they were seldom used in economics until the early nineteenth century. Daniel Bernoulli’s curve (1738) represented the utility of additional income logarithmically and was a forerunner of later works which addressed practical economic problems such as the demand for rail and other public utility services with the help of techniques borrowed from other disciplines. Many of these lent themselves to record keeping and data analysis from which numerical and graphic representations were undertaken.
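Bernoulli’s idea admits a compact modern statement: if the utility of wealth is logarithmic, equal absolute additions to income are worth progressively less as wealth grows. The sketch below is purely illustrative (the function names and figures are mine, not Bernoulli’s notation):

```python
import math

def bernoulli_utility(wealth: float) -> float:
    """Logarithmic utility of wealth, after Bernoulli (1738)."""
    if wealth <= 0:
        raise ValueError("wealth must be positive")
    return math.log(wealth)

def marginal_gain(wealth: float, increment: float) -> float:
    """Utility gained from an extra `increment` of income at a given wealth."""
    return bernoulli_utility(wealth + increment) - bernoulli_utility(wealth)

# The same absolute gain is worth less to the wealthier person:
# marginal_gain(1_000, 100) exceeds marginal_gain(10_000, 100).
```

Because the curve rises with the logarithm, what matters is the proportional, not the absolute, addition to wealth, which is precisely why the curve could later serve demand studies where responses scale with relative changes.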
By the end of the nineteenth century, the techniques developed by engineers as economists included the convention of plotting time on the horizontal axis to represent changes in related quantities. Because there was no apparent methodological need for distinguishing between static data points and data changes over time, the essential difference between logical time and historical time typically became blurred; for example, it is now recognized that the numerical facts underlying a demand curve only have meaning in terms of logical time. Yet the relevance of this point would not become apparent until the general equilibrium paradigm focused attention on the conceptual problem of time in economics (Robinson 1974).

Statistical causality and economic laws

Early data collection practices also derived from business needs for information. The business press, in particular The Economist and the Commercial and Financial Chronicle, reported changes in such critical business magnitudes as imports, exports, rail traffic, commodity prices, bank reserves and so forth on a weekly and monthly basis. Judy Klein’s chapter “Institutional origins of econometrics: nineteenth-century business practices” notes that descriptive terms such as “animated”, “depressed” or “fair” were typically combined with numerical averages to describe market conditions. William Stanley Jevons used these as key data sources in his studies of the causal relations that underlie the temporal phenomena of commerce. The “law of seasonal variation” which he represented in tabular and graphic form was an early technique for transforming the “rule of thumb” knowledge of merchants into statistical plots of seasonal variations by using the concepts of statistical populations and the commercial practice of calculating arithmetic differences. These later became the foundation for rate charts, the representation of successive averages as a trend, and the statistical technique of index numbers. While the path from political arithmetic to statistical methods for discerning cause-effect relationships was neither continuous nor smooth, by the nineteenth century the operative premise was that careful data collection coupled with methods appropriate to data utilization can generate “true” theories about observable economic phenomena and thus direct economics towards “exact” laws that are analogous to those of the natural sciences.
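Two of the techniques mentioned here can be suggested with a minimal sketch: month-by-month averaging of the kind used to tabulate seasonal variation, and an unweighted geometric mean of price relatives, the index-number form Jevons himself favoured. The data shapes and function names are illustrative assumptions, not a reconstruction of Jevons’s actual tables:

```python
from collections import defaultdict
from statistics import geometric_mean

def seasonal_averages(observations):
    """Average a series of (month, value) pairs month by month,
    turning merchants' rule-of-thumb knowledge into a seasonal table."""
    by_month = defaultdict(list)
    for month, value in observations:
        by_month[month].append(value)
    return {m: sum(vs) / len(vs) for m, vs in sorted(by_month.items())}

def jevons_index(base_prices, current_prices):
    """Unweighted geometric mean of price relatives (a Jevons-style
    index number), expressed with the base period set to 100."""
    relatives = [current_prices[good] / base_prices[good] for good in base_prices]
    return 100 * geometric_mean(relatives)
```

For example, if corn rises from 2.0 to 3.0 while coal stays at 5.0, the index is 100 times the geometric mean of 1.5 and 1.0, roughly 122.5; the geometric mean keeps an equal proportional rise and fall from cancelling asymmetrically, which is why Jevons preferred it to the arithmetic mean.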
There were, nevertheless, those who, like John Cairnes, continued to cling to the contrary view, very largely learned from J. S. Mill, that economics is and always will be an inexact science. This is the focus of Jinbang Kim’s chapter “Jevons versus Cairnes on exact laws in economics”. Jevons’s immediate concern was the then critical question of the implications of the probable increase in England’s coal consumption. He believed that it was not necessary to rely on a priori principles of evaluation because the principle of probability could serve as a basis for establishing economic laws. Though Jevons may be faulted for his inadequate understanding of probability theory and his failure to use Laplace’s superior technique of least squares estimates, he did manage to establish the viability of a statistical approach to economic theorizing.
The latter is much in evidence in the development of the theory of demand and statistically generated demand curves. Nancy J. Wulwick’s chapter “A reconstruction of Henry L. Moore’s demand studies” reviews and reconstructs Moore’s estimates of the demand for pig iron in order to set straight the still misunderstood reason why Moore’s estimates generated upward-sloping curves which led to the criticism that Moore “misunderstood Marshall”. Her replication of Moore’s estimates establishes that it was Moore’s critics who misunderstood; their failure was that they did not understand the identification problem he confronted. The supply and demand data which are the basis for his scattergrams can only generate what Klein would describe as “fact curves” in historical time, whereas Marshall’s demand diagram is a “law curve” in logical time. Wulwick’s reconstruction of Moore’s study has thus disentangled the identification problem which his critics failed to discern as the root of their criticism. Her reconstruction is all the more useful from a methodological perspective because it bears so directly on the concerns of many contemporary economists about the phenomenon of time.8
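The identification problem Wulwick discusses can be illustrated with a small simulation: when equilibrium prices and quantities are generated by a fixed linear demand curve and a shifting supply curve, a least-squares line through the scatter recovers the demand slope; when demand shocks dominate instead, the fitted line slopes upward, the very pattern that misled Moore’s critics. The model below (linear demand q = 10 - p + u, supply q = p + v) is a hypothetical textbook setup, not Moore’s pig-iron data:

```python
import random

def simulate_market(n, demand_shock_sd, supply_shock_sd, seed=0):
    """Equilibrium (price, quantity) pairs from linear demand q = 10 - p + u
    and linear supply q = p + v, with independent Gaussian shocks u and v."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        u = rng.gauss(0, demand_shock_sd)
        v = rng.gauss(0, supply_shock_sd)
        p = (10 + u - v) / 2   # solve 10 - p + u = p + v for the market price
        q = p + v
        data.append((p, q))
    return data

def ols_slope(pairs):
    """Least-squares slope of quantity on price."""
    n = len(pairs)
    mp = sum(p for p, _ in pairs) / n
    mq = sum(q for _, q in pairs) / n
    cov = sum((p - mp) * (q - mq) for p, q in pairs)
    var = sum((p - mp) ** 2 for p, _ in pairs)
    return cov / var

# When supply shocks dominate, the scatter traces the demand curve (slope near -1);
# when demand shocks dominate, the fitted "demand" slope turns positive.
```

In this setup only supply shifts trace out the demand curve; neither scatter is a “law curve” in Klein’s sense, since both are generated in historical time by the joint movement of the two schedules.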

Economic statistics and dynamics

Since Marshall, economists have overwhelmingly been preoccupied with equilibrium tendencies, despite their awareness that the dynamics of capitalistic economies generate wide fluctuations in the level of prices, output and employment. The prob...

Table of contents

  1. Cover Page
  2. Title Page
  3. Copyright Page
  4. Figures
  5. Tables
  6. Contributors
  7. Chapter 1: From political arithmetic to game theory: An introduction to measurement and quantification in economics
  8. Chapter 2: “I have no great faith in political arithmetick”: Adam Smith and quantitative political economy
  9. Chapter 3: Ordering society: The early uses of classification in the British statistical organizations
  10. Chapter 4: Measurement in utility calculations: The utilitarian perspective
  11. Chapter 5: Institutional origins of econometrics: Nineteenth-century business practices
  13. Chapter 6: The method of diagrams and the black arts of inductive economics
  13. Chapter 7: Jevons versus Cairnes on exact economic laws
  14. Chapter 8: A reconstruction of Henry L. Moore’s demand studies
  15. Chapter 9: The probability approach to index number theory: Prelude to macroeconomics
  16. Chapter 10: The indicator approach to monitoring business fluctuations: A case study in dynamic statistical methods
  17. Chapter 11: The delayed emergence of econometrics as a separate discipline
  18. Chapter 12: Some conundrums about the place of econometrics in economic analysis
  19. Chapter 13: The right person, in the right place, at the right time: How mathematical expectations came into macroeconomics
  20. Chapter 14: The New Classical macroeconomics: A case study in the evolution of economic analysis
  21. Chapter 15: Experimenting with neoclassical economics: A critical review of experimental economics
  22. Chapter 16: The Carnot engine and the working day
  23. Chapter 17: The problem of interpersonal interaction: Pareto’s approach
  24. Chapter 18: Is emotive theory the philosopher’s stone of the ordinalist revolution?
  25. Chapter 19: If empirical work in economics is not severe testing, what is it?
  26. Chapter 20: Econometrics and the “facts of experience”
  27. Chapter 21: Simultaneous economic behavior under conditions of ignorance and historical time
  28. Chapter 22: Liapounov techniques in economic dynamics and classical thermodynamics: A comparison
  29. Chapter 23: The Hamiltonian formalism and optimal growth theory
  30. Chapter 24: The impact of John von Neumann’s method