1 Epistemology as cognitive economics
Social epistemology as the art of cognitive management
More than twenty-five years ago, the first edition of Social Epistemology (Fuller 1988) began as follows:
The fundamental question of the field of study I call "social epistemology" is: How should the pursuit of knowledge be organized, given that under normal circumstances knowledge is pursued by many human beings, each working on a more or less well-defined body of knowledge and each equipped with roughly the same imperfect cognitive capacities, albeit with varying degrees of access to one another's activities?
This form of words, which now serves as the epigraph for social epistemology's online presence (www.social-epistemology.com), clearly suggests a vision of social epistemology as a kind of "cognitive management". An appendix of the book spoke about a curriculum for "knowledge policy", based on the full range of resources offered by the field of science and technology studies (STS). Some of my later books, such as The Governance of Science (Fuller 2000a) and Knowledge Management Foundations (Fuller 2002), are also contributions to cognitive management. However, the spirit of this enterprise differs from that of what is normally called "cognitive science", which, as Jerry Fodor (1981) shrewdly observed, assumes a Cartesian starting point (aka "methodological solipsism") that would have us understand the mind in its own terms before trying to figure out its relationship to the non-mental world. Thus, "artificial intelligence" has been more concerned with specifying the conditions that would qualify an entity as "intelligent" than with whether such an entity must be an animal operating in a physical environment or can be simply an avatar in cyberspace.
In contrast, without denying the potential multiple embodiments of intelligence, my version of social epistemology considers, so to speak, the "formal" and "material" elements of cognition at the same time. In that respect, it is closer to economics in its conception. Thus, whatever cognitive goals we may wish to pursue, we need to consider the costs, how those costs would be borne and, as a consequence, whether the goals are really worth their cost. While this economic specification gives social epistemology a concreteness that has often been lacking in contemporary theories of knowledge, it by no means involves a downsizing of our epistemic ambitions. It is simply a call for those engaged in "knowledge policy" to provide an open balance sheet that reveals the costs and benefits behind particular strategies of cognitive re-organization. We may indeed be willing to suffer radical changes to our lifestyles and work habits, if we think a particular set of goals is worth pursuing. But wherever a gap remains between costs and benefits, the social epistemologist has her work cut out.
In the back of my mind when I wrote those opening words in 1988 was Adam Smith's argument for the rationalization of the division of labour in the economy as a means to increasing society's overall wealth. Smith observed that individuals doing everything for themselves were less efficient than each person specializing in what they do best and then engaging in exchange with others to obtain what they need. My point here is not to endorse any specific policies inspired by Smith but to acknowledge that he thought about the matter the right way in the following two senses:
- People are capable of changing even their fundamental habits if provided with sufficient reason (or "incentive").
- People are a source of untapped potential that may be released by altering ("liberalizing") the conditions under which they are allowed to express themselves.
Many things are implied here, perhaps most importantly the plasticity of human beings and hence their openness to social experimentation. Human history has revealed only a fraction of what we are capable of. This is a faith that united capitalism and socialism in the modern era – and one that my version of social epistemology carries forward.
Perhaps in these "times of austerity", the drive to "economize" is understood as a counsel to "do more with less" in a way that presupposes that we have fewer resources than we first thought. On the contrary, when Smith and the original political economists in Britain and France – most notably the Marquis de Condorcet – promoted "economizing" in the eighteenth century, they had in mind working more efficiently so as to conserve effort, thereby enabling more to be done. This is the context in which greater productivity was seen as a natural consequence of the rational organization of human activity (Rothschild 2001). We are held back not by the finitude of matter but by the finitude of our minds to manage matter. The benchmark for this entire line of thought was the Augustinian doctrine of creatio ex nihilo: the ultimate rationality of divine creation is that God creates everything out of nothing – that is, no effort is wasted whatsoever. And if we are created "in the image and likeness" of this deity, which Augustine emphasized as a lesson of Genesis, then we are tasked with achieving this divine level of performance.
It is also worth distinguishing my version of cognitive management from the appeal to economics made by analytic social epistemologists, such as Alvin Goldman (1999) and Philip Kitcher (1993), who for the past twenty years have gravitated to aspects of economics that play to their default methodological individualism, whereby knowledge is sought or possessed in the first instance by individuals and then aggregated into "social knowledge" in a literal "marketplace of ideas" (Fuller 1996). Thus, analytic social epistemologists have fancied microeconomic models that propose the optimal flow of information, division of cognitive labour, etc. In contrast, my own sense of cognitive management concerns the macroeconomics of knowledge – the overall efficiency of the epistemic enterprise, or what Nicholas Rescher (1978), with a nod to the US pragmatist philosopher Charles Sanders Peirce, properly called "cognitive economy".
The idea of "cognitive economy" was a product of the so-called "marginalist revolution" in the final quarter of the nineteenth century, when the study of political economy came to acquire the shape of the discipline that today we call "economics" (Proctor 1991: ch. 13). Peirce extended what had been the key conceptual innovation of that revolution: namely, the principle of diminishing marginal utility. Applied to knowledge production, this principle implies that the indefinite pursuit of a particular intellectual trajectory is justifiable not as an end in itself but only on a benefit-to-cost basis. Our best epistemic enterprises provide the most cognitive benefit at the lowest cost. This principle was explicitly proposed for science policy by the "finalization" movement associated with Jürgen Habermas when he directed a Max Planck Institute dedicated to the "techno-scientific life-world" in the 1970s (Schaefer 1984). Their idea was that puzzle solving in "normal science" as described by Kuhn (1970) eventually suffers from diminishing marginal returns on further investment. Thus, rather than following the Kuhnian strategy of running paradigms into the ground by deploying enormous effort to make relatively little technical progress (which finally forces even the most dogmatic scientist to realize that a radical change in perspective is needed), the finalizationists would, after a certain point, shift resources to fields with better epistemic yields, or draw these mature fields together to solve standing social problems – such as cancer or environmental degradation – that escape the expertise of any particular discipline.
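The finalizationists' reasoning can be given a toy numerical form (the model, numbers and function names below are my own hypothetical illustration, not drawn from the finalization literature): if a field's cumulative epistemic yield is imagined as logarithmic in investment, its marginal return falls as investment accumulates, and reallocation is warranted once a younger field's marginal return overtakes it.

```python
def marginal_yield(fertility, invested):
    """Marginal return on one more unit of investment in a field whose
    cumulative yield is modelled (hypothetically) as fertility * ln(1 + invested);
    the derivative fertility / (1 + invested) shrinks as investment grows."""
    return fertility / (1 + invested)

def next_unit_goes_to(fields):
    """Allocate the next unit of resources to the field with the highest
    marginal return. `fields` maps name -> (fertility, prior investment)."""
    return max(fields, key=lambda name: marginal_yield(*fields[name]))

# A mature paradigm (very fertile, but heavily invested) vs. a young field.
fields = {"mature_paradigm": (10.0, 20.0), "young_field": (4.0, 0.0)}
print(next_unit_goes_to(fields))  # → young_field
```

On these stipulated numbers the mature paradigm returns only 10/21 per additional unit while the young field returns 4, so the diminishing-returns argument recommends shifting resources even though the mature field remains intrinsically more fertile.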
However, ideas surrounding cognitive economy may be deployed in other ways, such as a principle for the critical evaluation of existing knowledge systems. Across the range of national and corporate research systems, the rate of return on investment varies significantly. For example, the US may produce by far the most science, but the UK is much more productive relative to resource allocation. A comparable point may be made about educational systems. Harvard and Oxford may produce the most impressive roster of graduates, but they also have the most impressive intake of students. The "added value", cognitively speaking, of attending these institutions is probably much less than that of universities operating with far fewer resources that nevertheless produce distinguished graduates out of students of humbler origins. Worth stressing is that, in keeping with the Augustinian point about creatio ex nihilo, the main value associated with cognitive economy is best measured in terms of the opportunity costs that can be minimized or avoided, as efficiency savings make more resources available for other projects. The underlying intuition is that one acts now so as to maximize the degrees of freedom later at one's disposal. I have been toying with this idea for a while, originally as "epistemic fungibility" (Fuller 2000a: ch. 8).
Two kinds of cognitive economy for social epistemology
To understand the dynamic of the history of epistemology as a species of cognitive economy, we need to start by distinguishing demand- and supply-side epistemic attitudes. Demand-siders proportion their belief to the need served by the belief. In other words, the more necessary the belief is to one's sense of self, the more it will be actively pursued. In contrast, supply-siders believe in proportion to the available evidence for the belief, even if that leads to a more diminished sense of self. Demand-siders characteristically hold that knowing is not complete without doing (i.e. generating the knowledge products that satisfy our cognitive needs), whereas supply-siders typically put less effort into the cognitive process and expect less in return (i.e. conserving what is already known and ensuring that it does not deteriorate or become contaminated). As a first approximation, the demand-sider might be regarded as holding an "industrial" model of cognitive economy that is focused on increased productivity, whereas the supply-sider holds a more "agricultural" model that is more concerned with a steady yield in balance with the environment.
To make this distinction still more vivid, consider the demand-sider as someone who treats his ideas as opportunities to formulate hypotheses that then lead him to conduct experiments to discover something about the world that he had not previously known, which then forces him to redefine his objectives. Such a person is clearly in the business of self-transcendence. Whether his hypotheses have turned out to be true or false, he has acquired a power that he previously lacked. The only question is whether he has budgeted properly to reap the full benefits of that potential. This "budgeting" should be understood in both cognitive and material terms. In particular, the demand-sider needs to be flexibly minded to see the intellectual possibilities that are opened up by being forced to give up old epistemic assumptions as a result of an unexpected research outcome. To the supply-sider, this requires the remarkable capacity to remain mentally invested in an array of possible futures, including ones that go against most of one's own previous cognitive and material investments. Only a deity could be capable of such equanimity in the face of what are bound to be many thwarted expectations. In humans such an attitude can easily look like that of Dr Pangloss, Voltaire's satirical portrayal of Leibniz in Candide. Worse still perhaps, the supply-sider might wonder whether the demand-sider has not succumbed to what social psychologists call "adaptive preference formation", specifically the kind that Jon Elster (1983) dubbed "sweet lemons". This is the inverse of "sour grapes", whereby one becomes incapable of facing failure on its own terms, always seeing the silver lining in every cloud. In the course of this self-delusion, so the supply-sider worries, the demand-sider detaches himself from any sense of security and becomes reckless with his own life – and perhaps the lives of others.
At this point, it is worth remarking that what in a comic frame might appear Panglossian, in a tragic frame might come to be seen in Nietzsche's Zarathustrian terms: "What doesn't kill me makes me stronger." (Stanley Kubrick's Dr Strangelove may be seen as someone whose identity shuttles between these two frames.) One contemporary context for understanding these two attitudes is the former market trader Nassim Nicholas Taleb's (2012) distinction between "fragile" and "antifragile" approaches to life, which correspond, respectively, to the world-views of the supply- and demand-side epistemologists. Taleb generalizes the lesson that he first taught concerning "black swans", namely, those highly improbable events that, when they happen, end up producing a step change in the course of history (Taleb 2007). His starting point is a dismissal of those who claim in retrospect that they nearly predicted such events and think they "learn" by improving their capacity to predict "similar events" in the future. Such people, who constitute an unhealthy proportion of pundits in the financial sector (but also a large part of the social science community), are captive to a hindsight illusion that leads them to confuse explanation with prediction. The lesson they should learn is that prediction of extreme events is always a mug's game. Rather, what matters is coming out stronger regardless of how one's future predictions turn out.
In Taleb's presentation, antifragility belongs to a tripartite distinction in world-views, roughly defined in terms of how one deals with error or unwanted situations more generally. The "fragile" agent is one who needs to control the environment in order to maintain its normal condition. A slight shift in the environment can result in devastating consequences. In the terms of supply-side epistemology, this is the problem of scepticism. In contrast, the "robust" agent maintains its normal condition in response to changes in the environment. But an "antifragile" agent always maintains or improves its current condition as the environment changes, without any preordained sense of normality. A sense of the difference between a "robust" and an "antifragile" agent is captured by, on the one hand, a gambler who is simply concerned with always being able to return to the casino no matter how his bets turn out and, on the other, a gambler who always bets so that his losses can never outpace his wins, which generally means placing a somewhat larger than expected bet on improbable events and a somewhat smaller than expected bet on probable ones. The robust gambler does it as a hobby; the antifragile one does it to make a living.
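The two gamblers can be compared with a toy expected-value calculation (the numbers and the pricing assumption are my own hypothetical illustration, not Taleb's): if the market slightly overprices the probable outcome and underprices the improbable one, then shifting stake toward the improbable event raises the expected return, which is the arithmetic behind betting so that losses can never outpace wins.

```python
def expected_return(stakes, probs, payouts):
    """Expected value of splitting a bankroll across outcomes:
    sum of p_i * stake_i * payout_i, minus the total stake."""
    return sum(p * s * w for p, s, w in zip(probs, stakes, payouts)) - sum(stakes)

# Two outcomes: a probable one (p = 0.9) paying slightly under fair odds
# (1.05x vs. fair ~1.11x), and an improbable one (p = 0.1) paying well
# above fair odds (15x vs. fair 10x) -- stipulated, illustrative prices.
probs, payouts = [0.9, 0.1], [1.05, 15.0]

robust = expected_return([0.9, 0.1], probs, payouts)       # bets with the odds
antifragile = expected_return([0.5, 0.5], probs, payouts)  # overweights the rare event

print(round(robust, 4), round(antifragile, 4))  # → 0.0005 0.2225
```

On these stipulated prices the robust gambler roughly breaks even, which keeps him returning to the casino, while the antifragile allocation profits precisely because the rare event is underpriced; the point is not the particular numbers but that the strategy depends on mispriced tails, not on predicting which tail event occurs.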
The key to the antifragile mentality is what Taleb calls "optionality", namely, the use of degrees of freedom as a proxy for knowledge. In other words, if you do not know what will happen, make sure you have most options covered. In gambling circles, it is called "spread betting", and there is an art to exactly how much one should underestimate continuity and overestimate rupture with the past in order to profit significantly in the long term. Interestingly, some computer scientists hypothesize that intelligence dawns in physical systems that conserve their potential, neither by responding similarly to all contingencies nor by trying to limit the contingencies to which they are exposed. Rather, intelligence emerges from keeping as many options open as possible so that the agent flourishes regardless of the contingency encountered (Wissner-Gross and Freer 2013). In practice, this implies a regular process of sorting the wheat from the chaff in one's cognitive horizons – that is, distinguishing the features that need to be preserved in any possible future from those that may be abandoned once they appear to be a liability, thereby resulting in a sense of "sustainable error".
In any case, this process is psychologically much more difficult than it seems, for two reasons, one obvious and the other subtle. Obviously, as the supply-side epistemologist would stress, much of our sense of reality's stability rests on the future continuing the past being a "sure bet". Why then waste time and money on outliers? Nevertheless, Taleb counsels that it is better to run slightly behind the pack most of the time by devoting a small but significant portion of your resources to outliers, because when one of them hits, the rewards will more than make up for the lower return that you had been receiving to date. This raises the subtler psychological difficulty with antifragility: once you decide that your bets require redistribution – say, in light of failed outcomes – how do you preserve the information that you learned from your failed bets in your next portfolio of investments? Rarely is the matter as straightforward as simply shifting out of the failed bets into the ones that did best, since the latter may be only temporarily protected from the same fundamental problems that led your other bets to fail. In other words, every failure provides an opportunity for a fundamental re-think of all your bets, including the successful ones. This is how "learning", properly speaking, is distinguished from mere "surviving" over time. In that sense, you never really reduce uncertainty; rather, you learn to game it better.
Taleb's (2012) main piece of advice here is that one's epistemological insight is sharpened by having "skin in the game", to use the gangster argot for having a material investment in the outcomes. Scornful of academic and other professional pundits, who are paid to issue predictions but are not seriously judged on their accuracy, Taleb dubs them the "fragilista" because they are insulated from the environments to which they speak. Thus, they have the luxury of behaving either like "foxes" or "hedgehogs" in the political psychologist Philip Tetlock's (2005) sense: that is, they can simply mimic the trends or stick with the same position until it is borne out by events. They have no incentive to think more deeply about the nature of the reality that they are trying to predict.
The history of epistemology as a struggle over cognitive economy
Immanuel Kant originally glimpsed the demand- and supply-side epistemic attitudes towards the management of knowledge production at the end of modern epistemology's cornerstone work, Critique of Pure Reason (1781). In that work, demand- and supply-side epistemology are f...