The Methodological Significance of Uncertainty


© Joseph Potvin 2013, "The Opman Company,":http://www.opman.ca <jpotvin@opman.ca>
Version 2.01. Dual Licensed "GNU-FDL":http://www.gnu.org/copyleft/fdl.html and "CC-BY-SA":http://creativecommons.org/licenses/by-sa/3.0/legalcode

Caveat: Version 1.00 of this unpublished paper was originally prepared as an annex to "a 1992 report prepared by the author under contract to The World Bank.":http://sd-cite.iisd.org/cgi-bin/koha/opac-detail.pl?biblionumber=5755 The text has been edited for this version; however, the re-draft does not incorporate the research literature published since that time. While the field of complex dynamic systems theory has advanced in the past 20 years, the basic concepts outlined here remain effectively current. Suggestions for improvements and updates via <jpotvin@opman.ca> will be gratefully received.


Characteristics of Complex Dynamic Systems

Research across diverse disciplines has described "complex dynamic systems" that exhibit several or all of the following characteristics:

  • Unique - Each evolves through distinctive historical circumstances, both tangible and perceptual; 
  • Nonlinear - Several controlling variables interact through multiple feedback loops;
  • Discontinuous - Catastrophes and irreversible bifurcations can be internally generated; 
  • Not Predictable - Pivotal phenomena may not always be statistically significant; 
  • Pluralistic - At any moment several successional configurations coexist; 
  • Fickle - From any state there are innumerable alternative developmental pathways; and, 
  • Self-Organizing - Open systems can, under some conditions, materialize new stable structures to accommodate persistent but moderate disturbances (Kay 1991; see also: Holling 1986; De Angelis et al. 1989; Nicolis & Prigogine 1989; Ulanowicz 1986; Simon 1990; Drucker 1989; Keynes 1936: 147ff)

That is to say, complex systems are terribly hard to comprehend. In their midst decision-makers face inescapable uncertainty. And yet thoughtful and timely decisions must be made.

Considerations from Information Theory

When faced with uncertainty, it is natural to seek more information before making a decision. But any measure of information will be unique to the degree of ignorance on the part of whoever specifies the question (Jaynes 1979: 37). If one is examining a series of events through a non-linear development path, "the probabilities assigned to individual messages are not measurable frequencies; they are only a means of describing a state of knowledge" (Jaynes 1979: 38-39).  If information is received in a random series, every next event will be as uncertain as the last. To someone attempting to track the system and anticipate its next moves, each new bit of information will thus be as valuable as the last. When the system under analysis behaves chaotically, then by virtue of its unpredictability, it generates a steady stream of information, which poses a problem for anyone who might try to characterize the system completely: "He could never leave the room. ... The flow would be a continuous source of information (Gleick 1987: 260)".  

Furthermore, if there are _N_ possible outcomes to an event, information theory measures the uncertainty the observer has about which outcome will occur, or conversely, the amount of information the observer gains when the outcome is actually observed. 'Probable inference theory' assigns probabilities to each possible outcome to reflect the knowledge that a specific observer may have about a unique event. But the probability associated with each outcome will decrease as the number of possible combinations grows, and the number of possible combinations increases geometrically with the number of categories used to describe the components (Ulanowicz 1986: 85). If one knows nothing about the probabilities associated with a given set of categories, then each category is assumed to be equally probable.
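To make the arithmetic concrete, here is a minimal sketch (in Python; the numbers of components and categories are arbitrary) of how the space of possible configurations, and with it the uniform-prior uncertainty, grows geometrically with the number of descriptive categories:

```python
import math

# Illustrative only: with k descriptive categories per component and n
# components, the number of possible configurations grows geometrically,
# and the uniform probability assigned to any one of them shrinks accordingly.
n_components = 5
for k in (2, 4, 8):
    combinations = k ** n_components          # possible configurations
    p_uniform = 1.0 / combinations            # equal probabilities under total ignorance
    h_bits = math.log2(combinations)          # uncertainty of a uniform choice, in bits
    print(f"{k} categories: {combinations:>5} configurations, "
          f"p = {p_uniform:.6f}, H = {h_bits:.2f} bits")
```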

Kenneth Arrow (1971) incorrectly interprets Shannon's measure of information H = -Σ p_i log p_i, where p_i is the probability assigned to message M_i (Shannon 1948), when he states that "if a channel of capacity H is installed, then the individual knows the state of the world (Arrow 1984: 109)". Rather, if a channel of such capacity is installed, the observer will only gain access to knowledge about whatever this particular channel has been designed to convey. Jaynes even proposes that "Shannon's H measures the degree of ignorance of the communication engineer when he designs the technical equipment in the channel" (Jaynes 1979: 38).
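Shannon's measure itself is simple to compute. The sketch below uses arbitrary probability assignments to show its observer-relative character: the same set of messages yields a different H for observers in different states of knowledge:

```python
import math

def shannon_entropy(probs):
    """Shannon's H = -sum(p_i * log2(p_i)), in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The same four messages, two states of knowledge: a maximally ignorant
# observer assigns equal probabilities (H = 2 bits); a better-informed one
# assigns unequal probabilities, and H falls.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
print(shannon_entropy([0.70, 0.10, 0.10, 0.10]))  # about 1.36
```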

J.M. Keynes's _Treatise on Probability_ (1921) offers an especially clearheaded way forward for the analyst faced with this troubling range of uncertainties.  Keynes distinguishes between that part of our rational belief which is based upon direct knowledge (empirical observation), and that part which is based on argument (hypothesis).  He then contemplates the process by which we establish the probability that an argument would be true when empirical proof has not yet been obtained, or is simply not possible because system states are transient. This contrasts with the customary focus in probability theory upon the frequency with which some particular event will happen in the context of a consistent system state. Information is fractal and context-specific. As mentioned, uncertainty about an outcome increases as the number of possible combinations grows, and the number of possible combinations increases geometrically with the number of categories used to describe the components (Ulanowicz 1986: 85).  This is why more research may increase our knowledge without decreasing our uncertainty (Keynes 1921: 77; McDonough 1963: 81; Levitt 1991: 6).  The challenge at each link in the chain of logic is to "decide when propositions are believable enough to use in further reasoning" (Cohen 1985: 119), or as J.M. Keynes phrased it, we wish to arrive at a "rational belief of the appropriate degree" (Keynes 1921: 15). 

Keynes' emphasis originates in his dissatisfaction with classical frequency theory for the consideration of complex dynamic systems.  Classical theory defines probability as the number of times a specific outcome occurs in a series of identical events, divided by the number of times the event is observed.  However, E. Jaynes explains: 

 There is no application of probability theory in which one can evade that all-important first step: assigning some initial numerical values of probabilities so that the calculation can get started.  For even in the most elementary homework problems, such as 'Find the probability of getting at least two heads in four tosses of a coin', we have no basis for calculation until we make some initial judgement, usually that 'heads' shall have the probability 1/2 independently at each toss.  But by what reasoning does one arrive at this initial assignment? (Jaynes 1979: 16)
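Once that initial judgement is granted, the textbook answer follows mechanically; a minimal sketch by enumeration:

```python
from itertools import product

# Jaynes's homework problem, computed under the conventional initial
# judgement that P(heads) = 1/2 independently at each toss.
sequences = list(product("HT", repeat=4))              # 16 equiprobable sequences
at_least_two = [s for s in sequences if s.count("H") >= 2]
print(f"{len(at_least_two)}/{len(sequences)} = {len(at_least_two) / len(sequences)}")
# 11/16 = 0.6875 -- an answer that is conditional on the initial assignment
```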

This is known as 'Bernoulli's Principle of Insufficient Reason', after J. Bernoulli's _Ars Conjectandi_ (1713). Keynes (1921) renamed this the 'Principle of Indifference'.  D. Kahneman and A. Tversky (1982) have demonstrated that people estimate the probability or frequency of occurrence of some event based on the ease with which instances of it can be brought to mind.  A quarter century ago, H. Abadzi (1990: 71), for example, remarked on how long it took most people to become alarmed by reports of ozone-layer or groundwater depletion, simply because such things had never happened before and thus sounded implausible.  

In Keynes' work, uncertainty about an outcome is not related inversely to some quantifiable probability, for he describes a situation in which numerically determinate probabilities are not to be had; i.e. there is an absence of probabilistic knowledge.  "About these matters there is no scientific basis on which to form any calculable probability whatever.  We simply do not know (Keynes 1936: 113)."  Dominique Chu characterizes complexity in terms of the difficulty of creating useful models: "The modeler always needs to draw artificial boundaries around phenomena to generate feasible models. ... Complexity occurs when contextuality and radical openness cannot be contained ... when it is not clear where the boundaries of the system are and which abstractions are the correct ones." (Chu 2011). 

In these circumstances, therefore, Keynes writes: 

 In the ordinary course of thought and argument, we are constantly assuming that knowledge of one statement, while not proving the truth of a second, yields nevertheless some ground for believing it. We assert that we ought on the evidence to prefer such and such a belief.  We claim rational grounds for assertions which are not conclusively demonstrated.  We allow, in fact, that statements may be unproved, without, for that reason, being unfounded.  And it does not seem on reflection that the information we convey by these expressions is wholly subjective.  When we argue that Darwin gives valid grounds for our accepting his theory of natural selection, we do not simply mean that we are psychologically inclined to agree with him; it is certain that we also intend to convey our belief that we are acting rationally in regarding his theory as probable.  We believe that there is some real objective relation between Darwin's evidence and his conclusions (Keynes 1921: 5).

One need not be a zoologist or botanist to accept the logic proposed by Darwin; more generally, it is admissible for a researcher in one area to use findings from other disciplines as boundary conditions for a particular theoretical and empirical inquiry (Sellers et al. 1986: 505).  Knowledge obtained from colleagues combines with one's own reasoning and observation to form expectations structured as simplified pictures of reality.  

Decision-Making Under Uncertainty

The mental models we create and use in order to make inferences, upon which to make decisions in the attempt to achieve our goals, could not possibly anticipate all the transformations of our complex environment.  C.S. Holling suggests:

 Expectations develop from two interacting sources: from the metaphors and concepts we evolve to provide order and understanding, and from the events we perceive and remember. ... Experience shapes concepts; concepts, being incomplete, eventually produce surprise; and surprise accumulates to force the development of those concepts.  This sequence is qualitative and discontinuous. The longer one view is held beyond its time, the greater the surprise and the resultant adjustment (Holling 1986: 294).  

Keynes speaks of a probability-relation as a logical relation between two sets of propositions in cases where it is not possible to argue demonstratively from one to the other (Keynes 1921: 9).  From a set of chosen premises, one advances through a series of probability-relations to logical conclusions.  The joint probability of this series of propositions becomes the foundation for decision:

 Given the body of premises which our subjective powers and circumstances supply to us, and given the kinds of logical relations, upon which arguments can be based and which we have the capacity to perceive, the conclusions, which it is rational for us to draw, stand to these premises in an objective and wholly logical relation.  Our logic is concerned with drawing conclusions by a series of steps of certain specified kinds from a limited body of premises (Keynes 1921: 17-18). 

In formal logic, this is the 'probable inference' variant of 'deductive reasoning'. In any analysis of complex dynamic systems, from an initial position of maximum ignorance, uncertainty about an argument can be diminished as one formulates and resolves a series of questions to connect a string of probability-relations. These are necessary but not sufficient steps in reducing uncertainty.  They emanate from one's capability for thinking logically from premises to a view of the dynamics of the system, anticipating the categories and amount of information that will be needed to make particular decisions, and undertaking the appropriate research to generate such information. 
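The structure of such a chain can be sketched numerically. The steps and the degrees of belief below are invented purely for illustration, and the links are treated as independent for simplicity:

```python
# A hypothetical chain of probability-relations from premises to a
# conclusion. Each link carries a degree of rational belief; if the links
# are treated as independent, the belief warranted in the final conclusion
# is their product.
chain = [
    ("premises support intermediate claim A", 0.90),
    ("claim A supports intermediate claim B", 0.80),
    ("claim B supports the conclusion",       0.85),
]

joint = 1.0
for step, p in chain:
    joint *= p
    print(f"{step}: {p:.2f}  (running belief in the argument so far: {joint:.3f})")
# Individually strong links still yield a weaker chain: about 0.61 here.
```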

It is well known that uncertainty leads many people to avoid making decisions, and yet making certain kinds of decisions can reduce that uncertainty. What is required is an intelligent approach to judgement under uncertainty. In the field of artificial intelligence, P. Cohen (1985) refers to an iterative chain of reason and decision under uncertainty as the 'endorsement model'. "An important behaviour is deciding what to do when one lacks the evidence one needs.  In the model of endorsement, this involves designing resolution tasks.  A resolution task is one whose execution either reduces uncertainty, ...[or] somehow lessens the impact of the uncertainty (Cohen 1985: 51)".
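One way to picture the design of resolution tasks is to rank candidate tasks by the uncertainty they are expected to remove relative to their cost. The scoring rule and the numbers in this sketch are assumptions made for illustration, not Cohen's formalism:

```python
# A loose illustration of choosing among 'resolution tasks': prefer the
# task expected to remove the most uncertainty per unit cost. The tasks,
# estimates, and scoring rule are all hypothetical.
candidate_tasks = [
    # (description, expected uncertainty removed in bits, cost in effort units)
    ("commission a new field survey", 1.5, 10.0),
    ("re-analyse data already held",  0.6,  1.0),
    ("consult a domain expert",       0.9,  2.0),
]

best = max(candidate_tasks, key=lambda t: t[1] / t[2])
print("Resolve first:", best[0])  # re-analysing existing data wins, at 0.6 bits per unit
```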

Keynes proposed that an estimate of the probability of an argument rests upon a comparison between the favourable and the unfavourable evidence.  But the evidential "weight" of an argument represents the sense a person may have of the reliability of this estimate, and reflects the absolute amounts of relevant knowledge and of relevant ignorance respectively (Keynes 1921: 71).  That is to say, the _weight of an argument_ "measures the sum of the favourable and unfavourable evidence, the _probability that the argument is correct_ measures the difference (Keynes 1921: 77)".  New knowledge may serve to increase or decrease the probability of an argument, but with it comes a more substantial basis upon which to rest one's conclusion.  Since more research may increase our knowledge (the sum) without decreasing our uncertainty (the difference), direct action is the other essential step in reducing uncertainty.  Even when there is very limited information, the most effective response will often be to take immediate steps that reduce uncertainty by closing off certain future possibilities, or by making a particular range of futures more likely (Cohen 1985: 51).  
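The distinction is easy to exhibit numerically; the figures below are invented solely for illustration:

```python
# Keynes's distinction, sketched with invented numbers: the weight of an
# argument is the sum of favourable and unfavourable evidence; its
# probability tracks the balance between them, not the total.
cases = [
    ("early in the research",  3,  1),   # little evidence either way
    ("after much more study", 30, 10),   # ten times the evidence, same balance
]
for label, favourable, unfavourable in cases:
    weight = favourable + unfavourable
    probability = favourable / weight    # a crude proxy for the balance of evidence
    print(f"{label}: weight = {weight}, probability ~ {probability:.2f}")
# The probability is unchanged at 0.75, but the later estimate rests on a far
# weightier basis: knowledge has grown without uncertainty shrinking.
```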

Thus, Keynes warns:

 It is difficult to see, however, to what point the strengthening of an argument's weight by increasing the evidence ought to be pushed.  ...  There clearly comes a point when it is no longer worth while to spend trouble, before acting, in the acquisition of further information, and there is no evident principle by which to determine how far we ought to carry our maxim about strengthening the weight of our argument (Keynes 1921: 77).

How far we ought to go in studying a complex problem before taking decisive action is a matter of greatest significance in practical affairs. In general, an information strategy should aim to minimize the amount of information needed, while maximizing the decision-making power it bestows; the best decision-maker, then, is one that will need the least new information to make the best decisions (McDonough 1963: 72).  The use of information should be 'cost effective' in the sense that if equivalent decisions can be made based on less information, then less should be compiled.  Its use should also be 'cost efficient', such that improvements in decision-making power from the acquisition of more data should be justifiable in view of the intensity of the research effort (Belshaw 1981). 

Chaos theory has demonstrated the need for understanding general features of complex system behaviour independently of local details. If someone is to be capable of positive action for structural change, they must resist the paralysis that comes from fear of ignorance about all the local details, and use their best available knowledge to arrive at rational beliefs of the appropriate degree. 

We Live In "Imperfect Ignorance"

Imperfect ignorance is illustrated in the accompanying figure, adapted and extended from one by McDonough (1963: 81-82), which applies to both individual and social contexts. 

[Figure: 'Imperfect ignorance', adapted from McDonough (1963). Upper half: 'problem definition' and 'information availability' curves rising with intensity of study from maximum uncertainty at the origin. Lower half: 'cost of information' and 'value of information' curves, with point *a* at the widest spread between them and point *b* at maximum value.]

This figure illustrates several important considerations in the design of an information strategy to support decision-making under uncertainty. Jaynes proposes that there is only one fixed point for the measure of information. This is 'perfect ignorance', represented here as maximum uncertainty at the origin of the y-axis. Progress in reducing uncertainty is made by formulating and seeking answers to a series of questions, the first of which is surely Descartes' _"Am I thinking?"_. The appropriate assumption for all analysis is thus one of 'imperfect ignorance'. The common notion in orthodox economic analysis of 'perfect knowledge' (as well as 'imperfect knowledge', which depends upon it conceptually) is an absurdity.  

In the upper half, a problem can be conceptualized more and more precisely with a greater intensity of study, beginning from a state of maximum uncertainty at the origin of the y-axis.  There will always remain uncertainty arising from our inability to conceptualize the dynamics of complex systems fully.  Uncertainty is inherent in the consideration of complex dynamic systems, and there is no upper limit to how much an observer could possibly know. In response to questions raised in defining a problem, the amount of information that can be acquired also increases with the intensity of study, but it typically falls short of answering all the questions completely. Both curves are shown as horizontally hyperbolic to convey the view that ever greater research yields no limit to how much a problem can be conceptualized or how much information can be generated about it, yet this shape suggests decreasing returns to any particular line of research.  The increasing gap between them incorporates the old adage: _the more you know, the more you know you don't know_. A new set of curves like these can spring forth from any point along the 'problem definition' curve, representing the achievement of a fundamentally greater level of understanding, whereupon a new line of research germinates while remaining dependent upon knowledge previously gained.  

In the lower portion of the main diagram, it is suggested that the cost of producing each additional bit of information tends to increase at a greater rate as more and more effort is directed to its acquisition.  In contrast, the decision-making power of additional information, in terms of relevance, timeliness, accuracy and usefulness, is considered to increase only up to an optimum level, point *b*, and then decline.  Note that the 'value of information' refers here to its value 'in use' for a decision-maker considering a particular problem, and may not relate at all to value 'in exchange'.

In the pursuit of any objective, it may be imagined that there exists an optimal information base appropriate to each particular set of decision circumstances. This is depicted in the diagram as the level of information availability in the upper half of the diagram that corresponds with point *b* in the lower half.  The curve in the lower half representing the value of information declines to the right of point *b* for two reasons.  First, the longer the period of study before a decision is made, the less value can be attached to any information produced.  Information today is worth more in any particular decision context than the same information later on.  Inordinate delaying of action for the benefit of more research will even reduce the value of information previously generated.  The second reason this curve declines past point *b* is that the decision function eventually becomes obfuscated by an oversupply of information. Joseph Levitt expresses this common problem:

 The more abundant the information, the less meaning it seems to yield.  All seems, instead, congestion and confusion.  The surest way to destroy a person's capacity for discrimination and good judgement is to bombard him or her with an enormous abundance of data, even if it's incontestably relevant.  ...  What is needed is discrimination in the supply and use of data, not their sheer abundance, regardless of relevance.  ...  Magnitudes must be limited to what is relevant and comfortably usable.  The effective use of information is governed by the principle of parsimony: limit it to the more-or-less precise purpose at hand.  A good thing is not necessarily improved by its multiplication.  The governing question is: what is the question to be answered, the problem to be illuminated, the matter to be explored, the issue to be defined. And it is precisely because these are not self-defining concepts that it is essential to think them through in advance, because no amount of data will tell you what information you'll need to get at the right questions. (Levitt 1991: 6)

The research effort cannot be solely concerned with maximizing absolute decision power under a given set of decision circumstances. Simply put: the optimal information base may cost too much. In the lower portion of our hypothetical illustration, maximum decision-making value per unit cost of information occurs at point *a*, where the spread between the two curves is greatest.  The additional study and considerable extra cost needed to develop an information base that maximizes decision-making power, denoted by point *b*, achieves only a comparatively minor reduction in uncertainty in the upper portion of the picture.  Funds are limited, and it might not be worth the added cost.

On the other hand, the likelihood of error is greatest when information is evaluated solely according to its cost of production, for as Belshaw (1981) warned, the information system then becomes shaped according to the ease and cheapness of information recovery rather than decision priorities.  Under these circumstances, statistics that are easiest and cheapest to recover are given greatest attention, while those which are more subtle, costly and difficult to compile tend to be ignored, even if they may be truly more valuable in the assessment of a problem.  Since information about a complex dynamic system tends to be relatively difficult and expensive to acquire, it is often systematically ignored.

The research effort should thus be designed to get the decision-maker somewhere between points *a* and *b*, that is, between the maximum decision-making value per unit cost of information and the maximum absolute decision power.  Whether the desired point is closer to *a* or to *b* depends upon the nature of the risks associated with wrong decisions: the greater the consequent dangers, the closer we wish to get to point *b*.  It is of great importance to stress again here, however, that generating too much information is a mistake.  Dangers are averted by getting the right amount of the right kind of information, at the right time, to people who can use it to make decisions.
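The geometry of points *a* and *b* can be sketched under assumed functional forms. The source figure supplies no equations, so the curves below are purely illustrative:

```python
import math

# The lower half of the figure under assumed (illustrative) functional
# forms: the cost of each additional unit of information rises ever faster
# with study, while its decision-making value rises to an optimum and then
# declines as delay and information oversupply set in.
def cost(x):
    return 0.05 * math.exp(0.5 * x)

def value(x):
    return x * math.exp(-0.25 * x)

xs = [i / 100 for i in range(1, 1001)]                # intensity of study
point_b = max(xs, key=value)                          # maximum absolute decision power
point_a = max(xs, key=lambda x: value(x) - cost(x))   # widest spread between the curves
print(f"point a at study intensity {point_a:.2f}, point b at {point_b:.2f}")
# Under these curves a < b: the research effort should land the
# decision-maker somewhere between them.
```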

The Need for Multiple Perspectives

It is easy to draw conceptual lines in a diagram, but extremely difficult in practice to intuit optimal intensities of research, especially because those who use information are normally different from those who generate it.  Reasons why the supply of information may not match the needs of decision-makers include: lack of knowledge among producers about the need for certain information; belief in an obsolete data set; and technical problems associated with the introduction of changes to reporting systems, such as maintaining time-series comparability or the difficulty of setting up new systems for data acquisition (Ware 1986: 15). To some extent these are problems that can be addressed through constructive dialogue among the various producers and users of information regarding priorities, required precision, time constraints, and other practicalities.

The decision process is itself a discourse for constructive accommodation amongst people with diverse interests, the outcome of which cannot be predicted (Bruner 1979: 109). The idea of one rigorous theory to handle analysis of 'the whole' would demand drastic oversimplification (Nicolis & Prigogine 1989: 5-6; Allen & Wileyto 1983: 529-31; Allen 1987: 25). Furthermore, one cannot apply a consensus notion where there is no consensus. Pluralism in the analytical process depends upon contributions from many fields of concern. One frequently finds, upon close investigation, that divergent fundamental premises characterize the various specialized disciplines, and even competing paradigms within each discipline, in addition to the intermingled cultures and ideologies of the international community and, to a greater or lesser degree, of each country and each industry. 

Reference List

This article is adapted from Annex 1 in:
POTVIN, J. 1992. Classification and Appraisal Criteria for Conservation Investments: A Proposed General Framework. Prepared under contract to The World Bank. http://sd-cite.iisd.org/cgi-bin/koha/opac-detail.pl?biblionumber=5755

  • ABADZI, H. 1990. Cognitive Psychology in the Seminar Room. EDI Seminar Paper No. 41. Washington D.C.: Economic Development Institute of the World Bank.
  • ALLEN, T.F.H. 1987. Hierarchical complexity in ecology: A non-euclidean conception of the data space. Vegetatio 69: 17-25.
  • ALLEN, T. & E.P. WILEYTO. 1983. A hierarchical model for the complexity of plant communities. Journal of Theoretical Biology 101: 529-540.
  • ARROW, K. 1971 [1984]. The value of and demand for information. Reprinted in: The Economics of Information: Collected Papers of Kenneth J. Arrow. London: Basil Blackwell. 106-114. 
  • BELSHAW, D. 1981. A theoretical framework for data-economising appraisal procedures with applications to rural development planning. IDS Bulletin (Special Issue on Rapid Rural Appraisal) 12: 12-22. Sussex: Institute for Development Studies (October).
  • BRUNER, J. 1979. Decision-making as a discourse. In: BELL, C. (Ed). Uncertain Outcomes. Lancaster, England: MTP Press. 
  • CHU, D. 2011. Complexity: against systems. Theory Biosci. DOI 10.1007/s12064-011-0121-4. Springer-Verlag. http://www.cs.kent.ac.uk/people/staff/dfc/site/mypublications/againstSystems.pdf
  • COHEN, P.R. 1985. Heuristic Reasoning About Uncertainty: An Artificial Intelligence Approach. Boston: Pitman Advanced Publishing Program.
  • De ANGELIS, D.L. et al. 1989. Nutrient dynamics and food web stability. Annual Review of Ecology and Systematics 20: 71-95.
  • DRUCKER, P. 1989. The New Realities. New York: Harper & Row.
  • GLEICK, J. 1987. Chaos: Making a New Science. New York: Viking.
  • HOLLING, C.S. 1986. The resilience of terrestrial ecosystems: Local surprise and global change. In: CLARK, W.C. and R.E. MUNN (eds). Sustainable Development of the Biosphere. Cambridge: Cambridge University Press, for the International Institute for Applied Systems Analysis (IIASA). 293-317.
  • JAYNES, E.T. 1979. Where do we stand on maximum entropy?. In: R. LEVINE and M. TRIBUS (eds). The Maximum Entropy Formalism. Cambridge: MIT Press. 15-118. 
  • KAHNEMAN, D. & A. TVERSKY. 1982. The simulation heuristic. In: D. KAHNEMAN & A. TVERSKY (Eds). Judgement Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
  • KAY, J. 1991. A nonequilibrium thermodynamic framework for discussing ecosystem integrity. Environmental Management 15: 483-495.
  • KEYNES, J.M. 1921. A Treatise on Probability. Cambridge: Cambridge University Press.
  • KEYNES, J.M. 1936. The General Theory of Employment, Interest and Money. Cambridge: Cambridge University Press.
  • LEVITT, J. 1991.  On Management. Cambridge, Mass.: Harvard University Press.
  • McDONOUGH, A. 1963. Information Economics and Management Systems. New York: McGraw Hill.
  • NICOLIS, G. & I. PRIGOGINE. 1989. Exploring Complexity. New York: W.H. Freeman.
  • NICOLIS, G. & I. PRIGOGINE. 1977. Self-Organization in Non-Equilibrium Systems. New York: J. Wiley & Sons.
  • SELLERS, P. et al. 1986. A simple biosphere model (SiB) for use within general circulation models. Journal of the Atmospheric Sciences 43: 505-531.
  • SHANNON, C. 1948. A mathematical theory of communication. Bell System Technical Journal 27. Reprinted in: C. SHANNON & W. WEAVER. The Mathematical Theory of Communication. Urbana: University of Illinois Press. 21-56.
  • SIMON, H. 1990. Prediction and prescription in systems modelling. Operations Research 38: 7-14.
  • ULANOWICZ, R. 1986. Growth and Development: Ecosystem Phenomenology. New York: Springer-Verlag.
  • WARE, H. 1986. Improving statistics and indicators on women using household surveys. Draft working paper prepared on contract for the International Research and Training Institute for the Advancement of Women (Santo Domingo), and the United Nations Statistical Office (New York).