I have been intrigued for quite some time by the search for the basis of a theory, a law, or simply a hypothesis. In today’s world of information explosion, every event, and the information associated with it, brings a two-sided view, the two sides almost diametrically opposed to each other. If that becomes the logical framework within which arguments are developed and data are vetted, the denouement is more likely to be skewed towards a probabilistic expression rather than a definitive one, or what Keynes called ‘demonstrative certainty’ or ‘conclusive’ certainty in his book ‘A Treatise on Probability’. I have observed with some dismay the apparent lack of concern for this probabilistic expression, and for the adjustments that need to be made around it, when businesses and economies set out to do their short- and long-term planning.

This leads me to believe that something is inherently missing in today’s understanding of what constitutes a rational and logical premise behind the framing of a hypothesis or a judgment, whether in a scientific or a non-scientific area. Cognitive biases pose grave risks when such judgments are applied in areas where they were never meant to demonstrate certainty of results; the experimental and probabilistic nature of some of the base work leaves a lot to be desired if a treatise or hypothesis is to be separated from mere conjecture. More on this as we proceed to unravel how some of the basic laws of science have been challenged by new knowledge and by an open mind of enquiry, both vital ingredients of the human spirit of enquiry. It is this spirit that separates the personal nature of scientific bias from the universal and nobler nature of enquiry, which does not necessarily seek an outcome that is either conclusive or inconclusive.

So we go back to the very foundation of knowledge, which makes us believe that an axiom or a hypothesis must rest on certain principles: of cause and effect, of provability, or simply of an empirical relationship. But the core principle must be what Karl Popper proposed: that it be falsifiable. This, however, makes every discovery subject to the limited nature of human knowledge and to the uncertainty of every denouement that is pending further experimental proof. It makes ‘Truth’ only partial, for the falsifiable nature of its existence renders it rather fragile. In the search for truth there is an element of skepticism that was first brought to the fore by Hume in ‘A Treatise of Human Nature’. In much of the scientific research that deals with data, computation and logical reasoning, we cannot avoid the basic principles of scientific logic, which rest on inductive reasoning, on deductive methods, or on the probabilistic testing of a hypothesis.

The core of the problem of inductive reasoning was highlighted by Hume in ‘A Treatise of Human Nature’, where he remained skeptical till the very end of induction as a method of reasoning; induction itself could not be proved (otherwise the argument would be circular), and it had severe limitations as a basis for scientific logic or theories. He started with an explanation of cause and effect and then detailed the role played by the repetitive instances used in induction, as follows: “Suppose two objects to be presented to us, of which the one is the cause and the other the effect; it is plain, that from the simple consideration of one, or both these objects we never shall perceive the tie by which they are united, or be able certainly to pronounce, that there is a connection betwixt them. It is not, therefore, from any one instance, that we arrive at the idea of cause and effect, of a necessary connection of power, of force, of energy, and of efficacy. Did we never see any but particular conjunctions of objects, entirely different from each other, we should never be able to form any such ideas.

“But again; suppose we observe several instances, in which the same objects are always conjoined together, we immediately conceive a connection betwixt them, and begin to draw an inference from one to another.

This multiplicity of resembling instances, therefore, constitutes the very essence of power or connection, and is the source from which the idea of it arises. In order, then, to understand the idea of power, we must consider that multiplicity.”

This leads us to the definition of inductive reasoning as ‘inference from particular instances’.

This is demonstrated by John Vickers as:

1. a1, a2, …, an are all Fs that are also Gs.
2. an+1 is also an F.
3. Therefore, an+1 is also a G.

I go back to Hume and acknowledge the deep skepticism he had about this process of reasoning. He adequately cautioned us against the wrong interpretation of cause and effect with a simple example of heat and pleasure, given while enumerating his eight postulates of cause and effect; his seventh postulate runs as follows: “When any object increases or diminishes with the increase or diminution of its cause, it is to be regarded as a compounded effect, derived from the union of the several different effects, which arise from the several different parts of the cause. The absence or presence of one part of the cause is here supposed to be always attended with the absence or presence of a proportionable part of the effect. This constant conjunction sufficiently proves, that the one part is the cause of the other.

We must, however, beware not to draw such a conclusion from a few experiments. A certain degree of heat gives pleasure; if you diminish that heat, the pleasure diminishes; but it does not follow, that if you augment it beyond a certain degree, the pleasure will likewise augment; for we find that it degenerates into pain.” If I put this in the schematic form above: let a1, a2, …, an be successive increments of heat (the Fs), and let G be the attendant pleasure in a cold surrounding. Every increment raises the pleasure up to a certain point, and past that point the effect reverses into pain; thus an+1 need not also be a G, once the point n marks the threshold of pleasure.
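Hume’s caution can be put in a small computational sketch. The inverted-U “pleasure” function below is purely hypothetical, a stand-in for the relationship Hume describes; the point is only that a pattern confirmed by every observed instance can still fail at the very next one.

```python
# A sketch of Hume's heat-and-pleasure caution. The response function is
# a hypothetical inverted U, not a measured relationship.

def pleasure(heat):
    """Rises with heat up to a threshold, then degenerates towards pain."""
    threshold = 10
    return -(heat - threshold) ** 2 + threshold ** 2

# a1 ... an: every observed increment of heat lies below the threshold
observed = [pleasure(h) for h in range(1, 10)]
# Induction from these instances alone: "each increment raises pleasure"
confirmed_so_far = all(b > a for a, b in zip(observed, observed[1:]))  # True

# Yet beyond the threshold the constant conjunction breaks down
beyond = [pleasure(h) for h in range(10, 20)]
still_rising = all(b > a for a, b in zip(beyond, beyond[1:]))          # False
```

Every instance up to n confirms the inference, and the instance n+1 refutes it; no multiplicity of resembling instances could have ruled this out in advance.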

Thus we come to the most critical observation by Hume in his later statements, which I have carefully excerpted from his immortal Treatise: “There is no Algebraist or Mathematician so expert in his science, as to place entire confidence in any truth immediately upon his discovery of it, or regard it as anything, but a mere probability. Every time he runs over his proofs, his confidence increases; but still more by the approbation of his friends; and is raised to its utmost perfection by the universal assent and applauses of the learned world. Now it is evident, that this gradual increase of assurance is nothing but the addition of new probabilities, and is derived from the constant union of causes and effects, according to past experience and observation.


“In all the incidents of life we ought still to preserve our skepticism. If we believe, that fire warms, or water refreshes, it is only because it costs us too much pains to think otherwise. Nay if we are philosophers, it ought only to be upon skeptical principles, and from an inclination, which we feel to the employing ourselves after that manner. Where reason is lively, and mixes itself with some propensity, it ought to be assented to. Where it does not, it never can have any title to operate upon us.”

This skepticism of Hume was put to general scrutiny by the brave Karl Popper when he brought in the concept of “falsifiability”. It brought to the fore the thought that science is merely a search for truth, never an end in itself; it can never declare that a truth has finally been reached, cast in stone, incapable of being altered in any form. But before I delve into Popper, I would like to summarize some of my personal thoughts on the subject, for which I would again eulogize Hume for guiding me to them. In every observation we are guided by our senses, and our senses are in turn guided by a mind that is sometimes not under the influence of reason. Our observations are therefore never perfect in their ability to resemble the exactness of what they stand for. It is somewhat like the quantum question, that position and momentum cannot both be measured simultaneously; our minds measure or observe to a certain degree of probability, but never as accurately as if the observation resembled what it had set out to study.

Thus, when large numbers of such observations are put to the test, there is bound to be variability from the original intent; this probabilistic nature of the mind’s ability cannot be ignored, and its impact on the final judgment is worth a thought. It is further compounded by the bias that minds carry. Some of this is inherent in the experience the mind has accumulated and in the extent of memory it can draw its conclusions from; there is, after all, a limit to memory, and the mind is therefore likely to reject a few observations, instances or experiences simply for want of memory space.

This aspect, too, is likely to have a bearing on the result. Bartley’s immortal book, “The Retreat to Commitment”, holds some glaring examples of the dilemma of ultimate commitment. The biggest impediment to any free thinking is the ‘belief’ embedded in the mind itself, before any hypothesis is even formed through observation or data gathering. Some of these beliefs tend to influence the very hypothesis, and the gathering of the instances or data that would later prove that hypothesis. I have found many managers influenced by their beliefs, which they later confirm by selecting data carefully from a random pool. The predilection of the mind towards certain instances in preference over others is likely to have a bearing on the result, and this is the biggest drawback of inductive reasoning; scientific truth cannot be verified by mere induction, whereas the logic of falsifiability makes us much more comfortable when we deal with testing.

The simplest example is the case of a pair of statements. “Man is mortal”: this is not falsifiable, for we cannot produce the one person who is not mortal; we may have millions of examples of people dying to prove that man is mortal, but that is never good enough, since there always remains the chance, however low its probability, that someone could turn out to live forever. “Man is immortal”: this is easily falsifiable, as a single death proves the statement wrong. Popper’s moot point is: “The criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.” While a mathematical statement such as 2 + 2 = 4 is never falsifiable, the contrast gives us a deep insight into the nature of the problem of science: a statement that can be falsified, like 2 + 2 = 5, is one we can believe or reject on a more intelligible and scientific footing than one that cannot be falsified at all.

I will now take examples from economics to show how many of its theories are not falsifiable and must therefore be held with due skepticism as to the nature of their applicability. I have found humility much in want among the great economic and political thinkers, from Adam Smith to current pundits in the likes of Krugman, although I personally admire all of them for their learned and systematic approach; their honesty in portraying their version of truth is beyond doubt.

Economics owes much to the empiricism of Smith, Ricardo and Malthus, although all three demonstrated a bias towards the examples they pursued, and did not venture to take the weaker cases as part of their general observations, preferring instead to treat them as special. This tendency to treat similar outputs, stemming from the same market conditions, differently is a problem that has continued all along, down to the times of Friedman.

The benign example of the butcher with which Smith starts in the Wealth of Nations, extended as a general gospel of truth to explain the role self-interest plays, is a sweeping generalization of what constitutes an economic activity. To say that the invisible hand of the market makes things work, so that buying and selling can continue unimpeded, is another example of the same cause. These were right examples, but one-sided ones, valid under specific conditions that may not be replicable in others. Acknowledging this could have sent the study of economics in the direction that biology took after Darwin; the difference lay simply in the humility that the latter demonstrated and the former had much in want. The lack of self-doubt in many economists, and its presence in the biologists, led to sweeping discoveries in the latter field. Economics has progressed, but much to its discredit the aftermath of deep crises left economies in the lurch, facing eventual bail-outs at huge cost to the tax-payer. No economist could either predict or change the course of economies even when knowledge and theories were in plenty to guide them; it seemed instead that knowledge and theories followed the events, learning from them.

The difficulty with economics is that nothing it says is falsifiable, almost like the ancient astrological predictions. This only underlines the need for humility rather than arrogance, the latter having become the dominant part of every economist’s staple of virtue. Here we must acknowledge the commitment one must first demonstrate towards impartiality and the absence of bias; and here one has to allude to the fact that economists are all right, but only on the average, for I have never found any single average prediction wrong (every statistic is predicted rightly on an average basis, with one economist predicting the highest and another the lowest, the upper and lower bounds neutralizing each other over the sample of data points).
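The parenthetical above can be illustrated with invented numbers (no actual forecasts are being quoted): individually wide-of-the-mark forecasts, whose upper and lower bounds cancel each other, look flawless “on the average”.

```python
# Illustration with made-up numbers: the consensus is exactly right
# while no individual forecast is.
actual_growth = 2.0                           # hypothetical outcome (percent)
forecasts = [3.5, 0.5, 3.0, 1.0, 2.5, 1.5]    # one predicts highest, one lowest

average = sum(forecasts) / len(forecasts)             # 2.0
errors = [abs(f - actual_growth) for f in forecasts]  # each at least 0.5

print(average == actual_growth)   # True: right "on the average"
print(min(errors))                # 0.5: yet every single forecast missed
```

The averaging hides exactly the variability that a skeptical reader would want to see.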

The very role of statistics in economic forecasting is itself on weak ground, simply because the samples are never a true representation of the population, and when it comes to the human interpretation of the data, the sky is the limit. From Say’s law (supply creates its own demand), to the Phillips curve relating inflation and unemployment, to the more recent Efficient Market Hypothesis, we see economists empirically stating what can never be falsified. This approach of sticking to induction, of believing self-proclaimed gospels on whatever strong or weak correlation with a series of instances while ignoring those that could pose a potential challenge, pervades the institutions of learning in this field.

However, the very nature of the results, in economies smaller or greater, in every area micro or macro, leaves us with the sobering thought that only through the unbiased approach of humble scrutiny can any further progress be made. To raise economics to the level of a scientific subject, it is left to the economists to display the inherent human capacity to subject themselves to skepticism, and to the humility that comes with it.

The recent saga of Reinhart and Rogoff started with a benign inquiry into debt levels and their impact on growth, and later drew the community into a debate over whether the presentation of the data, and the statistics embedded in the research, had more to do with the inference than the method used to arrive at it. The anchoring on the 90% debt level attracted the obvious scrutiny, and we know the eventual outcome. It leaves us with the realization that we need to move away from definitive inferences and resort instead to probabilistic estimates of a best fit, leaving the next researcher to get closer to the truth. The world is getting more complex, and there is always some information that we are missing; it is time we all realized that.
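One way to “resort to probabilistic estimates of a best fit” is to report an interval rather than a point claim. The sketch below bootstraps a confidence interval for mean growth from synthetic data; the numbers are invented and make no claim about the actual Reinhart and Rogoff figures.

```python
import random

random.seed(7)
# Synthetic stand-in for growth observations in high-debt country-years
growth = [random.gauss(1.0, 2.0) for _ in range(200)]

def bootstrap_mean_interval(data, n_resamples=2000, alpha=0.05):
    """Bootstrap a (1 - alpha) confidence interval for the mean."""
    means = sorted(
        sum(random.choice(data) for _ in data) / len(data)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

lo, hi = bootstrap_mean_interval(growth)
# Report "mean growth lies in [lo, hi] with ~95% confidence" instead of a
# single definitive number, leaving room for the next researcher.
```

A statement of this kind concedes its own uncertainty up front, which is exactly the “gradual increase of assurance” Hume described.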

Procyon Mukherjee

22nd April 2013

Reversion To Skepticism And The “Gradual Increase Of Assurance”, To Get Nearer To Truth: Lessons From The Reinhart & Rogoff Saga
