
Definition of the Situation April 24, 2014

Posted by larry (Hobbes) in Culture, Frame Analysis, Philosophy, Psychology.
1 comment so far

The concept of the definition of the situation originates with William Isaac Thomas, possibly in Thomas and Florian Znaniecki’s The Polish Peasant in Europe and America (1918-1920).  The concept was given new vigor and poignancy in the fifties by Erving Goffman in his study of roles in face-to-face social interaction in The Presentation of Self in Everyday Life (1956).  Goffman agreed to a significant extent with what has become known as the Thomas theorem (1928): “If men define situations as real, they are real in their consequences”.

Most situations bring their histories along with them.  To do well in a situation, it pays to know something of its history.  And to successfully redefine a situation, it may be essential to know that situation’s history.  Sometimes the history can be daunting and intimidating thereby rendering the situation daunting and intimidating.


The concept of the definition of the situation is closely related to frame analysis, although this relationship is not always explored, to the detriment of both approaches, I think.

Varoufakis on corrupt statistics in respect of Greek debt April 24, 2014

Posted by larry (Hobbes) in Abuse of power, economics, Jefferson Airplane, Statistics.
add a comment


Varoufakis begins his post with this graphic, which reminds me of a song by Jefferson Airplane. Here they are appearing on The Smothers Brothers show. A hilarious TV program. It begins with White Rabbit but continues with (Do You Want) Somebody to Love whose first line is “When the truth is found to be lies”. The following line: “All the joy within you dies”. Sometimes I wonder whether members of the elite and their economic lackeys have been taking some of Alice’s pills.

[In case the video is not shown, here is the link: https://www.youtube.com/watch?v=hnP72uUt_pU]

Varoufakis argues that the statistics from Eurostat showing that Greek debt isn’t as great as it was thought to be are little more than a political ploy by the Euro elite in order not to upset the upcoming European elections in May. It is a short piece. You can read his argument for yourself.


Happy listening and reading.

Bank of England governor admits central MMT axiom April 24, 2014

Posted by larry (Hobbes) in economics, MMT, money.
add a comment

I gave a presentation recently at a Radical Statistics conference where one of my contentions was that money is nothing more than a legally binding promissory note, or as Randall Wray likes to refer to it, an IOU, much like it states on US bills themselves. Now, we have Carney, the Canadian BoE governor, saying exactly the same thing publicly.

The Bank of England, as reported by David Graeber, finally admits publicly what I contended in my talk at the RadStats conference, that the UK is not finance constrained and that money is just a legally binding promissory note [or as he says, an IOU]. As Graeber notices, this completely contradicts Osborne’s justifications of his austerity program, which it appears he may mention again in his budget statement tomorrow along with further comments about the [nonexistent] recovery. Labour’s problem is that There Is An Alternative [TIAA], MMT, which would completely undercut Osborne’s entire economic program, but they continually fail to utilize it.

Here is the link.


Update: it seemed that Osborne might mention what the BoE governor said about the nature of money in a contemporary economic system, but he didn’t. This is not surprising, as to do so would have seriously undermined his entire economic program. He and Cameron have since then, for what would seem to be obvious reasons, combined to form a double act extolling what they claim is an economic recovery. That there is no such thing, I take here as read.

Lakoff on his own frames April 24, 2014

Posted by larry (Hobbes) in economics, Frame Analysis, George Lakoff.
add a comment

Harry Feldman has brought to my attention that, on his reading of Lakoff, Lakoff fails to appreciate that he is himself speaking from within a particular frame and seems to be using it unanalytically. Quoting Harry directly, Lakoff

seems oblivious to the frames underlying what he himself says – that there is something he calls ‘American values’, that the Democrats are somehow ‘progressive’, that there is something natural about a ‘family’, nurturing or otherwise… Ultimately, the message appears to be that if ‘progressives’ are to defeat ‘conservatives’, they have to play by the conservatives’ rules by jettisoning rational argument and appealing to their perceived audience at a visceral level. It’s not at all clear how this differs from dishonesty.

To be frank, the phrase ‘American values’ puts me off some of Lakoff’s stuff. Which is why I have stuck with Goffman and others on this. (Perhaps it is apposite, parenthetically, to point out that we are not limited to Lakoff’s approach to frames. In addition to Goffman, there is also Kahneman and Tversky’s Choices, Values, and Frames, which I heartily recommend. Not all the articles in that volume are relevant to our concerns here, but some are.)

Harry’s point about Lakoff’s obliviousness seems well taken, but I alter Lakoff’s approach by marrying the two, the so-called emotional and the evidential, including the general framework. Or try to, anyway. For Lakoff’s emotional, however, I prefer the term conceptual, and its cousin, conceptualization, which are both more general and more specific. Since data isn’t value-free, that is, since it comes with conceptual baggage and, as Lakoff would no doubt point out, a concomitant emotional commitment, it is not going to be sufficient, in the fight we have at hand against the neoclassical economic worldview, to attack the data alone. The framework in which the data has been encased must also be undermined. (This is, I think, part of Galbraith’s point about Piketty’s interpretation, or framing, of his data.)

Balls, who I think is a good example of how not to do things, had a piece in the Guardian, on 14 April, and a more vacuous set of utterances I have yet to see. All fluff and no substance. Here is the link: http://www.theguardian.com/commentisfree/2014/apr/14/labour-party-cost-of-living-crisis

If I may, possibly, over-generalize just a tad, philosophers of a certain stripe think the best way of disarming the other guy is to show that his argument leads to contradiction, while certain psychologists think the best way of disarming the other guy is to undermine his value assumptions. Both approaches, however, require some sort of “factual” information to work with or they won’t fly. With Balls there is little or nothing to work with. And, in addition, he assumes that the other guy’s framework is the way to “frame” the debate. There is a way of doing this that might work but Balls isn’t doing that. In the case at hand, the attack on the NHS, the benefit “reforms”, the general attack on the poor, and the like, Balls has so far discussed the situation in ways laid out by the Tories. He needs to re-frame the debate. But since he basically accepts the way in which they have laid out the terms of the debate, he may be incapable of doing that.

Since their view of the economy, or Osborne’s anyway, is at the root of everything they are doing, it has not been sufficient to attack their lack of evidence for the policies and programs they have introduced. The way they view the economic system must also be shown to be completely mistaken. This involves a conceptual reorientation, or as Lakoff would say, a re-framing of the debate, root and branch. If this can be done successfully, the data themselves can be seen in a new light and acquire new relevance. While you can’t do without data or evidence, more is needed, and this is why I say: data isn’t enough.

Only by marrying the conceptual with the evidential can Osborne’s way of viewing society and how it works be placed in the coffin where it belongs and buried in the ground along with the rest of such conceptual detritus. Perhaps we may then regain some control of the debate.

I realize that I haven’t directly addressed Lakoff’s way of doing things. But I hope I have shown how we can avoid the pitfall that Harry points out may lie in Lakoff’s path by altering the frame framework (!?) in certain ways. Goffman’s and Kahneman and Tversky’s approaches aren’t so subject to the pitfall in question.

As an aside, related to all this is the notion of the “definition of the situation”, so well discussed by Goffman so many years ago. While highly relevant, this must be left for another occasion.

The Commercialization of Everything April 22, 2014

Posted by larry (Hobbes) in economics.
add a comment

George Monbiot has brought up once again the essential issue of the commercialization of the environment, which is part and parcel of the neoclassical economic worldview. In his discussion, he mentions an interview of George Lakoff done by the Guardian last February. Lakoff states outright why he thinks the Liberals lose and will continue to lose every time against the conservatives. I will not summarize the arguments of either, but Lakoff’s argument is more important than ever, and the Labour Party and the TUC and others are failing to get it. It isn’t about facts or money, it is about moral perspective. Whether you agree with Lakoff or not, it is now essential to engage with his argument.



The interview refers to Lakoff’s book, Don’t Think of an Elephant: The Essential Guide for Progressives (2004). Since Lakoff’s argument relies on what are known as frames, ways of framing a point of view or an argument, I think it apposite to mention one of the first works in this area, that of the late Erving Goffman’s Frame Analysis (1974). It is unlikely that Lakoff is not aware of the book, but Monbiot has never mentioned Goffman with respect to this issue. I have not myself read this particular book of Lakoff’s, but if you have read Goffman, you may not need to.

Frame analysis, though it can be tricky in practice, is conceptually quite straightforward. It isn’t going to be sufficient for Miliband & Co. to have read Piketty’s Capital; they had better read Lakoff in addition and frame their ethical stance, allied to the data, without worrying about what the Tories are saying. It is data married to ethics that is required, not simply pointing out the other chap’s empirical errors. These errors must be placed in an appropriate frame of reference.

Balls’ thinking has been captured by the alternative party’s position, as shown by virtually every statement he makes. He just tries to make it more acceptable. It seems doubtful to me that he can change. But Miliband might have a chance, if he can make himself cognitively and emotionally independent of those around him. I realize that this is nontrivial but I don’t know what else can really work in turning around the cultural capture of the economic system and its ancillary institutions that has taken place by adherents of the neoclassical worldview over the past 30 or so years. For this set of vested interests, for whom data is an irrelevance, if you have seen one Redwood tree, you’ve seen them all (attrib. Reagan). A more dangerous perspective is difficult to imagine.


Utilizing frame analysis, one could argue that Galbraith’s criticism of Piketty lies in what Galbraith conceives to be a poor choice of frames of reference for analyzing and even cataloging his data. If Galbraith’s critique is accepted, this need not vitiate Piketty’s entire analysis; it only means that care must be taken in assessing some of his conclusions and possibly some of the operations he carries out on his data. Whatever its faults, Piketty’s analysis is of greater worth than the dynamic duo’s upcoming sideshow on the so-called British economic recovery. Claims adduced there will be either mendaciously inaccurate or at best misleading. Piketty is at least attempting to unravel what might be conceived to be the truth surrounding the issue of economic inequality.

Piketty’s new book, Capital in the 21st Century April 22, 2014

Posted by larry (Hobbes) in economic inequality, economics, Piketty.
add a comment

Thomas Piketty’s new book, Capital in the 21st Century, is making more headlines than one might have thought possible for an economics text. Although it is accessible for someone with little economics training, it is data driven in a big way. The book is an historical account of economic inequality in the West over the past few hundred years. The most important period for our current purposes is the latter 19th century in the US. During that period, you had a similar concentration of wealth (i.e., assets) as is taking place now, but without the seemingly more or less complete capture of the political system that appears to be taking place now, at least in the US. This capture is one of the features of extreme wealth inequity that bothers Piketty.

On a biographical note, Piketty was at MIT and left that institution for Paris. He has said that one of the reasons he left was that he didn’t think American economists were that good, in general. UMKC and the University of Texas at Austin are two centers of heterodox economics in the US that are excellent. But Piketty wasn’t there. Then again, the French offer might have been too good to ignore.

Paul Krugman, not the most original or heterodox economist on the planet, was interviewed by Bill Moyers. Krugman makes his usual mistakes on money creation in the economy, but his remarks about Piketty’s results deserve to be heard. Don’t miss Moyers’ remarks following the interview. They are worth listening to as well. In this interview, Krugman admits that he never realized what Piketty, with Saez and others, has been researching and finding out for the past 10 years or so. Some Krugman critics will probably not be surprised. But, at least, he has admitted that he didn’t know. As probably most know, Piketty’s book is about economic inequality. And it is a big deal. One could argue that it is the source of all the other inequities to which almost everyone is subject, except for the 1% or the .1% who are getting wealthier all the time, every minute of every day. The disparity between them and the rest is, it could be said, astronomical.

[I have included the hyperlinks in case the embed codes do not work as they are supposed to, which will be the fault of WordPress.]


Moyers refers to this study of US inequality and the emerging oligarchy by Gilens and Page. The study is entitled Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens (April 2014).


Here is Krugman’s review of Piketty from the NY Review of Books.


But here is an interview of Piketty himself by CNN.


A colleague of mine, Julian Wells, reminded me of James Galbraith’s review of Piketty’s book which is eminently worth reading, although it is technical in places. (http://www.dissentmagazine.org/article/kapital-for-the-twenty-first-century) Julian makes the following points, which are worth noting.

Jamie G. marks Piketty down for a number of errors, of which the one that resonated most with me was his complaint that Piketty has what my friend Andrew Kliman has dubbed a “physicalist” perspective — essentially the misconception that capitalist production is at bottom about producing things, whereas its actual objective is the production of capital.

Worse still, Galbraith says, Piketty’s attempt to measure physical capital is also incoherent (essentially because any one-dimensional measure must be a financial one).

Naturally various further errors in analysis flow from the above, and culminate in utopian policy prescriptions of a social democratic kind.

Unfortunately, I neglected to mention it along with Krugman’s comments. Yes, one must distinguish between Piketty’s data and the various levels of interpretation and explanation that he imputes to it. I don’t know of anyone who rejects the most important implication of his data, that of wealth (i.e., assets) tending to concentrate in the hands of a few. Even Galbraith himself talks of a new class war, although this “war” is not between the classes of industrialists and workers as in Marx’s day but between the 1% and everyone else, though he doesn’t express it quite like that in his The Predator State.

Galbraith’s review of Piketty’s book is the most critical review I have yet come across and Julian is right to point out its importance to this debate, which has now entered the public arena, at least to some degree. Gilens and Page, in their study, whose methodology is not the greatest, have obtained data that also supports this contention of wealth concentration, preferring, however, to call this phenomenon “economic elite domination” rather than “oligarchy”, which John Cassidy in The New Yorker (Is America an Oligarchy?) contends is due to the former term being less pejorative in its connotations.

Of course, such wealth concentration is not new. It took place toward the end of the 19th century. When asked by Moyers why that didn’t lead to the result to which it seems to be leading today, Krugman contends that this was because the economy as a whole was growing so fast that the wealthy could not “get a lock” on the system. That is, wealth and its concomitant concentration were not growing as fast as general GDP. This supports Piketty’s point in this respect: wealth is currently growing faster than general GDP, i.e., it is outpacing it. Piketty’s remedy for this is progressive taxation, in this case taxation used for redistributive purposes, as opposed to its other purposes (currency legitimation, influencing the direction and degree of spending, etc.).
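The mechanism at issue here, the return on wealth outpacing general growth, can be illustrated with a toy compound-growth sketch in Python. The parameter values below are my own illustrative assumptions, not Piketty’s data:

```python
# Toy compound-growth sketch; the numbers are illustrative assumptions,
# not Piketty's data.
def wealth_to_income_ratio(r, g, years, w0=3.0):
    """Wealth compounds at return r, income grows at rate g; return the ratio."""
    wealth, income = w0, 1.0
    for _ in range(years):
        wealth *= 1 + r
        income *= 1 + g
    return wealth / income

# r = 5% return on capital vs g = 1.5% growth, over 50 years: the ratio
# rises to roughly five times its starting value of 3.
print(round(wealth_to_income_ratio(0.05, 0.015, 50), 2))
```

With r = g the ratio stays put; any persistent gap between r and g compounds, which is the point Krugman concedes about the late 19th century.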

However, even J. P. Morgan’s saving of the banking system from itself in the crash of 1907 could not have been achieved without the assistance of Teddy Roosevelt (the “Great Trustbuster”), i.e., the government, something Morgan himself at the time, and bankers since, have preferred to forget. A good, brief discussion of this episode can be found in Nomi Prins’s All the President’s Bankers.

The upshot seems to me to be this: whether or not one agrees entirely with Piketty’s own interpretation of his data, he is bringing empirical data on economic inequality and its debate into the public arena and public discourse and forcing even economists like Krugman to face what many now consider to be a “fact”, that economic inequality is an objective reality and not simply a matter of where one stands ideologically*.

Even Cameron appears to feel he has to respond to the influence of Piketty’s work, although his response, so far, has been extremely weak and vacillating. Rumor has it that the Miliband camp is reading and digesting the book.

Nevertheless, as Galbraith points out, one must be careful about Piketty’s own treatment of his data. Is this a subtlety that might perhaps escape the Cameron and Miliband camps, neither of which has so far conspicuously exhibited this particular trait in any substantial degree?


* I realize that my argument here is enthymematic and thereby incomplete, but to go into it further here would take us too far afield.

Piketty & the recent US Supreme Court decision April 3, 2014

Posted by larry (Hobbes) in economics, Justice, Philosophy.
add a comment
Uh oh!
It is looking worse for the 99% every week in the US, and eo ipso potentially in the UK. The piece linked to is by a law professor at the University of Colorado at Boulder. Piketty’s Capital in the 21st Century gets a mention in this context, and it is relevant. I will quote from Campos: Piketty argues that

“…[I]n the long run, the return on capital tends to be greater than the growth rate of the economies in which that capital is located.

What this means [concludes Campos] is that in a modern market economy the increasing concentration of wealth in the hands of the already-rich is as natural as water flowing downhill, and can only be ameliorated by powerful political intervention, in the form of wealth redistribution via taxes, and to a lesser extent laws that systematically protect labor from capital. (Piketty argues that, because of historical circumstances that are unlikely to be repeated, this sort of intervention happened in the Western world in general, and in America in particular, between the First World War and the early 1970s.)”

This is as pessimistic as it gets. What is not mentioned in most discussions of this outcome is the view put forward a few years ago by Republicans that corporations are individuals, that is, people. This is a logical howler of immense proportions, but in this context, such fine distinctions seem to be irrelevant. It is this view that partially underpins the Supreme Court decision that, just like people, corporations have free speech rights that need protection. This apparently includes the way that their CEOs and board members spend the corporation’s money.

Piketty’s data is extensive and runs for hundreds of years. It is a door-stopper of a book, around 600 pages of text and 100 pages of notes, not including the material Piketty has placed online. One could be forgiven for concluding that the greater the economic inequality, the greater the chance of political plutocracy, and that this is the inevitable consequence of the political implementation of neoclassical economic principles. This relationship seems to me inevitably to entail that politics cannot be divorced from economics, contrary to a central tenet of the neoclassical paradigm. It seems to follow, therefore, as the night the day, that since politics can never be value-free, the idea that economics can be, as claimed by the neoclassical econs, is a non-starter.

I would like to think that this constitutes another nail in the coffin of the neoclassical paradigm, but it doesn’t look like it. Despite the mounting evidence against it, the dominant economic paradigm rumbles on with hardly a hesitation along the way.

Also have a look at this and the links therein: http://www.slate.com/blogs/moneybox/2014/04/02/wealth_inequality_is_it_worse_than_we_thought.html. It is about new research by Saez and Zucman on US wealth disparity. As has been pointed out previously by researchers, it isn’t the 1% who are the biggest gainers from this financial crisis, it is the 0.1%, the 1/10th of 1%.

Varoufakis on game theoretic analysis of asymmetric expectations among the powerful & the powerless March 22, 2014

Posted by larry (Hobbes) in economics, Game Theory, Psychology, social justice.
add a comment

This is for those who have some experience, knowledge or interest in game theoretic analyses of social interaction. Varoufakis is a political economist who is intimately familiar with game theory, having written a book about it with Hargreaves Heap some years ago. Economically, this research falls within the domain of what has become known as microeconomics, as distinct from macroeconomics: it focuses on social group expectations, not the processes of the economic system as a whole.

There are links in the paper to more detailed and mathematical discussions of the issues. I don’t think that the PowerPoint slides are self-explanatory, as they assume more understanding of the mathematics underlying game theory than is perhaps possessed by the general population. But the conclusions Varoufakis reaches require no math to understand. They are:


There are a number of books and articles dealing with applications of game theory to biological situations, one famous one by the late John Maynard Smith, Evolution and the Theory of Games, mentioned by Varoufakis. I think I should point out that Richard Lewontin, the population geneticist, has written that he thought that the appropriation of game theory to evolutionary contexts altered the character of the theory to such an extent that it ought to be called something other than “evolutionary game theory”. History and social convention have made the decision for him. The appellation has stuck.

To be fair to Varoufakis, he doesn’t think that these results are extraordinarily original, though they would seem to be unknown to mainstream economists, whose theoretical paradigm Varoufakis is concerned to attack. And he feels that one of the best ways of doing this is from within, as it were, a tactic utilized successfully by logicians for centuries to attack positions they don’t (didn’t) like. (Have a look at the way Socrates, as set out by Plato, operates.)

Here is the link to a non-technical discussion of this topic, including links mentioned.



Getting back to my simple example of being able to breathe, we know that the presence of oxygen is a necessary though not sufficient condition for breathing. We also know that the presence of CO2 is a necessary but not sufficient condition for breathing.

So, here we have two necessary conditions for breathing. With B for breathing, O for oxygen, and C for CO2, we have (if B, then O) and (if B, then C). Since both conditions must hold for a person to be able to breathe, we have (if B, then C & O).

Realizing that the real system is more complex than this discussion suggests, can we nevertheless go on to contend that the presence of oxygen and CO2 are individually necessary and jointly sufficient for breathing? That is, that B iff C & O? How should we then interpret this equivalence? As a kind of “law of breathing”, a kind of scientific regularity, or as a definition of the conditions of being able to breathe?
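Since these claims are purely propositional, they can be checked mechanically. A short Python sketch of my own enumerates every truth assignment, confirming that the two necessary conditions together amount to (if B, then C & O), while also showing why the jointly-sufficient direction is a genuinely further claim:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# B = breathing, O = oxygen present, C = CO2 present.
# The two necessary conditions (B -> O) and (B -> C) together are equivalent
# to (B -> O and C) on every truth assignment...
for B, O, C in product([False, True], repeat=3):
    assert (implies(B, O) and implies(B, C)) == implies(B, O and C)

# ...but necessity alone does not give joint sufficiency: here is the
# assignment satisfying both necessary conditions where (O and C -> B) fails.
counterexample = [(B, O, C)
                  for B, O, C in product([False, True], repeat=3)
                  if implies(B, O) and implies(B, C) and not implies(O and C, B)]
print(counterexample)  # [(False, True, True)]: O and C hold but B does not
```

The counterexample is exactly the situation the biconditional rules out, so asserting B iff C & O is an additional substantive (or definitional) commitment, not a logical consequence of the two necessary conditions.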

This question may seem to be a triviality, but it arises in the discipline of macroeconomics all the time. When is a statement to be interpreted as a substantive assertion, which has a truth-value, and when as a definition of terms, which has no truth-value but is only more or less useful? Too many economic discussions are not at all clear about this issue. And it can make a difference to how you treat what they are saying.

Necessary & Sufficient conditions: A Medical Example March 22, 2014

Posted by larry (Hobbes) in Logic, Medicine, Science.
add a comment

A sufficient condition A for B is one where A being the case is sufficient for bringing about B.

A necessary condition B for A is one where if B is not the case, then A won’t be either.

Logically it looks like this. A is sufficient for B: if A, then B.

B is necessary for A: if not-B, then not-A.

Ex. of a necessary condition: oxygen (B) is necessary for being able to breathe (A). Therefore, if not-B, then not-A.

It is easier to come up with necessary conditions than with sufficient conditions. For instance, what is sufficient for being able to breathe?

A set of necessary and sufficient conditions for A is often considered to be equivalent to, or a definition of, A.

A slightly more realistic and complicated way of expressing this set of relationships is this. A is sufficient for D and B is necessary for D. This translates to (if A, then D) and (if D, then B). The contrapositive of each yields (if not-D, then not-A) and (if not-B, then not-D). It is relatively clear, I think, that A and B each have a distinct relationship to D (I am ignoring the issue of transitivity illustrated in the example). The potential complexity of this relationship is borne out in the following medical example from research into the dementias.
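These contrapositive equivalences can be verified by brute force. A small Python check of my own runs through every truth assignment:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# Verify on all eight truth assignments that each conditional agrees with
# its contrapositive: (A -> D) with (not-D -> not-A), and (D -> B) with
# (not-B -> not-D).
for A, B, D in product([False, True], repeat=3):
    assert implies(A, D) == implies(not D, not A)
    assert implies(D, B) == implies(not B, not D)
print("contrapositives agree on all 8 assignments")
```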

Here is a quote from a medical investigation of causes of Alzheimer’s and other dementias (from Daily Kos).

“Researchers have found that a protein active during fetal brain development, called REST, switches back on later in life to protect aging neurons from various stresses, including the toxic effects of abnormal proteins. But in those with Alzheimer’s and mild cognitive impairment, the protein — RE1-Silencing Transcription factor — is absent from key brain regions.”

“Our work raises the possibility that the abnormal protein aggregates associated with Alzheimer’s and other neurodegenerative diseases may not be sufficient to cause dementia; you may also need a failure of the brain’s stress response system,” said Bruce Yankner, Harvard Medical School professor of genetics and leader of the study, in a release.

While the situation is more complicated than the simple example I gave initially, the logic is the same.

From the quote, we have: protein aggregates A; failure of the brain’s stress response system B; presence of REST (= R); dementia D.

The second paragraph of the above quote may be saying that A & B is sufficient for D.

But from the first paragraph, we also have the absence of R (REST) as a necessary condition for D. I.e., if D, then not-R.

So, A&B is sufficient for D, hence (if A&B, then D). But, not-R is necessary for D. Or, equivalently, (if R, then not-D). I.e., R is sufficient for not-D.

Similarly as in the simple example: A, B, and R are related in a complex way to D, a relationship that is not entirely spelled out in the quote.
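One way to see how open the quote leaves things is to enumerate every truth assignment consistent with the two constraints it does state. In this Python sketch of my own, R stands for the presence of REST, (A & B) is taken as sufficient for D, and not-R as necessary for D, as above:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# Worlds (truth assignments) consistent with the two stated constraints:
#   (A and B) -> D : aggregates plus stress-response failure suffice for dementia
#   D -> not-R     : dementia requires REST to be absent
worlds = [(A, B, R, D)
          for A, B, R, D in product([False, True], repeat=4)
          if implies(A and B, D) and implies(D, not R)]

# The constraints leave the relation of R to A and B open: among consistent
# worlds, R (REST present) occurs both with and without A.
with_A = any(R and A for A, B, R, D in worlds)
without_A = any(R and not A for A, B, R, D in worlds)
print(len(worlds), with_A, without_A)
```

That both combinations survive shows formally that nothing in the quote settles whether R is a component of A or of B; further relationships would have to be supplied.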

We are, therefore, left with an important question: what is the status of A & B with respect to R? Is R a component of either A or B? The quote doesn’t link all these factors together. While this gap may be obvious from the quote, the logical situation underlying it may not be. Hopefully, it is now clearer what additional relationships need to be explored in order to lead to a better understanding of the dementias, particularly Alzheimer’s, and thereby to better control of their onset and progression, if not their complete defeat.

Bayes’ theorem: a comment on a comment March 10, 2014

Posted by larry (Hobbes) in Bayes' theorem, Logic, Philosophy, Statistics.
add a comment

Assume the standard axioms of set theory, say, the Zermelo-Fraenkel axioms.

Then provide a definition of conditional probability:

1. P(A|B) = P(A∩B)/P(B), which yields the identity

1a. P(A∩B) = P(A|B)·P(B), via simple algebraic cross multiplication.

Because set intersection is commutative, you can have this:

1b. P(A∩B) = P(B∩A), which equals

2. P(B∩A) = P(B|A)·P(A).

What we have here is a complex, contextual definition relating a term, P, from probability theory with a newly introduced stroke operator, |, read as “given”, so the locution becomes, for instance, the probability, P, of A given B. Effectively, the definition is a contextual definition of the stroke operator, |, “given”.

Although set intersection (equivalent in this context to conjunction) is commutative, conditional probability isn’t, which is due to the asymmetric character of the stroke operator, |. This means that, in general, P(A|B) ≠ P(B|A). If we consider the example of Data vs. Hypothesis, we can see that in general, for A = Hypothesis and B = Data, that P(Hypothesis|Data) ≠ P(Data|Hypothesis).

Now, from the definition of “conditional probability” and the standard axioms of set theory, which have already been implicitly used, we obtain Bayes’ theorem trivially, mathematically speaking, via a couple of simple substitutions: substituting 2 into 1 gives P(A|B) = P(B|A)·P(A)/P(B).
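The derivation can also be checked numerically on a small finite sample space. A Python sketch, where the counts are my own illustrative numbers:

```python
from fractions import Fraction

# A finite sample space of 100 equally likely outcomes (illustrative counts):
# A holds in 40 of them, B in 25, and A∩B in 10.
n, nA, nB, nAB = 100, 40, 25, 10

def P(k):
    """Probability of an event holding in k of the n equiprobable outcomes."""
    return Fraction(k, n)

p_A_given_B = P(nAB) / P(nB)   # definition 1: P(A|B) = P(A∩B)/P(B)
p_B_given_A = P(nAB) / P(nA)   # the same definition with the roles reversed

# Bayes' theorem, obtained above by substitution: P(A|B) = P(B|A)·P(A)/P(B)
assert p_A_given_B == p_B_given_A * P(nA) / P(nB)

# And the stroke operator is asymmetric: P(A|B) ≠ P(B|A) in general.
print(p_A_given_B, p_B_given_A)  # 2/5 and 1/4
```

Exact rational arithmetic via Fraction keeps the identity an equality rather than a floating-point approximation, and the two conditional probabilities coming out unequal illustrates the asymmetry of the stroke operator noted earlier.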


Or the Bayes-Laplace theorem, since Laplace discovered the rule independently. However, according to Stigler’s rule of eponymy in mathematics, theorems are invariably attributed to the wrong person (Stigler, “Who Discovered Bayes’ Theorem?”. In Stigler, Statistics on the Table, 1999).

Now, since we have seen that Bayes’ theorem follows from the axioms of set theory plus the definition of “conditional probability”, the following comments from a recent tutorial text on Bayes’ theorem can only be regarded as odd. The quote is from James V. Stone’s Bayes’ Rule: A Tutorial Introduction to Bayesian Analysis (3rd printing, Jan. 2014).

If we had to establish the rules for calculating with probabilities, we would insist that the result of such calculations must tally with our everyday experience of the physical world, just as surely as we would insist that 1+1 = 2. Indeed, if we insist that probabilities must be combined with each other in accordance with certain common sense principles then Cox (1946) showed that this leads to a unique set of rules, a set which includes Bayes’ rule, which also appears as part of Kolmogorov’s (1933) (arguably, more rigorous) theory of probability (Stone: pp. 2-3).

Bayes’ theorem does not form part of Kolmogorov’s set of axioms. Strictly speaking, Bayes’ rule must be viewed as a logical consequence of the axioms of set theory, the Kolmogorov axioms of probability, and the definition of “conditional probability”.

Whether Kolmogorov’s axioms for probability tally with our experience of the real world is another question. The axioms are sometimes used as indications of non-rational thought processes in certain psychological experiments, such as the Linda experiment by Tversky and Kahneman.  (For an alternative interpretation of this experiment that brings into question the assumption that people either do or should reason according to a simple application of the Kolmogorov axioms, cf. Luc Bovens & Stephan Hartmann, Bayesian Epistemology, 2003: 85-88).

A matter of interpretation

In the discussion above, the particular set theory and the Kolmogorov axioms mentioned and used were interpreted via the first-order extensional predicate calculus. This means that both theories can be viewed as not involving intensional contexts such as beliefs. The probability axioms in particular were understood by Kolmogorov and others using them as relating to objective frequencies and as applicable to the real world, not to beliefs we might have about the world. For instance, an unbiased coin and an unbiased die, in the ideal case admittedly, are considered to have a probability of .5 for a given side of the coin and 1/6 (or .1666) for a given side of a six-sided die, on each flip or throw of the object in question. In these two particular cases, it is only behavior observed over a long run of trials that can produce data showing whether our assumption that the coin and the die are unbiased is in fact true.
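That long-run behavior is easy to illustrate by simulation. A Python sketch of my own, with an arbitrary seed and trial count:

```python
import random

random.seed(2014)  # arbitrary seed, fixed only for reproducibility

# Long-run relative frequencies for an unbiased coin and an unbiased
# six-sided die; over many trials they settle near .5 and 1/6.
N = 200_000
heads = sum(random.random() < 0.5 for _ in range(N)) / N
sixes = sum(random.randrange(1, 7) == 6 for _ in range(N)) / N
print(round(heads, 3), round(sixes, 3))
```

A biased object would show up here as a stable frequency away from .5 or 1/6, which is exactly the frequentist sense in which the data bear on the unbiasedness assumption.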

Why does this matter? Simply because Bayes’ theorem has been interpreted in two distinct ways – as a descriptively objective statement about the character of the world and as a subjective statement about a user’s beliefs about the state of the world. The derivation above proceeds from two theories that are considered to be non-subjective in character. One can then reasonably ask: where does the subjective interpretation of Bayes’ theorem come from? Two answers suggest themselves, though these are not the only ones. One is that Bayes’ theorem is arrived at via a different derivation than the one I considered, relying, say, on a different notion of probability than Kolmogorov’s. The other is that Bayesian subjectivity is introduced by means of the stroke (or ‘given’) operator, |.

Personally, I see nothing subjective about statements concerning the probability of obtaining an H or a T on the flip of a coin being .5, or that of obtaining one particular side of a six-sided die being .166. These probabilities are about the objects themselves, not about our beliefs concerning them. Of course, this leaves open the possibility of alternative interpretations of probability in other contexts, say the probability of guilt or non-guilt in a jury trial. Whether the notions of probability involving coins or dice are the same as those involving situations such as jury trials is a matter for further debate.
