
Austerity, economic inequality, & social unrest dance together in an intertwined destiny August 15, 2011

Posted by larry (Hobbes) in economics, interpretation, Justice, social policy.

There have been a number of discussions of the possible relations among economic inequality, social unrest, and the economic austerity programs that are so popular with Western governments. Let me begin by listing articles that offer slightly different interpretations of this complex issue, so that there is a shared information base on the relationship between austerity and social unrest.

Hutton & Porter: On the UK riots in Sunday’s Observer;

Washington’s blog: Austerity and runaway inequality lead to violence;

Polly Toynbee: Moral outrage at rioters fixes nothing;

Larry Elliott: We've been warned: the system is ready to blow.

All of the above provide an explanatory context for the riots, but one that is not embedded in an explicit political-economic framework. Elliott, as economics editor of the Guardian, states explicitly that

“Lesson number one is that the financial and social causes are linked. Lesson number two is that what links the City banker and the looter is the lack of restraint, the absence of boundaries to bad behaviour. Lesson number three is that we ignore this at our peril.”

Bill Mitchell (a proponent of Modern Monetary Theory (MMT), which is a descendant of Keynes and Kalecki, Lerner, Kaldor, and Minsky, with a little Marx thrown in) provides the basic outline of an explanatory context for the discussions of Elliott and the others by means of a question he asks, and the answer he provides, in a recent Saturday quiz. I wish to explore the context of Mitchell's question and answer in some detail in respect of the relationship between the economy and social unrest.

Mitchell asks whether the claim that in the UK, unemployed youthful rioters on income benefit are living off the hard work of taxpayers is a valid one, and answers in the affirmative.

Importantly, Mitchell refers to Abba Lerner's The Economics of Employment (1951), where Lerner employs the metaphor of the economic steering wheel as part of his argument against laissez-faire conceptions of the economic system. The metaphor pictures the economy as an automobile going down the road: with a laissez-faire attitude to driving, the driver takes his hands off the wheel; where more control is needed, as in a contemporary complex economy, the driver needs to keep his hands on the wheel.

In his preface, Lerner emphasizes that, contrary to the expectations of his friends and colleagues, his book does not focus on unemployment or on full employment per se. As he explains,

“I was expected to write a book devoted to attacking the evil of unemployment or to write a book indicating how one could achieve the desirable state of full employment. I think I have done both of these things, but neither can be done properly if primary attention is directed at what we want to avoid or at what we want to achieve. Primary attention must be directed at understanding and explaining the way things work [my emphasis]. Understanding comes first. Only when we understand the nature of the machinery that determines any level of employment can we hope to be able to avoid what we do not like and achieve what we do like.” (1951: vii)

This is precisely Mitchell's epistemic position; it is also mine. And it is explanatorily the most effective for understanding what has been happening these past few years. I am setting out Mitchell's contention in some detail, both because his argument poses such a good counterpoint to the other articles I have listed and because it presupposes a view of economic processes that may not be widely understood.

The most important point Mitchell makes here is that non-workers on benefit obtain command over real goods and services, and that because these are finite at any given time, fewer resources remain for those who work to command for themselves. The arguments against providing such benefits to the poor rely on a confusion between the nominal resources recorded in the government's accounting system and real goods and services. This is related to another essential point: where the funding for those on benefit comes from. The answer is: nowhere. In other words, workers do not lose out as a consequence of non-workers receiving this kind of funding.

How does this work? The funding is created ab initio by the Treasury and placed in the accounts of those receiving such benefits. This enables them to have access to real goods and services, notably food, clothes, and housing. It does not mean that they can thereby afford luxuries such as yachts. Since this funding is created ab initio by the Treasury, it does not come out of worker taxes. So, in a real sense, workers are not paying for the benefits provided to non-workers. The conservative position can be summed up in this way: what the lazy get, the industrious must forgo. The alternative is: what the industrious create, the less privileged can access, to the advantage of all. As Lerner and Mitchell would be keen to point out, the conservative view is not how the system actually works:
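To make the accounting mechanics concrete, here is a minimal Python sketch of the point being made. The account names and amounts are invented for illustration, and the model is deliberately stylized rather than a description of actual Treasury operations:

```python
# A stylized illustration (invented names and amounts): the Treasury
# credits a benefit recipient's account ab initio; no worker's account
# is debited in the process.

accounts = {"worker": 1000, "non_worker": 0}

def pay_benefit(accounts, recipient, amount):
    """The funding is created ab initio: the recipient's account is
    credited, and no other private account is touched."""
    accounts[recipient] += amount

pay_benefit(accounts, "non_worker", 100)

assert accounts["worker"] == 1000       # the worker's balance is unchanged
assert accounts["non_worker"] == 100    # the non-worker now commands funds
print(accounts)
```

Whether the recipient's new claim on real goods and services bites on anyone else is then a question about real resource constraints, not about the bookkeeping, which brings us to the next point.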

Except at full employment, government funding for non-workers has no economic effect on workers, however psychologically affected workers might be by what appears to be a disproportionate disparity between the two groups (a perception that stems from accepting some type of neoclassical orthodoxy). To contend that such funding of non-workers does have an economically deleterious effect on workers and their working environment is implicitly to assume that the activity is a zero-sum game, that is, that what one group gets the other must give up.

Were a societal system able to provide work for everyone who wants it but operating under a serious resource constraint, such as the limits of a finite resource, the provision of benefits to certain groups of non-workers might cause social problems. But we are not living in such a generally resource-constrained system, so this putative causal link has no general application. In general, any resource constraints that exist are politically created, and thus not a consequence of economic principles.

Another important issue here is the function of taxes. Taxes are not used by governments to fund programs; they are used to constrain spending. So a large tax on the rich will mean that fewer hundred-million-dollar yachts are purchased, and perhaps that a lesser degree of malicious speculation takes place as well. This can only be a good thing. Since taxes are not used to fund spending, they are not used to finance benefit payments. Hence, workers are not paying through their taxes for the benefits accorded to non-workers. It would be equally correct to say that they are not paying for them at all. All the system is doing is decreasing the economic inequality between the rich and the poor. Only in a societal system with seriously constrained resources would benefiting the poor effectively disadvantage the better off.

A question arises. If the rich are not effectively disadvantaged by helping the poor, why are they so opposed to it? This requires a deeper response than I can provide here. But let me say this. The issues Mitchell raises provide an economic context for assessing the social and psychological impacts of inequality and austerity. They allow a differentiation between what is and is not relevant in the assessment of putative social policies designed to deal with economic crises. One conclusion that could be drawn is that the gross inequities enjoyed by the rich are so socio-culturally divisive that a more equitable redistribution of economic resources would be best for society as a whole, and that would include the rich.

Does this invalidate the positions of the other authors I have listed above? Not at all. It complements them with a deeper interpretive and explanatory political-economic framework. In the case of the social unrest with which we began, the perspective set out by MMT enriches the explanatory scenarios developed by the other authors. In particular, MMT principles imply that greater equity in a less divisive socio-cultural context should lead to a decrease in the probability of social unrest of the sort we have recently witnessed. After all, the distinction between haves and have-nots becomes more a question of having a little less vs. having a little more, where the socio-economic distance between the various groups can be significantly diminished. History shows that such equitable distributions are highly unlikely to take place or, if they do, are inherently unstable, unless the government puts its hands on the economic steering wheel and exerts some degree of control over the dynamic system that is the political economy.

Logic of conditionals re Bill Mitchell’s quiz May 4, 2011

Posted by larry (Hobbes) in economics, Goedel, interpretation, Logic, material conditional.

In the most recent quiz by Bill Mitchell (http://bilbo.economicoutlook.net/blog/?p=14313&cpage=1#comment-17186), there was a brief discussion that did not clarify the problem under discussion. This problem was the nature of the logic of conditionals, in particular the logic of the material conditional. Part of my response was not quite relevant, and I would like to clear up any confusion I may have created.

The problem was how to interpret a conditional assertion made by Bill Mitchell. The commenters who dealt with this problem were Tom Hickey and MamMoTH. Mitchell posed the following statement and asked whether it was true or false. In claiming Question 3 was false, Mitchell went on to show why this was the case without explicating the logic of the situation, no doubt because he thought it obvious.

The question was:

If the stock of aggregate demand growth outstrips the capacity of the productive sector to respond by producing extra real goods and services then inflation is inevitable.

He then claimed that this assertion was false. I am not here concerned with whether the assertion is false but with the logic of the situation. MamMoTH said:

According to the rules of logic, the correct answer to question 3 is True because False implies whatever is True.  … @Tom, not sure what you mean, but a quick check on entailment with wikipedia to brush up some concepts shows that if S1 is inconsistent then S1 entails whatever.

Tom Hickey followed this with:

This is true of material implication but not formal implication (entailment). Most ordinary language arguments based on conditionals presume entailment.

While Hickey is right that a good many natural language arguments presuppose that the conditional being used is some kind of entailment, what MamMoTH says is not entirely right in the context of the material conditional. He appears to be conflating a consequence of the definition of the material conditional with something else, perhaps Goedel's incompleteness result.

In standard extensional (truth-functional) logic with the material conditional, the conditional is defined in such a way that ‘A implies B’ is true whenever A is false or B is true. All standard extensional systems are like this. However, in a non-standard logical system, particularly one with relevance conditions attached, this property is undesirable. Unfortunately, the term ‘entailment’ is loosely used by many, but the locus classicus for a logical system of entailment is Anderson and Belnap’s Entailment: The Logic of Relevance and Necessity. There are also logical systems in which contradictions, under certain conditions, are acceptable, but these are non-standard, too.
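For readers who want the definition spelled out, here is a short Python sketch (mine, not from the quiz discussion) that simply enumerates the truth table of the material conditional:

```python
# Truth table for the material conditional: 'A implies B' is false
# only when A is true and B is false; it is true in every other case.

def material_implies(a: bool, b: bool) -> bool:
    return (not a) or b

for a in (True, False):
    for b in (True, False):
        print(f"A={a!s:5} B={b!s:5} A->B: {material_implies(a, b)}")

# A=True  B=True  A->B: True
# A=True  B=False A->B: False
# A=False B=True  A->B: True    (a falsehood 'implies' a truth)
# A=False B=False A->B: True    (a falsehood 'implies' a falsehood)
```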

It seems to me that, in stating that 'if S1 is inconsistent, then S1 implies whatever', MamMoTH may have been implicitly referring either to Goedel's incompleteness theorem or to a consequence of extensional logic with the material conditional, to wit, that if p & not-p then q; that is, that anything follows from a contradiction. I am assuming that by 'if S1 is inconsistent' is actually meant 'if S1 is self-inconsistent', for if S1 were self-inconsistent, then S1 might be of the form p & not-p, a logical contradiction. If this were so, then S1 implying whatever would follow from its self-inconsistency. Simple inconsistency is not enough to obtain this result. Consider S1 = A or (B & not-B). Here a portion of S1 contradicts itself, but S1 itself is not logically contradictory, in the sense that not every assignment of values to the variables of S1 renders S1 false. Since B & not-B is always false, the value of S1 depends solely on A, which is not itself of the form p & not-p. Therefore, ignoring B, if A is true, S1 is true, while if A is false, S1 is false. Is this happenstance sufficient to render S1 inconsistent and thereby to imply an arbitrary proposition q? If so, this is not the standard meaning of the notion of inconsistency.
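The point about S1 can be checked mechanically. A brief Python sketch (again mine) enumerates all assignments and confirms that S1 = A or (B & not-B) is not a contradiction; indeed its truth value tracks A exactly:

```python
from itertools import product

def s1(a: bool, b: bool) -> bool:
    return a or (b and not b)   # S1 = A or (B & not-B)

assignments = list(product((True, False), repeat=2))

# A contradiction is false under *every* assignment; S1 is not.
print(all(not s1(a, b) for a, b in assignments))   # False: S1 is satisfiable

# The contradictory disjunct contributes nothing: S1 is equivalent to A.
print(all(s1(a, b) == a for a, b in assignments))  # True
```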

Let us go on to the second matter. It is not true that anything follows from a falsehood, while it is true that anything follows from a logical contradiction, that is, a proposition that is always false. This situation can be confused with the definition of the material conditional. The material conditional is defined in such a way that 'A (materially) implies B' is false if and only if (iff) A is true and B is false. In setting up any deductive logical system, it is essential that no true statement ever lead to a falsehood. However, in the case of the material conditional, a falsehood leading to a falsehood is considered to be a true 'material' implication, though not generally in a logic of relevance or entailment. That is, 'F implies F' is true when 'implies' is material implication. There are good extensional mathematical reasons for this. And a justification can be given for why falsehoods should be allowed to follow from falsehoods.

A problem arises because, for material implication, 'F implies T' is also true, which appears to be unacceptable. Nevertheless, a justification can be given. One can argue that, in an extensional context like that of the material conditional, it is impossible, when arguing from a falsehood, to differentiate between truths and falsehoods. Hence, it should occasion no surprise that both truths and falsehoods can arise equally from falsehoods.

One can get away with this seemingly counterintuitive result because no connection is assumed to exist between A and B in 'A (materially) implies B'. There are natural language examples in which just such a situation exists: if Hitler is alive, then I'm a monkey's uncle. While there is no connection between the antecedent and the consequent, and both are generally viewed as false, the conditional itself is viewed as being true. Other examples, where the antecedent is false and the consequent true, are what are known as counterfactuals, a subset of subjunctive conditionals. An example is 'if anyone jumped out of a 20-story window and fell to the ground, he would be squashed like a melon'. This is considered to be a true conditional even though the antecedent is false. However, this kind of conditional is non-extensional and hence can not be considered a material implication. In such non-extensional contexts, strictly speaking, 'implies' in the sense of the material conditional should be replaced by 'entails', particularly since entailment is distinct from (material) implication in just those cases where one needs to make the distinction.

Assuming, then, that the material conditional is in play, if the conditional in Question 3, A implies B, is false, this can only be because A is true and B is false; and this does not thereby imply an arbitrary proposition q. It may be that Bill does not have the material conditional in mind. As for the question itself, I interpret it in such a way that there is an implied link between A and B, which indicates that Bill does not intend 'implies' to stand for the material conditional but rather for the conditional of some sort of relevance logic, say R. Unlike the standard extensional predicate calculus, there is more than one such system R. Until one has been specified, one must rely on the logical principles found in natural language and, hence, on some kind of informal reasoning with some math tacked on as an adjunct to the argument. There is nothing wrong with doing this, except that when it takes place in mathematics, what is not spelled out is generally clearly understood. This is not yet the case for logics of relevance and entailment.

Nevertheless, any standard mathematization of economics will be based on the extensional predicate calculus, which relies on the material conditional. Keynes was aware of this problem, and it was one of the reasons he felt that the mathematical tools of his time were inappropriate for formalizing an economic theory that included the 'psychological states' of the individual (such as animal spirits). This is still pretty much the case today, except that we now have game theory, which is an advance on the mathematical tools available to Keynes, although the minimax theorem that von Neumann had proved a few years before the publication of The General Theory would not, on its own, have been of much use to Keynes.

As for inconsistency and logic, there seems to be an implicit reference to Goedel's incompleteness theorem. The theorem says that any logical system more complicated than arithmetic with addition alone will, if it is consistent, be incomplete in the sense that it can say more than it can prove using its own resources. Goedel's theorem is based on the system set out in Principia Mathematica and extensions of this system, which cover pretty much all of standard mathematics. Goedel achieved this result by a quite tricky use of self-reference that is not self-contradictory. The upshot is that any sufficiently complicated formalized theory, if consistent, will be incomplete: there will be some truths it can formulate but can not prove. This result still appears counterintuitive more than half a century later, even though Goedel was possibly the greatest logician of the 20th century.

Hypotheses and Corroboration and Data Variation I May 4, 2011

Posted by larry (Hobbes) in data, Duesenberry, economics, interpretation, Lakatos, Logic, nature of science, Statistics, Suppes.

Duesenberry has an excellent discussion about the relationship between a theory or hypothesis and a test of that theory or hypothesis.  He correctly notes that one can never prove a hypothesis or theory but fails to give a reason why this should be so.  He also does not mention the Duhem-Quine problem in the testing of hypotheses.

To simplify the discussion, I will consider the testing of a single hypothesis, but what I say applies to theories as well. A given hypothesis, H, can not be proved, or verified, for logical reasons. Most scientific hypotheses are in the form of universal generalizations: for instance, for all x, Ax implies Bx. Now, in order to prove or verify that all As are Bs, one would have to be able to inspect, in principle, every thing that is an A and/or a B, past, present, and future. This is impossible. Hence, general laws that are in the form of universal generalizations can never be verified. But they can be falsified, again for logical reasons. All you need to falsify the hypothesis that for all x, Ax implies Bx, that is, that all As are Bs, is to find an A that is not a B. A simple example is the generalization, once widely believed, that all swans are white, that is, that for all x, if x is a swan, then x is white. To falsify this, you need to find one black swan, that is, one thing x that is a swan and is not white. Not only is this possible in principle, such swans were discovered in Australia. The major difference between a hypothesis H and a theory T is that a theory can be seen as a conjunction of related hypotheses; a hypothesis H can therefore be viewed as a minimal theory.
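The logic of the swan example can be put in a few lines of Python (toy data, purely illustrative): a single counterexample falsifies the universal generalization, while the absence of counterexamples in a sample verifies nothing.

```python
# 'All swans are white' is refuted by one counterexample; surviving the
# check does not verify the hypothesis, since unobserved swans remain.

swans = [
    {"id": 1, "colour": "white"},
    {"id": 2, "colour": "white"},
    {"id": 3, "colour": "black"},   # an Australian swan
]

counterexamples = [s for s in swans if s["colour"] != "white"]

if counterexamples:
    print("Falsified by:", counterexamples)
else:
    print("Corroborated for this sample, but not proved.")
```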

There is, therefore, an asymmetry between verification and falsification: universal scientific generalizations (scientific laws) can not be verified though they can be falsified. On the other hand, existential generalizations, of the form there is an x such that Ax, e.g., there is at least one swan, can be verified but not falsified. It is possible to show that there is a swan by finding one, but impossible, for logical reasons similar to those above, to prove that there are none on the basis that one has yet to be found. The situation is even more complicated than I have described, involving other factors, but that is for another time. (I recommend consulting the works of Imre Lakatos, such as his methodology of scientific research programmes, whose conceptual scheme is non-trivial and more than just interesting, as well as Patrick Suppes' article on models of data (http://suppes-corpus.stanford.edu/articles/mpm/41.pdf).) The important lesson to take away is that for a hypothesis to be scientific, it must be falsifiable in principle, although adhering to this requirement involves considerable complexity and is not without its difficulties.

The Duhem-Quine problem is more complicated; it underwrites what is known as the method of saving the hypothesis. According to the Duhem-Quine principle, it is always possible to save a hypothesis from falsification or refutation. This is due to the logical nature of the testing process. To show this, a little technical detail is required. When a hypothesis is tested, the conditions in which the test is conducted, such as the experimental or field conditions, the assumptions about the influence of the observer, and the like, are also under test. The experimental apparatus may be faulty, or the investigator may be unconsciously influencing the experiment or observation. This list can be extended ad infinitum, but for all practical purposes it is inevitably finite and small. The logic of the situation is this. Suppose you have a hypothesis H and from it you can deduce a proposition concerning an event E. In the testing scenario just described, you assume H to be true and look to see whether E is true or not. If you find E, while you have not proved or verified that H is the case, you have, as Popper would have said, corroborated H. That is, you have made the truth of H appear more likely.

Now, let us suppose that, on the assumption that H is the case, you fail to observe E. One can infer from this that H, or something else being assumed, is not the case. The assumptions consist of H & C & B & Q, where C denotes the experimental or observational conditions, B the influence or bias of the observer, and Q any additional factors that might be influencing the outcome of the test. So, if E is not observed, instead of falsifying H, you can save the hypothesis by rejecting C or B or Q. You can then claim that E really does follow from H; it is just that this test failed to substantiate this particular outcome because it was flawed.
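The logical point can be checked by brute force. The Python sketch below (my illustration) enumerates every truth-value assignment consistent with the two test premises, (H & C & B & Q) implies E and not-E, and shows that H itself survives in some of them; only the conjunction is refuted:

```python
from itertools import product

# Premises: (H & C & B & Q) -> E, and E was not observed (not-E).
# Which assignments to H, C, B, Q are consistent with both?

surviving = []
for h, c, b, q, e in product((True, False), repeat=5):
    prediction_holds = (not (h and c and b and q)) or e   # (H&C&B&Q) -> E
    if prediction_holds and not e:
        surviving.append((h, c, b, q))

# H can still be true: the hypothesis is saved by blaming C, B, or Q.
print(any(h for (h, c, b, q) in surviving))                          # True

# What is refuted is only the conjunction H & C & B & Q.
print(all(not (h and c and b and q) for (h, c, b, q) in surviving))  # True
```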

As Duesenberry discusses, there is another factor involved in the testing of a hypothesis: even should you succeed in corroborating H, all you have shown is that, for the data at hand and under the conditions of the test, H seems to explain the data better than a set of alternatives, not that it is true simpliciter. This state of affairs can, however, change, and another hypothesis can take the place of H as the favored one. This process of replacement can be highly contentious.

As Duesenberry himself notes, the data available to economists are often not very good. Not only that, but the variation inherent in such data remains unanalyzed more often than not. Economists often present data in the absence of error coefficients and the like. They also fail to conduct statistical hypothesis tests even when it is not obvious, from 'eye-balling' the data, that H0 explains the data better than some alternative from a set of alternative hypotheses H1, …, Hn. They appear to assume that the data 'speak for themselves', which they do not. Data, to make sense, must be interpreted, and that means placing the data in an interpretive context, that is, a theoretical context. Otherwise, there is no difference between a set of data and a list of numbers or names in a phone book. In saying this, I am not arguing that statistical hypothesis testing is essential, only that it is not carried out even when it would appear to be helpful. Irrespective of this, data should never be presented in the absence of error coefficients, unless the data differences obviously swamp any inevitable errors the data set may contain. But how often will that be the case?
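As a concrete illustration of the sort of routine check I have in mind, here is a short Python sketch using invented data: report a standard error alongside each mean, and run a conventional test of H0 against an alternative rather than eye-balling the series. (The numbers, and the choice of a t-test, are mine, purely for illustration.)

```python
import numpy as np
from scipy import stats

# Invented data: two series that may look similar to the eye.
rng = np.random.default_rng(0)
series_a = rng.normal(loc=2.0, scale=1.0, size=30)
series_b = rng.normal(loc=2.5, scale=1.0, size=30)

# Never present a mean without some measure of its error.
for name, s in (("A", series_a), ("B", series_b)):
    se = s.std(ddof=1) / np.sqrt(len(s))   # standard error of the mean
    print(f"series {name}: mean = {s.mean():.2f} +/- {se:.2f}")

# A formal test of H0 (equal means) against the alternative.
t_stat, p_value = stats.ttest_ind(series_a, series_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```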

I must mention that this has not always been the case, in the present or in the past. Duesenberry (1949) himself cites references with statistical content, notably Keynes' 'A Statistical Testing of Business Cycle Theories' (1939), Trygve Haavelmo's 'The Probability Approach in Econometrics' (1944), and G. Udny Yule's 'Why Do We Sometimes Get Nonsense Correlations' (1926), along with eminent social psychologists such as Abram Kardiner and Leon Festinger, the latter's A Theory of Cognitive Dissonance having influenced Akerlof, the psychoanalyst Karen Horney (The Neurotic Personality of Our Time, 1937), and the social scientist Thorstein Veblen (The Theory of the Leisure Class, 1934). There is no reference to Talcott Parsons, who was probably one of the most famous Harvard sociologists (in the Department of Social Relations) with an economics background at the time of the publication of Duesenberry's Income, Saving and the Theory of Consumer Behavior (1949). It may be that, although both were at Harvard at this time, Duesenberry felt that Parsons' approach, which was rather idiosyncratic, was tangential to his own. I will come back to this issue regarding the different, and possibly not easily reconcilable, approaches of sociologists, anthropologists, and economists to the fields of economics and political economy.
