## Randall Wray on Necessary & Sufficient Conditions in Economics
*July 5, 2014*

*Posted by larry in economics, Logic, money, Philosophy.*


In a recent post about debt-free money, Randy Wray makes some remarks about the relationship between the role of taxation and the movement of the monetary circuit. It is my purpose here to explore the character of this relationship from a logical perspective. Wray contends that taxes are a sufficient but not necessary condition for driving the monetary circuit. As his comments were not entirely clear to me, I asked him what he meant. Among other comments he made, which we will get to, he made these remarks.

If I said “taxes are a necessary and sufficient condition” then that means you must have taxes and nothing else will do. I don’t say that.

If I said taxes are a necessary condition, then that means you must have taxes, but maybe that alone is not enough. I don’t say that.

If I say taxes are a sufficient condition then that means if you have taxes you will drive a currency. However something else might drive it, too.

I think these statements are relatively clear even if not entirely logically so. To show what I mean, let me set out how necessary and sufficient conditions are specified logically. First, we have a variable, x, that ranges over taxes. We are then in a position to specify the exact character of the necessary and sufficient conditions relevant to Wray’s concerns.

Sufficient condition (for tax to drive monetary circuit): if x is a tax, then x drives the monetary circuit.

Necessary condition (for tax to drive monetary circuit): if x drives the monetary circuit, then x is a tax.

Necessary & Sufficient condition (for same): x is a tax iff x drives the monetary circuit.
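These three schemas can be checked mechanically. The following sketch is a toy model whose domain and predicates are entirely my own invention (not Wray’s data); it merely verifies that in such a world taxes come out sufficient but not necessary for driving the circuit:

```python
# Toy domain of candidate circuit-drivers; the names and the two
# predicates below are illustrative assumptions only.
domain = ["income_tax", "property_tax", "court_fine", "church_tithe", "haircut"]

def is_tax(x):
    return x.endswith("_tax")

def drives_circuit(x):
    # Assume taxes, fines, and tithes all drive the circuit.
    return x in {"income_tax", "property_tax", "court_fine", "church_tithe"}

# Sufficient condition: for all x, tax(x) -> drives(x)
sufficient = all(drives_circuit(x) for x in domain if is_tax(x))

# Necessary condition: for all x, drives(x) -> tax(x)
necessary = all(is_tax(x) for x in domain if drives_circuit(x))

print(sufficient)  # True: every tax drives the circuit
print(necessary)   # False: a fine drives the circuit but is not a tax
```

In this toy world the sufficient-condition schema holds while the necessary-condition schema fails, which is exactly the combination Wray asserts.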

It is probably apparent that my formulations do not quite conform to Wray’s formulations. We can ignore the stipulation about tax being a necessary and sufficient condition for driving the monetary circuit, as it is obviously false. Therefore, we need concentrate only on the other two. First, we must note that Wray is engaged in a complexly compacted argument. Let us take sufficient condition first.

The first sentence is nothing more than a statement of what a sufficient condition is. When he mentions that other factors may drive the circuit, he is merely pointing out that for something to be a sufficient condition for something else, this does not rule out the possibility of other sufficient conditions being operative. He is right about this; it is a logically elementary feature of the context.

When we come to Wray’s discussion of necessary conditions, we encounter a little prolixity. When Wray says that were taxes a necessary condition, this would mean that one must have taxes but that this might not be enough, it is not entirely clear, logically, what he is trying to say. For tax to be a necessary condition, this means (via an application of modus tollens) that if you don’t have taxes, the circuit will stop unless some other factor can drive it (as in the situation with the sufficient condition). While Wray may be clear about what he means, the logical structure of what he is trying to get across certainly isn’t.

However, in denying that taxes are a necessary condition for the movement of the monetary circuit, he seems to be denying the claim that if x drives the monetary circuit, then x is a tax, from which follows: if x is not a tax, then x does not drive the monetary circuit. This formulation does not by itself allow one to contend, as Wray does, that if taxes were a necessary condition of monetary motion, one must have taxes; additional assumptions are needed in order to derive that conclusion. Hence, I can do no other than conclude that Wray’s comments in this regard obfuscate the point he is trying to make, which I think is a straightforward one: the stipulation that taxes are a necessary condition for driving the monetary circuit is false, because from it one could conclude that only taxes can do this driving, and that is false. There are other factors that can drive the monetary circuit besides taxes, such as fines, tithes, and the like.

Taxes as a sufficient condition allow for these additional factors to be drivers of the monetary circuit along with taxes. Hence, Wray’s contention that taxes are only sufficient but not necessary.

Upshot: while the logical structure of Wray’s argument isn’t entirely clear, he is consistent. And the role he assigns taxes vis-a-vis the monetary circuit seems to be more than reasonable.

## Necessary & Sufficient conditions: A Medical Example
*March 22, 2014*

*Posted by larry in Logic, Medicine, Science.*


A sufficient condition A for B is one where A being the case is sufficient for bringing about B.

A necessary condition B for A is one where if B is not the case, then A won’t be either.

Logically it looks like this. A is sufficient for B: if A, then B.

B is necessary for A: if not-B, then not-A.

Ex. of a necessary condition: oxygen (B) is necessary for being able to breathe (A). Therefore, if not-B, then not-A.

It is easier to come up with necessary conditions than it is sufficient conditions. For instance, what is sufficient for being able to breathe?

A set of necessary and sufficient conditions for A is often considered to be equivalent to, or definitional of, A.

A slightly more realistic and complicated way of expressing this set of relationships is this: A is sufficient for D and B is necessary for D. This translates to (if A, then D) and (if D, then B). The contrapositive of each yields (if not-D, then not-A) and (if not-B, then not-D). It is relatively clear, I think, that A and B each have a distinct relationship to D (I am ignoring the issue of transitivity illustrated in the example). The potential complexity of this relationship is borne out in the following medical example from research into the dementias.

Here is a quote from a medical investigation of causes of Alzheimer’s and other dementias (from **Daily Kos**).

“Researchers have found that a protein active during fetal brain development, called REST, switches back on later in life to protect aging neurons from various stresses, including the toxic effects of abnormal proteins. But in those with Alzheimer’s and mild cognitive impairment, the protein — RE1-Silencing Transcription factor — is absent from key brain regions.”

“Our work raises the possibility that the abnormal protein aggregates associated with Alzheimer’s and other neurodegenerative diseases may not be sufficient to cause dementia; you may also need a failure of the brain’s stress response system,” said Bruce Yankner, Harvard Medical School professor of genetics and leader of the study, in a release.”

While the situation is more complicated than the simple example I gave initially, the logic is the same.

From the quote, we have: abnormal protein aggregates, A; failure of the brain’s stress-response system, B; presence of REST (RE1), R; dementia, D.

The second paragraph of the quote may be saying that A & B is sufficient for D.

But from the first paragraph, we also have the absence of R (REST) as a necessary condition for D. I.e., if D, then not-R.

So, A & B is sufficient for D, hence (if A & B, then D). And not-R is necessary for D: (if D, then not-R), or equivalently, (if R, then not-D). I.e., R is sufficient for not-D.
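The contrapositive equivalences relied on above can be verified by brute force over every truth-value assignment. A minimal sketch, with the letters following the text:

```python
from itertools import product

# A: protein aggregates; B: stress-response failure;
# R: REST present; D: dementia.

def implies(p, q):
    # Truth-functional (material) conditional.
    return (not p) or q

for A, B, R, D in product([False, True], repeat=4):
    # "not-R is necessary for D" (if D, then not-R) and its
    # contrapositive (if R, then not-D) agree on every row.
    assert implies(D, not R) == implies(R, not D)
    # Likewise (if A & B, then D) and (if not-D, then not (A & B)).
    assert implies(A and B, D) == implies(not D, not (A and B))

print("contrapositives verified")
```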

Similarly as in the simple example: A, B, and R are related in a complex way to D, a relationship that is not entirely spelled out in the quote.

We are, therefore, left with an important question: what is the status of A & B with respect to R? Is R a component of either A or B? The quote doesn’t link all these factors together. While the gap may be obvious from the quote, the logical situation underlying it may not be. Hopefully, it is now clearer what additional relationships need to be explored in order to reach a better understanding of the dementias, particularly Alzheimer’s, and thereby better control of their onset and progression, if not their complete defeat.

## Bayes’ theorem: a comment on a comment
*March 10, 2014*

*Posted by larry in Bayes' theorem, Logic, Philosophy, Statistics.*


Assume the standard axioms of set theory, say, the Zermelo-Fraenkel axioms.

Then provide a definition of conditional probability:

1. *P*(*A*|*B*) = *P*(*A* ∩ *B*)/*P*(*B*), where *P*(*B*) > 0, which yields the identity

1a. *P*(*A* ∩ *B*) = *P*(*A*|*B*)*P*(*B*), via simple algebraic cross multiplication.

Because set intersection is commutative, you can have this:

1b. *P*(*A* ∩ *B*) = *P*(*B* ∩ *A*) = *P*(*B*|*A*)*P*(*A*), whence *P*(*A*|*B*) = *P*(*B*|*A*)*P*(*A*)/*P*(*B*), which is Bayes’ theorem.
What we have here is a complex, contextual definition relating a term, *P*, from probability theory with a newly introduced stroke operator, |, read as “given”, so that the locution becomes, for instance, the probability, *P*, of *A* given *B*. Effectively, the definition is a contextual definition of the stroke operator, |, “given”.

Although set intersection (equivalent in this context to conjunction) is commutative, conditional probability isn’t, which is due to the asymmetric character of the stroke operator, |. This means that, in general, *P*(*A*|*B*) ≠ *P*(*B*|*A*). If we consider the example of Data vs. Hypothesis, we can see that in general, for *A* = *Hypothesis* and *B* = *Data*, *P*(*Hypothesis*|*Data*) ≠ *P*(*Data*|*Hypothesis*).

Now, from the definition of “conditional probability” and the standard axioms of set theory which have already been implicitly used, we obtained Bayes’ theorem trivially, mathematically speaking, via a couple of simple substitutions.
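The derivation can also be checked numerically. Here is a sketch using a small contingency table whose counts are invented purely for illustration:

```python
# Hypothetical joint counts for events A and B (not from any real data).
n_AB, n_AnotB, n_notAB, n_notAnotB = 30, 10, 20, 40
n = n_AB + n_AnotB + n_notAB + n_notAnotB  # 100

P_A = (n_AB + n_AnotB) / n        # 0.4
P_B = (n_AB + n_notAB) / n        # 0.5
P_A_and_B = n_AB / n              # 0.3

P_A_given_B = P_A_and_B / P_B     # the definition of conditional probability
P_B_given_A = P_A_and_B / P_A

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
assert abs(P_A_given_B - P_B_given_A * P_A / P_B) < 1e-12

# And conditional probability is not commutative:
print(P_A_given_B, P_B_given_A)  # 0.6 0.75
```

Nothing beyond the definition and commutativity of intersection is used: the theorem falls out of the arithmetic, just as the substitutions above suggest.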

Or the Bayes-Laplace theorem, since Laplace discovered the rule independently. However, according to Stigler’s rule of eponymy in mathematics, theorems are invariably attributed to the wrong person (Stigler, “Who Discovered Bayes’ Theorem?”. In Stigler, *Statistics on the Table*, 1999).

Now, since we have seen that Bayes’ theorem follows from the axioms of set theory plus the definition of “conditional probability”, the following comments from a recent tutorial text on Bayes’ theorem can only be described as odd. The quote is from James V. Stone’s *Bayes’ Rule: A Tutorial Introduction to Bayesian Analysis* (3^{rd} printing, Jan. 2014).

If we had to establish the rules for calculating with probabilities, we would insist that the result of such calculations must tally with our everyday experience of the physical world, just as surely as we would insist that 1+1 = 2. Indeed, if we insist that probabilities must be combined with each other in accordance with certain common sense principles then Cox (1946) showed that this leads to a unique set of rules, a set which includes Bayes’ rule, which also appears as part of Kolmogorov’s (1933) (arguably, more rigorous) theory of probability (Stone: pp. 2-3).

Bayes’ theorem does not form part of Kolmogorov’s set of axioms. Strictly speaking, Bayes’ rule must be viewed as a logical consequence of the axioms of set theory, the Kolmogorov axioms of probability, and the definition of “conditional probability”.

Whether Kolmogorov’s axioms for probability tally with our experience of the real world is another question. The axioms are sometimes used as indications of non-rational thought processes in certain psychological experiments, such as the Linda experiment by Tversky and Kahneman. (For an alternative interpretation of this experiment that brings into question the assumption that people either do or should reason according to a simple application of the Kolmogorov axioms, cf. Luc Bovens & Stephan Hartmann, *Bayesian Epistemology*, 2003: 85-88).

### A matter of interpretation

In the discussion above, the particular set theory and the Kolmogorov axioms mentioned and used were interpreted via the first-order extensional predicate calculus. This means that both theories can be viewed as not involving intensional contexts such as beliefs. The probability axioms in particular were understood by Kolmogorov and others using them as relating to objective frequencies applicable to the real world, not to beliefs we might have about the world. For instance, an unbiased coin and an unbiased die are considered, in the ideal case admittedly, to have probabilities of .5 for a given side of the coin and 1/6 (about .167) for a given side of a six-sided die, on each flip or throw of the object in question. In these two cases, only behavior observed over a long run of trials can produce data that will show whether our assumption that the coin and the die are unbiased is in fact true.

Why does this matter? Simply because Bayes’ theorem has been interpreted in two distinct ways: as a descriptively objective statement about the character of the world and as a subjective statement about a user’s beliefs about the state of the world. The derivation above proceeds from two theories that are considered to be non-subjective in character. One can then reasonably ask: where does the subjective interpretation of Bayes’ theorem come from? Two answers suggest themselves, though these are not the only ones. One is that Bayes’ theorem is arrived at via a different derivation from the one I considered, relying, say, on a different notion of probability than Kolmogorov’s. The other is that Bayesian subjectivity is introduced by means of the stroke (or “given”) operator, |.

Personally, I see nothing subjective about statements concerning the probability of obtaining a head or a tail on the flip of a coin being .5, or that of obtaining one particular side of a six-sided die being 1/6. These probabilities are about the objects themselves, not about our beliefs concerning them. Of course, this leaves open the possibility of alternative interpretations of probability in other contexts, say the probability of guilt or non-guilt in a jury trial. Whether the notions of probability involving coins or dice are the same as those involving situations such as jury trials is a matter for further debate.

## Larry Gonick’s cartoon guide to Calculus
*February 6, 2012*

*Posted by larry in Logic.*


If you go to the link below and then click on the book image, you can look inside the book. Then go to the discussion of Zeno’s motion paradox. Gonick represents Zeno’s paradox well, but the solution isn’t quite what he suggests it is. Newton and Leibniz got around Zeno’s motion paradox, but Gonick doesn’t quite say what the error in Zeno’s argument was.

The solution to Zeno’s paradox is to realize that there is no such thing as a next instant. Between any two given instants, there is a nondenumerable (uncountable) number of instants. Think of the real number system as a time line. Between any two numbers, or instants of time, however close together you pick them, there are an infinite number of numbers, or instants of time, between them. Hence, no next instant or number.
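The point about there being no next instant can be illustrated with exact rational arithmetic: between any two distinct numbers a < b, the midpoint (a + b)/2 lies strictly between them, and the construction can be repeated forever. A small sketch:

```python
from fractions import Fraction

# Between any two distinct rationals (and a fortiori reals) there is
# always another: the midpoint. Iterating shows there is no "next"
# number after a = 0.
a, b = Fraction(0), Fraction(1)
for _ in range(5):
    b = (a + b) / 2        # a new point strictly between a and b
    assert a < b < Fraction(1)

print(b)  # 1/32, and we could continue indefinitely
```

The loop uses rationals for exactness, but the same midpoint argument applies to the real line: no candidate for the "next instant" survives.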

This is in contrast to the set of integers where there is a next number – this set of numbers is countably infinite. The real line is uncountably infinite, otherwise referred to as the continuum. There are at least two kinds of infinite sets, those that are denumerable (even = odd = rationals in size) and those that are nondenumerable (reals = irrationals in size).

Newton and Leibniz got around Zeno’s paradox by conceiving of infinitesimals in their independent developments of the differential calculus. The notation dy/dx indicates an infinitesimal change of y and x with respect to one another. A differentiable function is necessarily continuous, which means that we are dealing with the real line, that is, the real number system. Hence, on the line of a graph there is no next point, just as there is no next instant of time.

To assume that there is a next instant was Zeno’s error. So, in Zeno’s case, an arrow travels through an uncountably infinite number of instants on its way from a to b, no one instant of which can be claimed to be next to another.

http://www.amazon.co.uk/Cartoon-Guide-Calculus-Guides/dp/0061689092/ref=pd_sim_b_4

I hope that is clear. Whew!

The mathematical theory of infinitesimals wasn’t formally resolved until the latter half of the 20th century by Abraham Robinson.

## Detail specificity depends on context: examples from arithmetic & GDP vs GNP
*September 12, 2011*

*Posted by larry in economics, Logic.*

Tags: degree of detail, economics, philosophy


When it comes to applied contexts in some sciences such as economics, there is sometimes a feeling that a discussion is overly detailed. This is sometimes true. But I would like to use an example from arithmetic to show why this is not always the case. The principle I am working with stipulates that one should be as simple as one can be but no simpler, a principle that dates back at least to Ockham.

When one sets out the fundamental principles of arithmetic, say those of addition and multiplication, one finds statements like these:

x + y = y + x

and

x + 0 = x.

The first says that addition is commutative, that is, that it doesn’t matter in what order you add two numbers together. The second principle says that zero is the identity element for addition – any number added to zero equals itself.

Now, it may be thought that these are obvious and therefore need no explicit representation. But this would be a misunderstanding of the context these principles operate in. For example, multiplication is commutative, that is, x * y = y * x. But division is not commutative, that is, x ÷ y does not equal y ÷ x. Neither is subtraction commutative.

And zero is not an identity element for multiplication, for x * 0 ≠ x. The identity element for multiplication is 1.

The reason such strict distinctions must be made is that it is easy to extrapolate incorrectly from one arithmetic operation to another where it won’t work correctly.
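The laws above, and the failure of their naive extrapolations, can be spot-checked mechanically. A minimal sketch:

```python
import random

# Spot-check the cited laws over random integers: addition and
# multiplication commute; subtraction and division do not.
random.seed(0)
for _ in range(100):
    x, y = random.randint(1, 50), random.randint(1, 50)
    assert x + y == y + x      # commutativity of addition
    assert x * y == y * x      # commutativity of multiplication
    assert x + 0 == x          # 0 is the additive identity
    assert x * 1 == x          # 1 is the multiplicative identity

# A single counterexample suffices for the negative claims:
assert 5 - 3 != 3 - 5
assert 6 / 3 != 3 / 6
print("laws verified")
```

Note the asymmetry in method: the positive laws hold for all values (here merely sampled), while each negative claim needs only one counterexample.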

Something similar could be said about certain distinctions between certain chemical elements, say U235 and U238. While the difference could be said to be small, it is nevertheless a momentous one. You can make a bomb with one but not the other.

The same goes for a theoretical and applied social science* like economics. Take the distinction between GDP, Gross Domestic Product, and GNP, Gross National Product, which are two distinct measures of a nation’s economic productivity. States used to use GNP but changed over time to GDP.

These measures are very similar, but in certain contexts one may be preferable to the other. GNP measures productivity in terms of ownership, regardless of where the production takes place: for instance, it counts income made overseas by the nation’s residents and firms as part of the nation’s income. GDP, on the other hand, measures the productivity that takes place within a nation’s territorial borders, whether produced by a foreign-owned firm or not. The two measures can therefore produce different accountings of total output.

The reason given by the Bureau of Economic Analysis for making the change from GNP to GDP, which took place in the early 1990s, was that it made international comparisons more direct and facilitated comparisons among different kinds of economic activities. Some economists deplore the change from GNP to GDP, contending that the latter is a misleading indicator of actual productivity. Whether it is or not depends on the context, even if the measures do not differ much in value. US GDP in 2009 was estimated by the BEA at $14.119 trillion and its GNP at $14.265 trillion. Whether the value itself or the method by which it was obtained is considered significant/meaningful will be largely determined by the context, including the theoretical framework where the interpretation of these and related measures are embedded.

As William James once wrote: in order for a difference to be a difference, it must make a difference. Sometimes the context is all.

For a detailed discussion of various alternatives from one economic perspective, see **ALTERNATIVES TO GROSS NATIONAL PRODUCT: A Critical Survey** (1998) by Richard England & Jonathan Harris. Whether GDP or GNP is an adequate measure of a nation’s economic well-being is another matter.

—————————

* The social sciences are sciences although they are not identical to physics, nor should physics be pursued as an ideal, as happens in the pseudo-field of econophysics.

## Logic of conditionals re Bill Mitchell’s quiz
*May 4, 2011*

*Posted by larry in economics, Goedel, interpretation, Logic, material conditional.*


In the most recent quiz by Bill Mitchell (http://bilbo.economicoutlook.net/blog/?p=14313&cpage=1#comment-17186), there was a brief discussion that did not clarify the problem under discussion. This problem was the nature of the logic of conditionals, in particular, the logic of the material conditional. Part of my response was not quite relevant and I would like to clear up any confusion I may have created here.

The problem was how to interpret a conditional assertion made by Bill Mitchell. The commenters who dealt with this problem were Tom Hickey and MamMoTH. Mitchell asked whether the following assertion was true or false. In claiming that Question 3 was false, Mitchell went on to show why this was the case without explicating the logic of the situation, no doubt because he thought it was obvious.

The question was:

If the stock of aggregate demand growth outstrips the capacity of the productive sector to respond by producing extra real goods and services then inflation is inevitable.

He then claimed that assertion was false. I am not here concerned with whether this assertion is false but with the logic of the situation. MamMoTH said:

According to the rules of logic, the correct answer to question 3 is

True because False implies whatever is True. … @Tom, not sure what you mean, but a quick check on entailment with wikipedia to brush up some concepts shows that if S1 is inconsistent then S1 entails whatever.

Tom Hickey followed this with:

This is true of material implication but not formal implication (entailment). Most ordinary language arguments based on conditionals presume entailment.

While Hickey is right that a good many natural language arguments presuppose that the conditional being used is that of some kind of entailment, what MamMoTH says is not entirely right in the context of the material conditional. He appears to be confusing a logic of the conditional and a consequence of the definition of material implication with something else, perhaps Goedel’s incompleteness result.

In standard extensional (truth-functional) logic with the material conditional, the conditional is defined in such a way that ‘A implies B’ is true whenever A is false or B is true. All standard extensional systems are like this. However, in a non-standard logical system, particularly one with relevance conditions attached, this property is undesirable. Unfortunately, the term ‘entailment’ is loosely used by many, but the *locus classicus* for a logical system of entailment is Anderson and Belnap’s *Entailment: The Logic of Relevance and Necessity*. There are also logical systems in which contradictions, under certain conditions, are acceptable, but these are non-standard, too.

It seems to me that, in stating that ‘if S1 is inconsistent, then S1 implies whatever’, MamMoTH may have been implicitly referring to Goedel’s incompleteness theorem or to a consequence of extensional logic with the material conditional, to wit, that if p & not-p then q, that is, that anything follows from a contradiction. I am assuming that by ‘if S1 is inconsistent’ what is actually meant is ‘if S1 is self-inconsistent’, for if S1 were self-inconsistent, then S1 might be of the form p & not-p, a logical contradiction. If this were so, then S1 implying whatever would follow from its self-inconsistency. Simple inconsistency is not enough to obtain this result. Consider S1 = A or (B & not-B). Here a portion of S1 contradicts itself, but S1 itself is not logically contradictory in the sense that every assignment of values to the variables of S1 renders S1 false. Since B & not-B is always false, the value of S1 depends solely on A, which is not itself of the form p & not-p. Therefore, ignoring B, if A is true, S1 is true, while if A is false, S1 is false. Is this happenstance sufficient to render S1 inconsistent and thereby imply an arbitrary proposition q? If so, this is not the standard meaning of the notion of inconsistency.
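The claim about S1 can be checked by enumerating all truth-value assignments. A small sketch:

```python
from itertools import product

# S1 = A or (B and not-B): the contradictory disjunct is inert,
# so S1's truth value tracks A alone, and S1 is satisfiable.
for A, B in product([False, True], repeat=2):
    S1 = A or (B and not B)
    assert S1 == A  # the value of S1 depends solely on A

# Satisfiable: some assignment makes S1 true, so S1 is not a contradiction.
assert any(A or (B and not B) for A, B in product([False, True], repeat=2))
print("S1 is satisfiable, not a contradiction")
```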

Let us go on to the second matter. It is not true that anything follows from a falsehood, while it is true that anything follows from a logical contradiction, that is, a proposition that is always false. This situation can be confused with the definition of the material conditional. The material conditional is defined in such a way that ‘A (materially) implies B’ is false if and only if (iff) A is true and B is false. In setting up any deductive logical system, it is essential that no true statement ever lead to a falsehood. However, in the case of the material conditional, a falsehood leading to a falsehood is considered to be a true ‘material’ implication, though not generally in a logic of relevance or entailment. That is, ‘F implies F’ is true when ‘implies’ is material implication. There are good extensional mathematical reasons for this. However, a justification can be made that it should be the case that falsehoods should follow from falsehoods.

A problem arises because for material implication, ‘F implies T’ is also true, which appears to be unacceptable. Nevertheless, a justification can be given. One can argue that, in an extensional context like that of the material conditional, it is impossible to differentiate between truths and falsehoods arguing from a falsehood. Hence, it should occasion no surprise that both truths and falsehoods can arise equally from falsehoods.
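For reference, the full truth table of the material conditional can be generated mechanically; it makes the two false-antecedent rows explicit:

```python
# The material conditional is false only when the antecedent is true
# and the consequent false.
def material(p, q):
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(p, q, material(p, q))

# In particular, both F -> F and F -> T come out true, as discussed.
assert material(False, False) and material(False, True)
assert not material(True, False)
```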

One can get away with this seemingly counterintuitive result because no connection is assumed to exist between A and B in ‘A (materially) implies B’. There are natural language examples in which just such a situation exists: if Hitler is alive, then I’m a monkey’s uncle. While there is no connection between the antecedent and the consequent and both are generally viewed as false, the conditional itself is viewed as true. Other examples where the antecedent is false and the consequent true are what are known as counterfactuals, a subset of subjunctive conditionals. An example is ‘if anyone were to jump out of a 20 story window and fall to the ground, he would be squashed like a melon’. This is considered a true conditional even though the antecedent is false. However, this kind of conditional is non-extensional and hence can not be considered a material implication. In such non-extensional contexts, strictly speaking, the term ‘implies’ in the sense of the material conditional should be replaced by ‘entails’, particularly since entailment is distinct from (material) implication in precisely those cases where the distinction needs to be made.

Assuming then that the material conditional is in play, if the conditional in Question 3, A implies B, is false, this can only be because A is true and B is false. And this does not thereby imply an arbitrary proposition, q. It may be that Bill does not have the material conditional in mind. As for the question itself, I interpret it in such a way that there is an implied link between A and B, which indicates that Bill is not intending ‘implies’ to stand for the material conditional, but rather for the conditional of some sort of relevance logic, say R. Unlike the standard extensional predicate calculus, there is more than one such system R. Until the system has been specified, one must rely on the logical principles found in natural language and, hence, on some kind of informal reasoning with some math tacked on as an adjunct to the argument. There is nothing wrong in doing this, except that when this takes place in mathematics, what is not spelled out is generally clearly understood; this is not yet the case for logics of relevance and entailment.

Nevertheless, any standard mathematization of economics will be based on the extensional predicate calculus which relies on the material conditional. Keynes was aware of this problem and it was one of the reasons he felt that the mathematical tools of his time were inappropriate for formalizing an economic theory that included the ‘psychological states’ of the individual (such as animal spirits). This is still, pretty much, the case today. Except that we now have game theory, which is an advance on the mathematical tools available to Keynes – although the minimax theorem that von Neumann had already proved a few years before the publication of *The General Theory* would not, on its own, have been of much use to Keynes.

As for inconsistency and logic, there seems to be an implicit reference to Goedel’s incompleteness theorem. The theorem says that any consistent formal system rich enough to express arithmetic with both addition and multiplication will be incomplete, in the sense that it can formulate more than it can prove using its own resources. (Arithmetic with addition alone is, in fact, complete.) Goedel’s theorem is based on the system set out in Principia Mathematica and extensions of this system, which covers pretty much all of standard mathematics. Goedel achieved this result by a quite tricky use of self-reference that is not self-contradictory. The upshot of Goedel’s result is that any sufficiently complicated formalized theory, if consistent, will be incomplete: there will be some truths it can formulate but can not prove. This result still appears counterintuitive over half a century later, even though Goedel was possibly the greatest logician of the 20^{th} century.

## Hypotheses and Corroboration and Data Variation I
*May 4, 2011*

*Posted by larry in data, Duesenberry, economics, interpretation, Lakatos, Logic, nature of science, Statistics, Suppes.*


Duesenberry has an excellent discussion about the relationship between a theory or hypothesis and a test of that theory or hypothesis. He correctly notes that one can never prove a hypothesis or theory but fails to give a reason why this should be so. He also does not mention the Duhem-Quine problem in the testing of hypotheses.

To simplify the discussion, I will consider the testing of a single hypothesis, but what I say applies to theories as well. The reason that a given hypothesis, H, can not be proved, or verified, is a logical one. Most scientific hypotheses are in the form of universal generalizations: for instance, for all x, Ax implies Bx. Now, in order to prove or verify that all As are Bs, one would have to be able to inspect, in principle, every thing that is an A and/or a B, past, present and future. This is impossible. Hence, general laws that are in the form of universal generalizations can never be verified. But they can be falsified, again for logical reasons. All you need to falsify the hypothesis that for all x, Ax implies Bx, that is, that all As are Bs, is to find an A that is not a B. A simple example is the eponymous generalization, once believed, that all swans are white, that is, that for all x, if x is a swan, then x is white. To falsify this, you need to find one black swan, that is, one thing x that is a swan and is not white. Not only is this possible in principle, such swans were discovered in Australia. The major difference between a hypothesis H and a theory T is that a theory can be seen as a conjunction of related hypotheses. A hypothesis H can therefore be viewed as a smallest theory.

There is, therefore, an asymmetry between verification and falsification: universal scientific generalizations (scientific laws) can not be verified though they can be falsified. On the other hand, existential generalizations, of the form there is an x such that Ax (e.g., there is at least one swan), can be verified but not falsified. It is possible to show that there is a swan by finding one, but impossible, for logical reasons similar to those above, to prove that there are none on the basis that one has yet to be found. The situation is even more complicated than I have described, involving other factors, but that is for another time. (It is recommended that the works of Imre Lakatos, such as the methodology of scientific research programs, be consulted; his conceptual scheme is non-trivial and more than just interesting. Patrick Suppes’ article on models of data (http://suppes-corpus.stanford.edu/articles/mpm/41.pdf) is also worth reading.) The important lesson to take away is that for a hypothesis to be scientific, it must be falsifiable in principle, although adhering to this requirement involves considerable complexity and is not without its difficulties.

The Duhem-Quine problem is more complicated. It underlies what is known as the method of saving the hypothesis. According to the Duhem-Quine principle, it is always possible to save a hypothesis from falsification or refutation, owing to the logical nature of the testing process. To show this, a little technical detail is required. When a hypothesis is tested, the conditions under which the test is conducted – the experimental or field conditions, assumptions about the influence of the observer, and the like – are also under test. The experimental design may be wrong, the investigator may be unconsciously influencing the experiment or observation, or the test apparatus may be faulty. This list can be extended ad infinitum, though for all practical purposes it is inevitably finite and small. The logic of the situation is this. Suppose you have a hypothesis H from which you can deduce a proposition concerning an event E. In the testing scenario just described, you assume H to be true and look to see whether E obtains or not. If you find E, then while you have not proved or verified H, you have, as Popper would have said, corroborated it. That is, you have made the truth of H appear more likely.

Now let us suppose that, on the assumption that H is the case, you fail to observe E. One can infer from this that H or something else being assumed is not the case. The assumptions consist of H & C & B & Q, where C denotes the experimental or observational conditions, B the influence or bias of the observer, and Q any additional factors that might be influencing the outcome of the test. So, if E is not observed, instead of falsifying H you can save the hypothesis by rejecting C or B or Q. You can then claim that E really does follow from H; it is just that this test failed to substantiate this particular outcome because the test itself was flawed.
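The logic of saving the hypothesis can be sketched directly. The truth-value assignments below are illustrative: each one refutes the conjunction H & C & B & Q (as a failed observation of E demands) while leaving H itself untouched.

```python
# Duhem-Quine sketch: a failed prediction refutes the conjunction
# H & C & B & Q by modus tollens, not H alone.

def conjunction_refuted(E_observed):
    """If E is not observed, the whole conjunction of assumptions fails."""
    return not E_observed

# Failing to observe E refutes H & C & B & Q taken together...
assert conjunction_refuted(E_observed=False)

# ...but each assignment below falsifies the conjunction while keeping H true:
# the hypothesis is 'saved' by blaming a different auxiliary assumption.
ways_to_save_H = [
    {"H": True, "C": False, "B": True,  "Q": True},   # blame the test conditions
    {"H": True, "C": True,  "B": False, "Q": True},   # blame observer bias
    {"H": True, "C": True,  "B": True,  "Q": False},  # blame other factors
]
for a in ways_to_save_H:
    assert not (a["H"] and a["C"] and a["B"] and a["Q"])  # conjunction still false
    assert a["H"]                                         # yet H survives
print("H saved in", len(ways_to_save_H), "different ways")
```

The point of the sketch is purely logical: nothing in the failed test singles out H as the conjunct to reject.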

As Duesenberry discusses, there is another factor involved in the testing of a hypothesis. And this is that even should you succeed in corroborating H, all you have shown is that for the data at hand and under the conditions of the test, H seems to explain the data better than a set of alternatives, not that it is true *simpliciter*. This state of affairs can, however, change and another hypothesis can take the place of H as the favored one. This process of replacement can be highly contentious.

As Duesenberry himself notes, the data available to economists are often not very good. Not only that, but the variation inherent in such data more often than not goes unanalyzed. Economists often present data without error coefficients and the like. Nor do they conduct statistical hypothesis tests even when it is not obvious, from ‘eye-balling’ the data, that H0 explains the data better than some alternative from a set of alternative hypotheses, H1, …, Hn. They appear to assume that the data ‘speak for themselves’, which they do not. Data, to make sense, must be interpreted, and that means placing them in an interpretive context, that is, a theoretical context. Otherwise, there is no difference between a set of data and a list of numbers or names in a phone book. In saying this, I am not arguing that statistical hypothesis testing is essential, only that it is not carried out even where it would be helpful. Irrespective of this, data should never be presented without error coefficients, unless the differences in the data obviously swamp any inevitable errors the data set may contain. But how often is that going to be the case?
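To illustrate why ‘eye-balling’ means without error bars can mislead, here is a minimal sketch using Python’s standard library. The two data series and their values are entirely hypothetical.

```python
# Sketch: report means WITH standard errors, then ask whether the
# difference between series actually swamps the combined error.
import math
import statistics as st

def mean_and_se(xs):
    """Sample mean together with its standard error."""
    return st.mean(xs), st.stdev(xs) / math.sqrt(len(xs))

series_a = [2.1, 2.4, 1.9, 2.6, 2.0, 2.3]   # hypothetical observations
series_b = [2.5, 2.2, 2.8, 2.4, 2.7, 2.6]

(ma, sa), (mb, sb) = mean_and_se(series_a), mean_and_se(series_b)
gap = abs(ma - mb)
combined_error = math.sqrt(sa**2 + sb**2)

print(f"A = {ma:.2f} ± {sa:.2f}, B = {mb:.2f} ± {sb:.2f}")
# The bare means differ, but only the comparison against the combined
# error tells us whether that difference is worth taking seriously:
print("gap clearly exceeds combined error:", gap > 2 * combined_error)
```

Presenting only `ma` and `mb` would invite the reader to see a difference; the error terms are what license (or forbid) that reading.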

I should mention that this is not always the case, in either the present or the past. Duesenberry (1949) himself cites references with statistical content – notably Keynes’ ‘A Statistical Testing of Business Cycle Theories’ (1939), Trygve Haavelmo’s ‘The Probability Approach in Econometrics’ (1944), and G. Udny Yule’s ‘Why Do We Sometimes Get Nonsense Correlations’ (1926) – along with eminent social psychologists such as Abram Kardiner and Leon Festinger, the latter of whose theory of cognitive dissonance has influenced Akerlof, the psychoanalyst Karen Horney (*The Neurotic Personality of Our Time*, 1937), and the social scientist Thorstein Veblen (*The Theory of the Leisure Class*, 1934). There is no reference to Talcott Parsons, who was probably the most famous Harvard sociologist (in the Department of Social Relations) with an economics background at the time of the publication of Duesenberry’s *Income, Saving and the Theory of Consumer Behavior* (1949). It may be that, although both were at Harvard at the time, Duesenberry felt that Parsons’ rather idiosyncratic approach was tangential to his own. I will come back to this issue regarding the different, and possibly not easily reconcilable, approaches of sociologists, anthropologists and economists to the fields of economics and political economy.

##
Economics, Logic & Science
*January 9, 2011*

*Posted by larry in economics, Logic, Science.*

add a comment

I would like to reflect on Bill Mitchell’s observation on Sunday, 9 January in his blog, that “Macroeconomics is hard to learn because it involves these abstract variables that are never observed”, such as the interest rate and the aggregate price level.

This is undoubtedly true. However, it seems to me that at least three other factors are involved. I also want to propose a more useful set of models as a basis for the scientific grounding of economic theory.

**First:** the abstractions economists use are not often tied closely enough to concrete examples. Take the difference between nominal GDP and real GDP. While I understand that Bill is being concise and assuming a certain level of understanding when he refers to these, many people don’t understand the difference without practical illustrations. [See * fn below]
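A concrete illustration of the nominal/real distinction, with made-up numbers: real GDP deflates nominal GDP by a price index, so that measured growth reflects quantities rather than prices.

```python
# Hypothetical figures: nominal GDP in current prices, and a GDP
# deflator with base year 2010 = 100.
nominal_gdp = {2010: 1000.0, 2020: 1500.0}   # billions, current prices
deflator    = {2010: 100.0,  2020: 125.0}    # price index

# Real GDP in base-year prices: nominal divided by (deflator / 100).
real_gdp = {y: nominal_gdp[y] / (deflator[y] / 100.0) for y in nominal_gdp}

nominal_growth = nominal_gdp[2020] / nominal_gdp[2010] - 1
real_growth = real_gdp[2020] / real_gdp[2010] - 1

print(real_gdp[2020])  # 1200.0
print(f"nominal growth {nominal_growth:.0%}, real growth {real_growth:.0%}")
```

On these invented figures, nominal GDP grew 50% but real output grew only 20%; the rest is price inflation. It is exactly this gap that the uninitiated reader needs spelled out.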

**Second:** one economist can take a given set of data and conclude A, while another, taking the same data, may conclude not-A and never mention why it would be wrong to conclude A on the basis of the data at hand. Consider what is going on *re* the current crisis. This latter propensity does tend to make economics look more like an ideological enterprise than a rationalist, scientific one (à la Lakatos). I am not suggesting Bill does the latter, only that it is often done. For someone not trained in a social science, these two factors combined can make economic discourse look like an exercise in nonsense. (Part of the reason for this lies in the values that are smuggled, explicitly or implicitly, into assessments of the economic state of a society at a given point in time. These values form part of a complete economic explanation, but I must leave this aside for the moment.)

**Third:** a further contributing factor is that economists sometimes give the impression that economics is like physics. This is premature at best. Given that a standard formalization of physics, by no means complete, is in terms of an **extensional** predicate calculus of fourth order, and that certain economic explanations need to refer to the psychological states of actors, economic formalization would need to be intensional in character, i.e., non-extensional (non-truth-functional). There are a number of intensional logics, but none developed with economics in mind. This renders physics a misleading guide for economics.

**Finally** – and more positively – I want to propose that theoretical ecology might be a more relevant mathematical guide for economists. Ecological models tend to be quite specific to particular species and environments, or to specific activities found in certain predator-prey relations (such as the Lotka-Volterra equations that some economists refer to). This suggests preferring specific models for specific situations over general models. Ecological models don’t include the psychological states of the animals, or not directly. They deal with the animals’ behavior, not their mental states, whatever these may be.
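The Lotka-Volterra equations mentioned above can be sketched with a simple forward-Euler integration. All parameter values and starting populations here are illustrative, not taken from any particular ecological study.

```python
# Lotka-Volterra predator-prey dynamics:
#   dx/dt = a*x - b*x*y    (prey grows, is eaten)
#   dy/dt = -c*y + d*x*y   (predator starves, feeds on prey)

def lotka_volterra(x, y, a=1.0, b=0.1, c=1.5, d=0.075, dt=0.001, steps=20000):
    """Integrate with small Euler steps; returns the (prey, predator) trajectory."""
    traj = [(x, y)]
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (-c * y + d * x * y) * dt
        x, y = x + dx, y + dy
        traj.append((x, y))
    return traj

traj = lotka_volterra(x=10.0, y=5.0)
# Populations stay positive and cycle rather than settling at a point:
assert all(x > 0 and y > 0 for x, y in traj)
print("final (prey, predator):", traj[-1])
```

The model is deliberately specific: it describes one stylized interaction between two populations, and says nothing about what the animals think. That specificity, rather than physics-style generality, is the feature being recommended.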

Ecologists know that in dealing with human behavior, they would need to take mental states into account in some way. This can’t be done with the mathematical tools currently available, so they are unable to mathematize such accounts completely. Since most of their work does not need to take account of human mental states, most of their modeling can safely ignore them. But it does not follow that mental states are irrelevant to a complete account that includes human activity – which is what might be concluded were physics taken as the field to be “mimicked”. Hence, if economists were to view their theoretical activity as analogous to that of ecologists, it might be easier to view whatever degree of mathematization is carried out as an incomplete approximation and, thus, to see **where** and **what** further developments are needed.

My point, which is not a deep one, is that it isn’t only the abstractness of the discussion that makes economic discourse hard for the uninitiated to follow, it is also the lack of clear, concrete applications employed as explanatory aids in an appropriate logical context.

_____________________________

*I want to take this a bit further, simplifying if I may, using the predicate calculus as an example and, in particular, the definition of “if, then” or “if …, then ___”. In the propositional calculus, as I expect you know, the variables range over sentences. Hence, where A and B are arbitrary sentences, “if A then B” is also a sentence. Now, these are not just any old sentences, but declarative sentences, such as “John is a large man”, or “the man in the car is carrying a gun”. It also helps if one explains the ‘use-mention’ distinction.

In the predicate calculus, we have individual variables and predicate variables, covering nouns, pronouns, and adjectives (adverbs are explicitly avoided). We then have the universal and existential quantifiers, for which I shall use A and E. I’m sure you are familiar with all this. (But a superb discussion of logical grammar can be found in Belnap, “Grammatical Propaedeutic”, in Anderson and Belnap, *Entailment* (vol. 1).)

At this point, the general reader benefits greatly from a concrete example. If a character in a B-western is wearing a black shirt and a black hat, then this character in such a film is a villain. If this character in such a film is a villain, then this character will either end up dead or in jail at the end of the film. Simplifying, we can restate the situation this way. Every character in a B-western wearing a black shirt and a black hat is a villain. Every villain in a B-western ends up dead or in jail at the end of the film.

This can be formalized as: Ax(if Wx, then Bx); Ax(if Bx, then Jx). We can conclude from this, via certain rules of inference and other assumptions, that Ax(if Wx, then Jx), that is, every character in a B-western wearing a black shirt and a black hat will end up dead or in jail at the end of the film. While this is logically trivial, it may not be trivial in a setting where someone is trying to figure out how to do this sort of thing. It is not logically trivial, however, to formalize the following: it is an ill wind that blows nobody any good.
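For the general reader, the syllogism can even be checked mechanically over a small domain. The cast below is invented for the illustration; `holds` simply evaluates a universally quantified conditional over it.

```python
# Checking Ax(if Wx, then Bx), Ax(if Bx, then Jx) |- Ax(if Wx, then Jx)
# over a toy domain of B-western characters (all names invented).

cast = [
    {"name": "Black Bart",  "black_outfit": True,  "villain": True,  "dead_or_jailed": True},
    {"name": "Sheriff Tex", "black_outfit": False, "villain": False, "dead_or_jailed": False},
    {"name": "Snake Eyes",  "black_outfit": True,  "villain": True,  "dead_or_jailed": True},
]

def holds(domain, antecedent, consequent):
    """Evaluate 'for all x, if antecedent(x) then consequent(x)' over a domain."""
    return all(consequent(c) for c in domain if antecedent(c))

premise1 = holds(cast, lambda c: c["black_outfit"], lambda c: c["villain"])
premise2 = holds(cast, lambda c: c["villain"], lambda c: c["dead_or_jailed"])
conclusion = holds(cast, lambda c: c["black_outfit"], lambda c: c["dead_or_jailed"])

# Hypothetical syllogism: whenever both premises hold, the conclusion must too.
assert not (premise1 and premise2) or conclusion
print(premise1, premise2, conclusion)  # True True True
```

Of course, the logical point holds for any domain whatsoever; the finite check merely makes the inference pattern visible to someone learning to formalize.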

ADDENDUM: For a rather concise, albeit incomplete, discussion of the complexity of the scientific enterprise, one can do no better than to have a look at Patrick Suppes’ “Models of Data” (1962).

##
Problem analysis vs. Problem solutions
*January 26, 2009*

*Posted by larry in economics, Logic.*

Tags: analysis, economics, logic, solution

4 comments

Unfortunately, there is no necessary logical connection between the analysis of a problem and its solution. The two are essentially independent. So, someone could analyze a problem correctly but provide incorrect proposals for solving said problem. In accepting Roubini’s assessment of the problem, we do not need to accept his ideas on how it can be solved or what he thinks might be poor solutions, as his philosophical/theoretical stance will largely determine what he believes to be viable.

This is a problem with a lot of discussions in economics: many assumptions are either not made explicit enough or not made explicit at all. Other writers introduce ideas they think are new but that have actually been around for years. For example, Soros’s notion of reflexivity (a kind of feedback loop in economic behavior) has been a known problem in philosophy and parts of social science for over 50 years, yet few economists besides Soros acknowledge this. Neoclassical economists like Stigler and Friedman ignored it completely and possibly weren’t even aware of the issue. Keynes was aware of the problem, though he didn’t discuss it in these terms.

Since the solution of a serious economic problem invariably involves a political (policy) dimension, and economists aren’t very good at incorporating socio-political considerations into their analyses, their proposed solutions should be inspected closely.