Piketty & the recent US Supreme Court decision
April 3, 2014. Posted by Larry in economics, Justice, Philosophy.
“…[I]n the long run, the return on capital tends to be greater than the growth rate of the economies in which that capital is located.
What this means [concludes Campos] is that in a modern market economy the increasing concentration of wealth in the hands of the already-rich is as natural as water flowing downhill, and can only be ameliorated by powerful political intervention, in the form of wealth redistribution via taxes, and to a lesser extent laws that systematically protect labor from capital. (Piketty argues that, because of historical circumstances that are unlikely to be repeated, this sort of intervention happened in the Western world in general, and in America in particular, between the First World War and the early 1970s.)”
This is as pessimistic as it gets. What is not mentioned in most discussions of this outcome is the view put forward a few years ago by Republicans that corporations are individuals, that is, people. This is a logical howler of immense proportions, but in this context, such fine distinctions seem to be irrelevant. It is this view that partially underpins the Supreme Court decision that, just like people, corporations have free speech rights that need protection. This apparently includes the way that their CEOs and board members spend the corporation’s money.
Piketty’s data is extensive and spans hundreds of years. It is a door-stopper of a book, around 600 pages of text and 100 pages of notes, not including the material Piketty has placed online. One could be forgiven for concluding that the greater the economic inequality, the greater the chance of political plutocracy, and that this is the inevitable consequence of the political implementation of neoclassical economic principles. This relationship seems to me to entail that politics cannot be divorced from economics, contrary to a central tenet of the neoclassical paradigm. It seems to follow, therefore, as the night the day, that since politics can never be value-free, the idea that economics can be, as claimed by the neoclassical economists, is a non-starter.
I would like to think that this constitutes another nail in the coffin of the neoclassical paradigm, but it doesn’t look like it. Despite the mounting evidence against it, the dominant economic paradigm rumbles on with hardly a hesitation along the way.
Also have a look at this and the links therein: http://www.slate.com/blogs/moneybox/2014/04/02/wealth_inequality_is_it_worse_than_we_thought.html. It is about new research by Saez and Zucman on US wealth disparity. As researchers have pointed out previously, it isn’t the 1% who are the biggest gainers from the financial crisis, it is the 0.1%, the 1/10th of 1%.
Necessary & Sufficient conditions: A Medical Example
March 22, 2014. Posted by Larry in Logic, Medicine, Science.
A sufficient condition A for B is one where A being the case is sufficient for bringing about B.
A necessary condition B for A is one where if B is not the case, then A won’t be either.
Logically it looks like this. A is sufficient for B: if A, then B.
B is necessary for A: if not-B, then not-A.
Ex. of a necessary condition: oxygen (B) is necessary for being able to breathe (A). Therefore, if not-B, then not-A.
It is easier to come up with necessary conditions than with sufficient ones. For instance, what is sufficient for being able to breathe?
A set of conditions that is both necessary and sufficient for A is often considered to be equivalent to A.
A slightly more realistic and complicated way of expressing this set of relationships is this. A is sufficient for D and B is necessary for D. This translates to (if A, then D) and (if D, then B). The contrapositive of each yields (if not-D, then not-A) and (if not-B, then not-D). It is relatively clear, I think, that A and B each have a distinct relationship to D. (I am ignoring the issue of transitivity illustrated in the example.) The potential complexity of this relationship is borne out in the following medical example from research into the dementias.
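These conditional relationships can be checked mechanically. Here is a minimal sketch in Python (the `implies` helper is mine, simply the material conditional) that verifies contraposition and the transitivity just noted, over all truth assignments:

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q false."""
    return (not p) or q

for a, b, d in product([True, False], repeat=3):
    # (A -> D) is logically equivalent to its contrapositive (not-D -> not-A).
    assert implies(a, d) == implies(not d, not a)
    # (A -> D) together with (D -> B) entails (A -> B): transitivity.
    if implies(a, d) and implies(d, b):
        assert implies(a, b)

print("contraposition and transitivity hold in all eight truth assignments")
```

Nothing here is specific to the medical case; it simply confirms that the two inference patterns used below are valid.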
Here is a quote from a medical investigation of causes of Alzheimer’s and other dementias (from Daily Kos).
“Researchers have found that a protein active during fetal brain development, called REST, switches back on later in life to protect aging neurons from various stresses, including the toxic effects of abnormal proteins. But in those with Alzheimer’s and mild cognitive impairment, the protein — RE1-Silencing Transcription factor — is absent from key brain regions.”
“Our work raises the possibility that the abnormal protein aggregates associated with Alzheimer’s and other neurodegenerative diseases may not be sufficient to cause dementia; you may also need a failure of the brain’s stress response system,” said Bruce Yankner, Harvard Medical School professor of genetics and leader of the study, in a release.
While the situation is more complicated than the simple example I gave initially, the logic is the same.
From the quote, we have: protein aggregates A; failure of the brain’s stress response system B; presence of REST, the RE1-silencing transcription factor, R (so its absence is not-R); dementia D.
The second paragraph of the above quote may be saying that A & B is sufficient for D.
But from the first paragraph, we also have the absence of RE1 (not-R) as a necessary condition for D. I.e., if D, then not-R.
So, A&B is sufficient for D, hence (if A&B, then D). But, not-R is necessary for D. Or, equivalently, (if R, then not-D). I.e., R is sufficient for not-D.
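These entailments can be verified by brute force. A minimal Python sketch (the variable names follow the text above, not the study itself):

```python
from itertools import product

def implies(p, q):
    """Material conditional."""
    return (not p) or q

# Premises from the quote: (A and B) -> D, and D -> not-R.
# In every truth assignment satisfying both, R -> not-D follows (contraposition),
# and hence R -> not-(A and B) as well.
for a, b, r, d in product([True, False], repeat=4):
    if implies(a and b, d) and implies(d, not r):
        assert implies(r, not d)
        assert implies(r, not (a and b))

print("R is sufficient for not-D, and for not-(A and B), given the premises")
```

In words: if REST is present, dementia is absent, and so is the combination of aggregates plus stress-response failure, assuming both reported relationships hold.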
As in the simple example, A, B, and R are related in a complex way to D, a relationship that is not entirely spelled out in the quote.
We are, therefore, left with an important question: what is the status of A & B with respect to R? Is R a component of either A or B? The quote doesn’t link all these factors together. While this gap may be obvious from the quote, the logical situation underlying it may not be. Hopefully, it now is, and it is also clearer what additional relationships need to be explored in order to lead to a better understanding of the dementias, particularly Alzheimer’s, and thereby better control of their onset and progression, if not their complete defeat.
Bayes’ theorem: a comment on a comment
March 10, 2014. Posted by Larry in Bayes' theorem, Logic, Philosophy, Statistics.
Assume the standard axioms of set theory, say, the Zermelo-Fraenkel axioms.
Then provide a definition of conditional probability:

P(A|B) = P(A ∩ B)/P(B), where P(B) > 0.

Because set intersection is commutative (A ∩ B = B ∩ A), you can have this:

P(A|B)P(B) = P(A ∩ B) = P(B ∩ A) = P(B|A)P(A), hence P(A|B) = P(B|A)P(A)/P(B).
What we have here is a complex, contextual definition relating a term, P, from probability theory with a newly introduced stroke operator, |, read as “given”, so the locution becomes, for instance, the probability, P, of A given B. Effectively, the definition is a contextual definition of the stroke operator, |, “given”.
Although set intersection (equivalent in this context to conjunction) is commutative, conditional probability isn’t, which is due to the asymmetric character of the stroke operator, |. This means that, in general, P(A|B) ≠ P(B|A). If we consider the example of Data vs. Hypothesis, we can see that in general, for A = Hypothesis and B = Data, that P(Hypothesis|Data) ≠ P(Data|Hypothesis).
Now, from the definition of “conditional probability” and the standard axioms of set theory, which have already been implicitly used, we obtain Bayes’ theorem trivially, mathematically speaking, via a couple of simple substitutions.
Or the Bayes-Laplace theorem, since Laplace discovered the rule independently. However, according to Stigler’s law of eponymy, theorems are invariably attributed to the wrong person (Stigler, “Who Discovered Bayes’ Theorem?”, in Stigler, Statistics on the Table, 1999).
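A small numeric check makes the derivation concrete. The joint and marginal probabilities below are invented purely for illustration:

```python
# Invented example distribution: P(A and B) and the marginals P(A), P(B).
p_joint = 0.1
p_a, p_b = 0.4, 0.2

# Conditional probabilities from the definition P(A|B) = P(A ∩ B)/P(B).
p_a_given_b = p_joint / p_b   # 0.5
p_b_given_a = p_joint / p_a   # 0.25

# The stroke operator is not commutative: P(A|B) != P(B|A).
assert p_a_given_b != p_b_given_a

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12

print(p_a_given_b, p_b_given_a)
```

The two assertions correspond exactly to the two points made above: asymmetry of the “given” operator, and the theorem itself falling out of the definition by substitution.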
Now, since we have seen that Bayes’ theorem follows from the axioms of set theory plus the definition of “conditional probability”, the following comments from a recent tutorial text on Bayes’ theorem can only be interpreted as odd. The following quote is from James V. Stone’s Bayes’ Rule: A Tutorial Introduction to Bayesian Analysis (3rd printing, Jan. 2014).
If we had to establish the rules for calculating with probabilities, we would insist that the result of such calculations must tally with our everyday experience of the physical world, just as surely as we would insist that 1+1 = 2. Indeed, if we insist that probabilities must be combined with each other in accordance with certain common sense principles then Cox (1946) showed that this leads to a unique set of rules, a set which includes Bayes’ rule, which also appears as part of Kolmogorov’s (1933) (arguably, more rigorous) theory of probability (Stone: pp. 2-3).
Bayes’ theorem does not form part of Kolmogorov’s set of axioms. Strictly speaking, Bayes’ rule must be viewed as a logical consequence of the axioms of set theory, the Kolmogorov axioms of probability, and the definition of “conditional probability”.
Whether Kolmogorov’s axioms for probability tally with our experience of the real world is another question. The axioms are sometimes used as indications of non-rational thought processes in certain psychological experiments, such as the Linda experiment by Tversky and Kahneman. (For an alternative interpretation of this experiment that brings into question the assumption that people either do or should reason according to a simple application of the Kolmogorov axioms, cf. Luc Bovens & Stephan Hartmann, Bayesian Epistemology, 2003: 85-88).
A matter of interpretation
In the discussion above, the particular set theory and the Kolmogorov axioms mentioned and used were interpreted via the first-order extensional predicate calculus. This means that both theories can be viewed as not involving intensional contexts such as beliefs. The probability axioms in particular were understood by Kolmogorov and others using them as relating to objective frequencies applicable to the real world, not to beliefs we might have about the world. For instance, an unbiased coin and an unbiased die are considered, in the ideal case admittedly, to have probabilities of 1/2 for a given side of the coin and 1/6 for a given side of a six-sided die on each flip or throw of the object in question. In these two cases, only behavior observed over a long run of trials can produce data that will show whether our assumption that the coin and the die are unbiased is in fact true.
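This frequentist reading can be illustrated with a quick simulation; the sample size is arbitrary:

```python
import random

random.seed(0)

# Long-run relative frequencies for a fair coin and a fair six-sided die:
# they approach 1/2 and 1/6 respectively as the number of trials grows.
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
sixes = sum(random.randint(1, 6) == 6 for _ in range(n))

print(heads / n)  # close to 0.5
print(sixes / n)  # close to 1/6, i.e. about 0.1667
```

Whether a real coin or die is unbiased is then an empirical question: one compares observed frequencies like these against the ideal values.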
Why does this matter? Simply because Bayes’ theorem has been interpreted in two distinct ways: as a descriptively objective statement about the character of the world, and as a subjective statement about a user’s beliefs about the state of the world. The derivation above proceeds from two theories that are considered to be non-subjective in character. One can then reasonably ask: where does the subjective interpretation of Bayes’ theorem come from? Two answers suggest themselves, though they are not the only ones. One is that Bayes’ theorem is arrived at via a different derivation than the one I considered, relying, say, on a different notion of probability than Kolmogorov’s. The other is that Bayesian subjectivity is introduced by means of the stroke (or ‘given’) operator, |.
Personally, I see nothing subjective about statements that the probability of obtaining a H or a T on the flip of a coin is .5, or that of obtaining one particular side of a six-sided die is 1/6. These probabilities are about the objects themselves, not about our beliefs concerning them. Of course, this leaves open the possibility of alternative interpretations of probability in other contexts, say the probability of guilt or innocence in a jury trial. Whether the notions of probability involving coins or dice are the same as those involving situations such as jury trials is a matter for further debate.
Read this book: Ann Pettifor, Just Money
February 24, 2014. Posted by Larry in economics, money.
After you finish reading this short book, devoted to explicating the nature and function of money in modern societies, you will be able to see not only that almost everything Osborne and Balls say about this aspect of the economy is wrong, but why it is wrong. The intended audience is that group of people who don’t quite understand what money is or how it works, which unfortunately includes both Osborne and Balls, so it is written in as jargon-free and elementary a manner as possible. It is in Chapter 4 that she really drives the knife into the neoclassical acolytes and their financial backers.
In reading this book, you will obtain a greater understanding of what is wrong with the austerity policies of this and other governments and what to do about it. I would recommend reading Just Money before going on to other more complicated, but also reasonably accessible, books like Randall Wray’s Modern Money Theory. Just Money and Modern Money Theory, in that order, the latter going beyond the former to cover the entire economic system, will tell you almost everything you need to know in order to understand what is wrong with contemporary economic policies. How they can be so wrong and their justifications so misguided is sometimes difficult to believe.
The subject is not easy, but Ann Pettifor knows that and does her best to explicate what is difficult to explicate. Most journalists also fail to understand clearly the function of money in our society, even when it is supposedly their specialty, which makes this book a must-read. It is available in pdf format from PRIME or in Kindle format from Amazon.
Saving the Hypothesis
August 22, 2013. Posted by Larry in economics.
An Inherent Complication in Assessing Tests of Scientific Hypotheses
The testing of scientific hypotheses is not as straightforward as it often looks, which perhaps contributes to the fact that many politicians do not pay as much attention to the character and context of scientific evidence as they should.
Most of us are familiar with the probabilistic character of many scientific hypotheses and the ways in which these can affect the testing scenarios. But what is not as well-known is a strategy for “saving the hypothesis” that is independent of probabilistic considerations. This strategy makes the testing of scientific hypotheses more complicated than Karl Popper and others thought it was.
The methodology described briefly by the reviewer is that developed by Karl Popper, which is that scientists should concentrate on falsifying their theories rather than corroborating them. Many working scientists use Popper’s falsification model as their guide. Effectively, what you do is list the initial conditions and the lawful regularities and, in the case of the more formalized sciences, deduce consequences from these, for instance a particular event – a procedure that has been discussed in physics and in other more formalized disciplines. If the event predicted by the theory under test does not occur, that is, is not observed, then Popper concludes that the theory has been decisively falsified. If the event does occur and is observed, the theory has only been corroborated, that is, the likelihood that it is true has increased. For strictly logical reasons, no theory can ever be conclusively verified, that is, proved to be true, though Popper never explicitly points this out.
There is, however, a kind of get out of jail free card that can be used to “save the hypothesis”. This is known as the Duhem-Quine gambit. When you set up your theory testing scenario, in addition to listing all the initial conditions that apply and the regularities that are involved, there are a number of auxiliary hypotheses that usually go unstated with respect to the testing scenario. In physics, this is because experimental physicists often know what these possible contaminants of the testing process are and attempt to control for them. In less formalized sciences, such influences may be unknown or only informally considered. The Duhem-Quine gambit, when utilized appropriately, is a legitimate procedure, not an attempt to cheat the testing process.
Basically, if the result of the test of a given scientific hypothesis is not observed, for whatever reason, instead of directly falsifying the hypothesis or theory under test, the testers can select one or more of the auxiliary hypotheses as the culprit, such as the nature and operation of the equipment, any presumed biases on the part of the testers, sampling problems, the general environment in which the test takes place, and so forth. Selecting one or more of the auxiliary hypotheses that inevitably accompany any scientific testing situation will enable the scientist to “save the hypothesis”. Of course, one cannot continue to blame failure to obtain a result the theory says should have been observed but wasn’t on the auxiliary hypotheses. Continued failure to observe a result predicted by a theory must eventually lead to that theory (or a portion of it) being considered to be falsified.
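The logical shape of the gambit can be sketched in a few lines of Python; H is the hypothesis under test, X stands for the conjunction of auxiliary hypotheses, and O for the predicted observation (all names are mine):

```python
from itertools import product

def implies(p, q):
    """Material conditional."""
    return (not p) or q

# Popper's schema: H together with auxiliaries X predicts O, i.e. (H and X) -> O.
# A failed prediction (not-O) refutes only the conjunction: not-(H and X).
for h, x, o in product([True, False], repeat=3):
    if implies(h and x, o) and not o:
        assert not (h and x)  # the conjunction is refuted, not H by itself

# H can survive: the assignment H=True, X=False, O=False is consistent
# with the premises, which is exactly the Duhem-Quine "save the hypothesis" move.
h, x, o = True, False, False
assert implies(h and x, o) and not o and h
print("a failed test refutes only the conjunction of H with its auxiliaries")
```

The code merely confirms the logical point in the paragraph above: blaming an auxiliary hypothesis is deductively legitimate, which is why only repeated failures can tell against H itself.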
Elliman’s point that more scientists should be involved, in independent ways, in policy decisions not subject to direct ministerial control becomes even more salient in light of the inevitable availability of the Duhem-Quine gambit, a central feature of any scientific testing scenario. That availability renders the assessment of the relevance of scientific evidence for policy purposes even more problematic than it might otherwise be considered to be.
Apostrophe Quiz & probs w/ WordPress
August 16, 2013. Posted by Larry in economics.
I had thought I would gloat a little over getting 100% on the Independent’s apostrophe quiz. But almost everyone else who took the quiz got most of them right. So either it wasn’t very hard, which is difficult to judge, or there is a self-selection bias with respect to those taking the quiz. Anyway, here is the link so you can, if you like, also take the quiz. That is, if, like me, you are an apostrophe nut.
You can also see how others have done.
By the way, the hyperlink function isn’t working in WordPress, so you will have to copy and paste the above link to go directly to the relevant page. Sorry. I can’t seem to add Media either.
When I published the post, the link did work. But the Media button still didn’t. Hmmm.
Some physicists should stick to their lasts
July 22, 2013. Posted by Larry in economics.
Here is one of the reasons I think some physicists should recognize their limitations. Forshaw is one of them. Should you have the time to peruse the article, you will find this:
“The most famous equation in finance was published in 1972 and is named after American economists Fischer Black and Myron Scholes. The Black-Scholes equation provided a means to value “European options”, which is the right to buy or sell an asset at a specified time in the future. Remarkably, it is identical to the equation in physics that determines how pollen grains diffuse through water.”
While Forshaw is right to claim that this is a famous equation, he neglects to mention that it is false in its intended application/interpretation. That it is identical to a physics formula rather undercuts the reasons for giving the authors the Swedish Bank prize. For your convenience, here is their formula:

∂V/∂t + (1/2)σ²S² ∂²V/∂S² + rS ∂V/∂S − rV = 0

where V is the value of the option, S the price of the underlying asset, r the risk-free interest rate, and σ the volatility of the asset.
Now, if the market conformed to two of their assumptions, one being central, that of the Gaussian character of the underlying distribution, all might be well. But market distributions are well known to be non-normal, as Mandelbrot showed more than 20 years ago. The Cauchy-Mandelbrot distribution, in fact, has no calculable mean or variance, hence no calculable volatility, which makes the equation entirely useless for its intended purpose. Thorp independently came up with a set of equations similar to those of Bachelier, mentioned by Forshaw.
However, Thorp’s were initially developed for Blackjack, a situation more similar to the pollen example than that conceived by BSM. He was banned by every casino in Vegas for his trouble. But that didn’t bother him, as he had adequately tested his theory. Or perhaps I should say in this context, his policy recommendations on how to play Blackjack and win. (His recommendations require the player to compute large ratios in their heads, something most of us can’t do. Using a calculator is banned, as they think you’re counting cards which is a no-no.)
I am presuming Forshaw is thinking of Brownian motion, or something along those lines, which makes pollen movement in a liquid more or less random and approximately normally distributed. Neither assumption is true of the market.
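The contrast between Gaussian and Cauchy-type (fat-tailed) behavior can be sketched with a short simulation; the sample sizes are arbitrary. Sample means of normal draws settle near zero, while Cauchy draws, generated here as a ratio of two independent standard normals, refuse to settle, which is why mean- and variance-based volatility estimates break down on such distributions:

```python
import random
import statistics

random.seed(1)

def cauchy_sample(n):
    # Standard Cauchy variate as the ratio of two independent standard normals.
    return [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n)]

# Five sample means of 10,000 draws each, for normal and Cauchy data.
normal_means = [statistics.fmean(random.gauss(0, 1) for _ in range(10_000))
                for _ in range(5)]
cauchy_means = [statistics.fmean(cauchy_sample(10_000)) for _ in range(5)]

print("normal sample means:", [round(m, 3) for m in normal_means])
print("cauchy sample means:", [round(m, 3) for m in cauchy_means])
```

The normal means cluster tightly around zero; the Cauchy means jump around erratically no matter how large the sample, illustrating Mandelbrot’s point about the uselessness of volatility estimates for such distributions.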
What does this sort of thinking say about the burgeoning field of econophysics? Not much that is positive.
Just to show I am being even-handed, here is an assessment of market behavior by a physicist and a financial economist, Vasquez and Farinelli. Here is the link:
http://arxiv.org/pdf/0908.3043v1.pdf. They argue that there is geometric curvature and, therefore, path dependence in real market data, something anathema to the neoclassical paradigm.