
Piketty & the recent US Supreme Court decision April 3, 2014

Posted by Larry in economics, Justice, Philosophy.
Uh oh!
http://www.salon.com/2014/04/02/supreme_courts_abomination_how_mccutcheon_decision_will_destroy_american_politics/
It is looking worse for the 99% every week in the US, and eo ipso potentially in the UK. The piece linked to is by a law professor at the University of Colorado Boulder. Piketty's Capital in the Twenty-First Century gets a mention in this context, and it is relevant. I will quote from Campos: Piketty argues that

“…[I]n the long run, the return on capital tends to be greater than the growth rate of the economies in which that capital is located.

What this means [concludes Campos] is that in a modern market economy the increasing concentration of wealth in the hands of the already-rich is as natural as water flowing downhill, and can only be ameliorated by powerful political intervention, in the form of wealth redistribution via taxes, and to a lesser extent laws that systematically protect labor from capital. (Piketty argues that, because of historical circumstances that are unlikely to be repeated, this sort of intervention happened in the Western world in general, and in America in particular, between the First World War and the early 1970s.)”

This is as pessimistic as it gets. What is not mentioned in most discussions of this outcome is the view, put forward a few years ago by Republicans, that corporations are individuals, that is, people. This is a logical howler of immense proportions, but in this context such fine distinctions seem to be irrelevant. It is this view that partially underpins the Supreme Court decision that, just like people, corporations have free speech rights that need protection. This apparently includes the way that their CEOs and board members spend the corporation's money.

Piketty's data are extensive and run for hundreds of years. It is a door-stopper of a book: around 600 pages of text and 100 pages of notes, not including the material Piketty has placed online. One could be forgiven for concluding that the greater the economic inequality, the greater the chance of political plutocracy, and that this is the inevitable consequence of the political implementation of neoclassical economic principles. This relationship seems to me to entail that politics cannot be divorced from economics, contrary to a central tenet of the neoclassical paradigm. It seems to follow, therefore, as the night the day, that since politics can never be value-free, the idea that economics can be, as the neoclassical economists claim, is a non-starter.

I would like to think that this constitutes another nail in the coffin of the neoclassical paradigm, but it doesn’t look like it. Despite the mounting evidence against it, the dominant economic paradigm rumbles on with hardly a hesitation along the way.

Also have a look at this and the links therein: http://www.slate.com/blogs/moneybox/2014/04/02/wealth_inequality_is_it_worse_than_we_thought.html. It is about new research by Saez and Zucman on US wealth disparity. As researchers have pointed out previously, it isn't the 1% who are the biggest gainers from this financial crisis; it is the 0.1%, the 1/10th of 1%.

Varoufakis on game theoretic analysis of asymmetric expectations among the powerful & the powerless March 22, 2014

Posted by Larry in economics, Game Theory, Psychology, social justice.

This is for those who have some experience, knowledge or interest in game theoretic analyses of social interaction. Varoufakis is a political economist who is intimately familiar with game theory, having written a book about it with Hargreaves Heap some years ago. Economically, this research falls within the domain of what has become known as microeconomics, as distinct from macroeconomics: it focuses on social group expectations, not on the processes of the economic system as a whole.

There are links in the paper to more detailed and mathematical discussions of the issues. I don’t think that the PowerPoint slides are self-explanatory, as they assume more understanding of the mathematics underlying game theory than is perhaps possessed by the general population. But the conclusions Varoufakis reaches require no math to understand. They are:

[Image: Varoufakis, conclusions from the evolutionary game theory experiments, Spring 2014]

There are a number of books and articles dealing with applications of game theory to biology, one famous example being Evolution and the Theory of Games by the late John Maynard Smith, which Varoufakis mentions. I should point out that Richard Lewontin, the population geneticist, has written that the appropriation of game theory to evolutionary contexts altered the character of the theory to such an extent that it ought to be called something other than "evolutionary game theory". History and social convention have made the decision for him: the appellation has stuck.
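Varoufakis's own conclusions are in the slides, but the flavour of the evolutionary game theory Maynard Smith pioneered can be conveyed with the textbook Hawk-Dove game. Here is a minimal sketch (the payoff values V and C are assumptions for illustration, not Varoufakis's experimental parameters): replicator dynamics drive the population toward the evolutionarily stable mixture of V/C Hawks.

```python
# Replicator dynamics for Maynard Smith's Hawk-Dove game.
# Assumed payoffs for illustration: resource value V = 2, fight cost C = 4,
# so the evolutionarily stable fraction of Hawks is V/C = 0.5.
V, C = 2.0, 4.0

def payoffs(p):
    """Expected payoffs to Hawk and Dove when a fraction p plays Hawk."""
    hawk = p * (V - C) / 2 + (1 - p) * V  # H vs H: split V, pay the cost; H vs D: take V
    dove = (1 - p) * V / 2                # D vs H gets nothing; D vs D splits V
    return hawk, dove

p = 0.1  # initial fraction of Hawks
for _ in range(500):
    h, d = payoffs(p)
    mean = p * h + (1 - p) * d
    p = p * h / mean  # strategies grow in proportion to relative payoff
print(round(p, 3))  # settles at V/C = 0.5
```

Whatever the starting mixture, the population ends up at the stable mix of strategies rather than at all-Hawk or all-Dove, which is the basic result the biological literature builds on.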

To be fair to Varoufakis, he doesn't claim that these results are extraordinarily original, though they would seem to be unknown to mainstream economists, whose theoretical paradigm Varoufakis is concerned to attack. And he feels that one of the best ways of doing this is from within, as it were, a tactic used successfully by logicians for centuries to attack positions they don't (or didn't) like. (Have a look at the way Socrates, as presented by Plato, operates.)

Here is the link to a non-technical discussion of this topic, including the links mentioned above.

http://yanisvaroufakis.eu/2014/03/21/how-do-the-powerful-get-the-idea-that-they-deserve-more-lessons-from-a-laboratory/

Reprise

Getting back to my simple example of being able to breathe, we know that the presence of oxygen is a necessary though not sufficient condition for breathing. We also know that the presence of CO2 is a necessary but not sufficient condition for breathing.

So here we have two necessary conditions for breathing. Writing B for breathing, O for oxygen, and C for CO2, we have (if B, then O) and (if B, then C). Since both conditions must hold for a person to be able to breathe, we have (if B, then C & O).

Recognizing that the actual respiratory system is more complex than this discussion allows, can we nevertheless go on to contend that the presence of oxygen and CO2 are individually necessary and jointly sufficient for breathing? That is, that B iff C & O? How should we then interpret this equivalence? As a kind of "law of breathing", a kind of scientific regularity, or as a definition of the conditions of being able to breathe?
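The purely logical part of this can be checked mechanically. A small sketch in Python, with B, O, C as boolean variables for breathing, oxygen, and CO2: the two necessary conditions jointly entail (if B, then C & O), but they do not by themselves yield the biconditional B iff C & O, which is the extra step the question above is about.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# B: able to breathe, O: oxygen present, C: CO2 present.
# From (if B, then O) and (if B, then C), infer (if B, then C and O):
for B, O, C in product([True, False], repeat=3):
    if implies(B, O) and implies(B, C):
        assert implies(B, C and O)

# But joint sufficiency does not follow. Counterexample: O and C both hold
# while B fails, which satisfies both necessary conditions yet falsifies
# the biconditional B iff (C and O).
B, O, C = False, True, True
assert implies(B, O) and implies(B, C)
assert B != (C and O)
print("necessity checked; joint sufficiency is an extra assumption")
```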

This question may seem a triviality, but it arises in the discipline of macroeconomics all the time. When is a statement to be interpreted as a substantive assertion, which has a truth-value, and when as a definition of terms, which has no truth-value but is only more or less useful? Too many economic discussions are not at all clear about this issue. And it can make a difference to how you treat what they are saying.

Necessary & Sufficient conditions: A Medical Example March 22, 2014

Posted by Larry in Logic, Medicine, Science.

A sufficient condition A for B is one where A's being the case is enough to bring about B.

A necessary condition B for A is one where, if B is not the case, then A won't be either.

Logically it looks like this. A is sufficient for B: if A, then B.

B is necessary for A: if not-B, then not-A.

Example of a necessary condition: oxygen (B) is necessary for being able to breathe (A). Hence, if not-B, then not-A.

It is easier to come up with necessary conditions than with sufficient ones. For instance, what is sufficient for being able to breathe?

A set of individually necessary and jointly sufficient conditions for A is often considered to be equivalent to A.

A slightly more realistic and complicated way of expressing this set of relationships is this. A is sufficient for D and B is necessary for D. This translates to (if A, then D) and (if D, then B). The contrapositive of each yields (if not-D, then not-A) and (if not-B, then not-D). It is relatively clear, I think, that A and B each have a distinct relationship to D. (I am ignoring the issue of transitivity illustrated in the example.) The potential complexity of this relationship is borne out in the following medical example from research into the dementias.
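These equivalences are easy to verify by brute force over truth-values; a minimal sketch:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# (if A, then D) is logically equivalent to its contrapositive
# (if not-D, then not-A): they agree on every truth assignment.
for A, D in product([True, False], repeat=2):
    assert implies(A, D) == implies(not D, not A)

# The transitivity set aside in the text: from (if A, then D) and
# (if D, then B), it follows that (if A, then B).
for A, D, B in product([True, False], repeat=3):
    if implies(A, D) and implies(D, B):
        assert implies(A, B)
print("contrapositive equivalence and transitivity verified")
```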

Here is a quote from a medical investigation of causes of Alzheimer’s and other dementias (from Daily Kos).

“Researchers have found that a protein active during fetal brain development, called REST, switches back on later in life to protect aging neurons from various stresses, including the toxic effects of abnormal proteins. But in those with Alzheimer’s and mild cognitive impairment, the protein — RE1-Silencing Transcription factor — is absent from key brain regions.”

“Our work raises the possibility that the abnormal protein aggregates associated with Alzheimer’s and other neurodegenerative diseases may not be sufficient to cause dementia; you may also need a failure of the brain’s stress response system,” said Bruce Yankner, Harvard Medical School professor of genetics and leader of the study, in a release.

While the situation is more complicated than the simple example I gave initially, the logic is the same.

From the quote, we have: abnormal protein aggregates A; failure of the brain's stress response system B; presence of REST (= R); dementia D.

The second paragraph of the quote suggests that A & B is sufficient for D.

But from the first paragraph, we also have the absence of REST as a necessary condition for D. I.e., if D, then not-R.

So, A & B is sufficient for D, hence (if A & B, then D). And not-R is necessary for D, i.e. (if D, then not-R), or equivalently (if R, then not-D). I.e., R is sufficient for not-D.
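The joint import of the two premises can also be checked by brute force. A small sketch (logical bookkeeping only, not a medical claim): given (if A & B, then D) and (if R, then not-D), it follows that A, B, and R can never all hold together.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# A: abnormal protein aggregates; B: failed brain stress response;
# R: REST present; D: dementia.
# Premises as read from the quote: (A and B) -> D, and R -> not-D.
consistent = [
    (A, B, R, D)
    for A, B, R, D in product([True, False], repeat=4)
    if implies(A and B, D) and implies(R, not D)
]
# In every assignment satisfying both premises, A, B, R are not all true:
assert all(not (A and B and R) for A, B, R, D in consistent)
print(len(consistent), "of 16 assignments satisfy both premises")
```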

As in the simple example, A, B, and R are related in a complex way to D, a relationship that is not entirely spelled out in the quote.

We are therefore left with an important question: what is the status of A & B with respect to R? Is R a component of either A or B? The quote doesn't link all these factors together. While this gap may be obvious from the quote, the logical situation underlying it may not be. Hopefully it now is, and it is also clearer what additional relationships need to be explored in order to reach a better understanding of the dementias, particularly Alzheimer's, and thereby better control of their onset and progression, if not their complete defeat.

Bayes’ theorem: a comment on a comment March 10, 2014

Posted by Larry in Bayes' theorem, Logic, Philosophy, Statistics.

Assume the standard axioms of set theory, say, the Zermelo-Fraenkel axioms.

Then provide a definition of conditional probability:

1. P(A|B) = P(A∩B) / P(B), provided P(B) > 0, which yields the identity

1a. P(A∩B) = P(A|B) × P(B), via simple algebraic cross multiplication.

Because set intersection is commutative, you can have this:

1b. P(B∩A) = P(B|A) × P(A), which, since A∩B = B∩A, equals

2. P(A|B) × P(B) = P(B|A) × P(A).

What we have here is a complex, contextual definition relating a term, P, from probability theory with a newly introduced stroke operator, |, read as “given”, so the locution becomes, for instance, the probability, P, of A given B. Effectively, the definition is a contextual definition of the stroke operator, |, “given”.

Although set intersection (equivalent in this context to conjunction) is commutative, conditional probability isn’t, which is due to the asymmetric character of the stroke operator, |. This means that, in general, P(A|B) ≠ P(B|A). If we consider the example of Data vs. Hypothesis, we can see that in general, for A = Hypothesis and B = Data, that P(Hypothesis|Data) ≠ P(Data|Hypothesis).

Now, from the definition of “conditional probability” and the standard axioms of set theory, which have already been implicitly used, we obtain Bayes’ theorem trivially, mathematically speaking, via a couple of simple substitutions.

P(A|B) = P(B|A) × P(A) / P(B).
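As a concrete check, here is the theorem run with made-up numbers (the prior, likelihood, and marginal below are assumptions purely for illustration), using exact fractions so no rounding intrudes:

```python
from fractions import Fraction as F

# Hypothetical figures: prior P(H) = 1/10, likelihood P(D|H) = 4/5,
# and marginal P(D) = 13/50.
P_H = F(1, 10)
P_D_given_H = F(4, 5)
P_D = F(13, 50)

# Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D)
P_H_given_D = P_D_given_H * P_H / P_D
print(P_H_given_D)  # 4/13

# The stroke operator is asymmetric: P(H|D) and P(D|H) differ.
assert P_H_given_D != P_D_given_H
```

Note how the posterior 4/13 (about 0.31) differs sharply from the likelihood 4/5, which is the asymmetry of the stroke operator made concrete.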

Or the Bayes-Laplace theorem, since Laplace discovered the rule independently. However, according to Stigler’s law of eponymy, theorems are invariably attributed to the wrong person (Stigler, “Who Discovered Bayes’ Theorem?”, in Stigler, Statistics on the Table, 1999).

Now, since we have seen that Bayes’ theorem follows from the axioms of set theory plus the definition of “conditional probability”, the following comment from a recent tutorial text on Bayes’ theorem can only strike one as odd. The quote is from James V. Stone’s Bayes’ Rule: A Tutorial Introduction to Bayesian Analysis (3rd printing, Jan. 2014).

If we had to establish the rules for calculating with probabilities, we would insist that the result of such calculations must tally with our everyday experience of the physical world, just as surely as we would insist that 1+1 = 2. Indeed, if we insist that probabilities must be combined with each other in accordance with certain common sense principles then Cox (1946) showed that this leads to a unique set of rules, a set which includes Bayes’ rule, which also appears as part of Kolmogorov’s (1933) (arguably, more rigorous) theory of probability (Stone: pp. 2-3).

Bayes’ theorem does not form part of Kolmogorov’s set of axioms. Strictly speaking, Bayes’ rule must be viewed as a logical consequence of the axioms of set theory, the Kolmogorov axioms of probability, and the definition of “conditional probability”.

Whether Kolmogorov’s axioms for probability tally with our experience of the real world is another question. The axioms are sometimes used as indications of non-rational thought processes in certain psychological experiments, such as the Linda experiment by Tversky and Kahneman. (For an alternative interpretation of this experiment that brings into question the assumption that people either do or should reason according to a simple application of the Kolmogorov axioms, cf. Luc Bovens & Stephan Hartmann, Bayesian Epistemology, 2003: 85-88).

A matter of interpretation

In the discussion above, the particular set theory and the Kolmogorov axioms mentioned and used were interpreted via the first-order extensional predicate calculus. This means that both theories can be viewed as not involving intensional contexts such as beliefs. The probability axioms in particular were understood by Kolmogorov and others using them as relating to objective frequencies applicable to the real world, not to beliefs we might have about the world. For instance, an unbiased coin and an unbiased die are considered, in the ideal case admittedly, to have a probability of .5 for a given side of the coin and 1/6 (about .167) for a given side of a six-sided die on each flip or throw of the object in question. In these two cases, it is only behavior observed over a long run of trials that can produce data showing whether our assumption that the coin and the die are unbiased is in fact true.
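The frequency reading can be illustrated with a quick simulation sketch (the trial count and seed are arbitrary choices; this stands in for the long run of physical flips and throws):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
N = 200_000

# Long-run relative frequencies for an unbiased coin and an unbiased die:
heads = sum(random.random() < 0.5 for _ in range(N)) / N
sixes = sum(random.randint(1, 6) == 6 for _ in range(N)) / N
print(heads, sixes)  # close to .5 and 1/6 respectively
```

If the simulated coin or die were biased, the relative frequencies would drift away from .5 and 1/6 as N grows, which is exactly the sense in which long-run behavior tests the unbiasedness assumption.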

Why does this matter? Simply because Bayes’ theorem has been interpreted in two distinct ways: as a descriptively objective statement about the character of the world, and as a subjective statement about a user’s beliefs about the state of the world. The derivation above proceeds from two theories that are considered to be non-subjective in character. One can then reasonably ask: where does the subjective interpretation of Bayes’ theorem come from? Two answers suggest themselves, though they are not the only ones. One is that Bayes’ theorem is arrived at via a different derivation than the one I considered, relying, say, on a different notion of probability than Kolmogorov’s. The other is that Bayesian subjectivity is introduced by means of the stroke (or ‘given’) operator, |.

Personally, I see nothing subjective about the statement that the probability of obtaining a head or a tail on the flip of a coin is .5, or that of obtaining one particular side of a six-sided die is 1/6. These probabilities are about the objects themselves, not about our beliefs concerning them. Of course, this leaves open the possibility of alternative interpretations of probability in other contexts, say the probability of guilt or non-guilt in a jury trial. Whether the notion of probability involved with coins or dice is the same as that involved in situations such as jury trials is a matter for further debate.

Read this book: Ann Pettifor, Just Money February 24, 2014

Posted by Larry in economics, money.

After you finish reading this short book, devoted to explicating the nature and function of money in modern societies, you will be able to see not only that almost everything Osborne and Balls say about this aspect of the economy is wrong, but why it is wrong. The intended audience is that group of people who don’t quite understand what money is or how it works, which unfortunately includes both Osborne and Balls, so it is written in as jargon-free and elementary a manner as possible. It is in Chapter 4 that she really drives the knife into the neoclassical acolytes and their financial backers.

In reading this book, you will obtain a greater understanding of what is wrong with the austerity policies of this and other governments and what to do about it. I would recommend reading Just Money before going on to other more complicated, but still reasonably accessible, books like Randall Wray’s Modern Money Theory. Just Money and Modern Money Theory, in that order, the latter going beyond the former to cover the entire economic system, will tell you almost everything you need to know in order to understand what is wrong with contemporary economic policies. How they can be so wrong and their justifications so misguided is sometimes difficult to believe.

The subject is not easy, but Ann Pettifor knows that and does her best to explicate what is difficult to explicate. Given that most journalists also fail to clearly understand the function of money in our society, even when that is supposedly their specialty, this book is a must-read. It is available in PDF format from PRIME or in Kindle format from Amazon.

What do the elites think they are doing? Or are they thinking at all? February 13, 2014

Posted by Larry in Abuse of power, economics, Justice, social justice.

If aggression and nastiness have been the drivers of natural selection that put humans at the top of the food chain, then maybe it is time we became extinct. Keanu Reeves’s character made a great mistake in the film The Day the Earth Stood Still. He probably should have allowed the machine to do what it was brought here to do: exterminate the earth’s primary predator in order to save the planet itself.

When people look around and think about what is happening, some of them no doubt wonder what is wrong with the picture they see. There is something deeply awry. But only a perfect cataclysm will alter how the system currently works. That it used to work a little better was probably just an accident. The past thirty years were spent, more or less, working up to now, only of course it wasn’t supposed to go to shit. Which just shows you how stupid many members of the élite (I’m lumping them all together) actually are. And the funny thing, if there is anything funny in this madness, is that small government means there won’t be the resources to bail out some of these asses the next time. And by next time, I don’t mean the housing bubble that will burst one of these days. I mean that they’ll do something so monumentally stupid that this past crash will look like a bit of a blip.

So far, the assholes are winning. And those who see that the economic system is in bad shape, fails to reflect actual economic reality, and needs to be reformed hardly get any air time in the mainstream media. Last week’s Question Time had George Galloway, MP for Bradford West, on the program; Bradford is a notoriously poor city. Galloway is a “deserter” from the Labour Party, which for the past 20 years hasn’t done much for its core constituency, the average person, including the working person. Galloway is someone the main parties love to hate. But last week he uttered something quite profound, and I doubt most people realized how profound it was. In response to some Tory on the panel saying that there weren’t funds for something involving poor people, Galloway pointed out that the government has found money for Trident, high-speed rail, the banks, and their friends in general. In other words, it is complete bullshit that the government is spending-constrained.

In the US, it is almost the same. The governments of the US and the UK, which are quite large, can fund whatever they need to fund in their own currencies. A related issue was being discussed on the program when Galloway made his comment. Neither of these governments can go broke. Of course, if they become much smaller than they are now, they still won’t be able to go broke, but they won’t have the internal resources, which the UK government is slowly dismantling, to do much for anyone, even the rich. The ONS now admits that there are statistical data it no longer has the resources to collect and analyze. And this is only one example. So the élite is now cutting off its nose to spite its face, as it were. Except for the .1% and a few others who think they don’t need government anyway. They think they can buy whatever they need. And under the present regime, they can. But are they going to fund the education system or the various research systems that allow them to do this when government becomes too small to support these activities? No matter how rich they are, they won’t have the resources to do it. They are marching spiritedly, and ignorantly, into the past.

The current elites are parasites destroying their own host.

Saving the Hypothesis August 22, 2013

Posted by Larry in economics.

An Inherent Complication in Assessing Tests of Scientific Hypotheses

L Brownstein

The testing of scientific hypotheses is not as straightforward as it often looks, which perhaps contributes to the fact that many politicians do not pay as much attention to the character and context of scientific evidence as they should.

Most of us are familiar with the probabilistic character of many scientific hypotheses and the ways in which these can affect the testing scenarios. But what is not as well-known is a strategy for “saving the hypothesis” that is independent of probabilistic considerations. This strategy makes the testing of scientific hypotheses more complicated than Karl Popper and others thought it was.

The methodology described briefly by the reviewer is that developed by Karl Popper: scientists should concentrate on falsifying their theories rather than corroborating them. Many working scientists use Popper’s falsification model as their guide. Effectively, you list the initial conditions and the lawful regularities and, in the more formalized sciences, deduce consequences from these, for instance a particular event, a procedure that has been much discussed in physics and other formalized disciplines. If the event predicted by the theory under test does not occur, that is, is not observed, then Popper concludes that the theory has been decisively falsified. If the event does occur and is observed, the theory has only been corroborated, that is, the likelihood that it is true has increased. For strictly logical reasons, no theory can ever be conclusively verified, that is, proved to be true.

There is, however, a kind of get-out-of-jail-free card that can be used to “save the hypothesis”. This is known as the Duhem-Quine gambit. When you set up your theory testing scenario, in addition to listing all the initial conditions that apply and the regularities that are involved, there are a number of auxiliary hypotheses that usually go unstated with respect to the testing scenario. In physics, this is because experimental physicists often know what these possible contaminants of the testing process are and attempt to control for them. In less formalized sciences, such influences may be unknown or only informally considered. The Duhem-Quine gambit, when utilized appropriately, is a legitimate procedure, not an attempt to cheat the testing process.

Basically, if the result of the test of a given scientific hypothesis is not observed, for whatever reason, instead of directly falsifying the hypothesis or theory under test, the testers can select one or more of the auxiliary hypotheses as the culprit: the nature and operation of the equipment, presumed biases on the part of the testers, sampling problems, the general environment in which the test takes place, and so forth. Selecting one or more of the auxiliary hypotheses that inevitably accompany any scientific testing situation will enable the scientist to “save the hypothesis”. Of course, one cannot indefinitely blame the auxiliary hypotheses for failure to obtain a result the theory says should have been observed. Continued failure to observe a result predicted by a theory must eventually lead to that theory (or a portion of it) being considered falsified.

Elliman’s point, that more scientists should be involved in independent ways in policy decisions not subject to direct ministerial control, becomes even more salient in light of the inevitable availability of the Duhem-Quine gambit, a central feature of any scientific testing scenario. The gambit renders assessment of the relevance of scientific evidence for policy purposes even more problematic than it might otherwise be considered to be.

Apostrophe Quiz & probs w/ WordPress August 16, 2013

Posted by Larry in economics.

I had thought I would gloat a little over getting 100% on the Independent’s apostrophe quiz. But almost everyone else who took the quiz got most of them right. So either it wasn’t very hard, which is difficult to judge, or there is a self-selection bias among those taking the quiz. Anyway, here is the link so you can, if you like, also take the quiz. That is, if, like me, you are an apostrophe nut.

http://www.independent.co.uk/voices/iv-drip/whos-theyre-youre-its-the-apostrophe-quiz-8770828.html

You can also see how others have done.

By the way, the hyperlink function isn’t working in WordPress, so you will have to copy and paste the above link to get to the relevant page. Sorry. I can’t seem to add Media either.

When I published the post, the link did work. But the Media button still didn’t. Hmmm.

Some physicists should stick to their lasts July 22, 2013

Posted by Larry in economics.

Here is one of the reasons I think some physicists should recognize their limitations. Forshaw is one of them. Should you have the time to peruse the article, you will find this:

“The most famous equation in finance was published in 1972 and is named after American economists Fischer Black and Myron Scholes. The Black-Scholes equation provided a means to value “European options”, which is the right to buy or sell an asset at a specified time in the future. Remarkably, it is identical to the equation in physics that determines how pollen grains diffuse through water.”

While Forshaw is right to claim that this is a famous equation, he neglects to mention that it is false in its intended application. That it is identical to a physics formula rather undercuts the reasons for giving its authors the Swedish Bank prize. For your convenience, here is their formula for the price of a European call option:

C = S·N(d₁) − K·e^(−rT)·N(d₂), with d₁ = [ln(S/K) + (r + σ²/2)T] / (σ√T) and d₂ = d₁ − σ√T,

where C is the call price, S the current asset price, K the strike price, r the risk-free rate, T the time to expiry, σ the volatility, and N the standard normal cumulative distribution function.

Now, if the market conformed to two of their assumptions, one of them central, namely the Gaussian character of the underlying distribution, all might be well. But market distributions are well known to be non-normal, as Mandelbrot showed more than 20 years ago. The Cauchy-type distribution Mandelbrot invoked, in fact, has no calculable mean or variance, hence no calculable volatility, which makes the formula entirely useless for its intended purpose. Thorp independently came up with a set of equations similar to those of Bachelier, mentioned by Forshaw.
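The practical difference between a Gaussian and a Cauchy (fat-tailed) world can be seen in a small simulation sketch (the threshold of 10 and the sample size are arbitrary choices for illustration): extreme moves that are essentially impossible under normality occur routinely under a Cauchy distribution.

```python
import math
import random

random.seed(0)
N = 100_000

# Standard Cauchy draws via the inverse-CDF transform tan(pi * (U - 1/2)):
cauchy_tail = sum(
    abs(math.tan(math.pi * (random.random() - 0.5))) > 10 for _ in range(N)
) / N
# Standard normal draws for comparison:
normal_tail = sum(abs(random.gauss(0, 1)) > 10 for _ in range(N)) / N

print(cauchy_tail, normal_tail)
# Exact Cauchy tail: P(|X| > 10) = 1 - (2/pi)*arctan(10), about 0.063.
# The normal tail beyond 10 standard deviations is about 1.5e-23, so the
# simulated count is zero.
assert abs(cauchy_tail - (1 - 2 / math.pi * math.atan(10))) < 0.01
assert normal_tail == 0.0
```

Roughly one draw in sixteen lands beyond ten "standard units" in the Cauchy case, while the normal model says such an event should essentially never happen in the history of the universe. That gap is what a Gaussian pricing formula ignores.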

However, Thorp’s equations were initially developed for blackjack, a situation more similar to the pollen example than to the one conceived by Black, Scholes, and Merton. He was banned by every casino in Vegas for his trouble. But that didn’t bother him, as he had adequately tested his theory, or perhaps I should say his policy recommendations on how to play blackjack and win. (His recommendations require the player to compute large ratios in their head, something most of us can’t do. Using a calculator is banned, since the casinos assume you’re counting cards, which is a no-no.)

I am presuming Forshaw is thinking of Brownian motion, or something along those lines, which makes pollen distribution in a liquid more or less random and approximately normal. Neither property is true of the market.

What does this sort of thinking say about the burgeoning field of econophysics? Not much that is positive.

Here is the link: http://www.guardian.co.uk/science/2013/jul/21/physics-graduates-gravitate-to-finance.

Just to show I am being even-handed, here is an assessment of market behavior by a physicist and a financial economist, Vasquez and Farinelli. Here is the link:

http://arxiv.org/pdf/0908.3043v1.pdf. They argue that there is geometric curvature and, therefore, path dependence in real market data, something anathema to the neoclassical paradigm.

Theme for a Neo-Thatcherite World, Classical version: Albinoni’s Adagio in Gm May 1, 2013

Posted by Larry in Culture, economics, Music as Metaphor.

This version was posted on YouTube by José Manuel Rosón. The beautiful photos belie the music’s underlying pathos and despair; the tempo must make the piece last around 12 minutes in order to achieve the composer’s funereal effect. This version is by Karajan (in his middle period, I think). I realize that Karajan was a member of the Hitler Jugend. But a number of us agonized over this for a long time before deciding that Karajan was more of a narcissistic opportunist than a Nazi. Besides, the Berlin Phil is a great orchestra.

Enjoy, if that is the right term.
