Tuesday, 30 June 2015

What if Uber goes unter?



Recently, the California Labor Commission ruled that Uber has to treat its drivers as employees, with all the regulatory costs that entails. Most people think that this will hamper Uber a bit but not kill it. But a few, like Megan McArdle, think that the ruling spells Uber's demise. What if McArdle is right? What do we conclude?

First of all, it's important to point out that Uber might die for reasons totally unrelated to the California decision. Companies die all the time for reasons that have nothing to do with regulation. Recent financial statements show Uber taking a pretty big loss at some point in the recent past, which might mean that competition has been a lot stiffer than expected. So if Uber dies, disentangling causality will be very difficult.

But IF the California ruling, and others like it, are what put a stake through Uber's heart, then I think we conclude two things:

1. Uber wasn't actually that amazing of an idea.

2. Our labor regulation is too stringent.

Why do we conclude #1? Because there are lots of ideas that absorb the cost of labor regulations and manage to keep on turning a profit. Wal-Mart does it. McDonald's does it. If you can't even clear that hurdle, your idea wasn't really creating that much value.

Why do we conclude #2? Because Uber is providing lots of people with work. Many people who would not otherwise be driving taxis are now becoming Uber drivers. That they are choosing to do this means that Uber is good for labor markets. In the interests of improving our labor markets, we should reduce regulations that keep people from doing jobs they'd be willing to do, as long as those jobs are safe and meet other minimum standards of quality (such as paying overtime). Assuming that Uber driving is a safe job that meets minimum standards of quality - which I'm willing to assume - we don't want to regulate the job out of existence. 

I suspect that neither (1) nor (2) is true. I suspect that Uber actually creates more than a tiny sliver of value, with its network effect and its circumvention of the local monopoly of taxicabs. And I also suspect that American labor regulations are not so onerous that they are putting large numbers of people out of a job.

Thus, I predict that the California ruling will not kill Uber. Uber may still die of other causes, but I don't think that being forced to call its employees "employees" will do it in.

Saturday, 27 June 2015

I.Q. and the Wealth of States



One of the simplest theories of human prosperity is the idea that societal wealth comes from an intelligent populace. Obviously this is true to some degree; if you went around and forced everyone in the country to take a bunch of brain-killing drugs, economic activity would definitely decline. The question is how much this currently matters on the margin.

Some people think it matters a lot. Richard Lynn, a British psychologist, wrote a book called I.Q. and the Wealth of Nations, suggesting that average population I.Q. drives differences in national wealth. Garett Jones of George Mason University is writing a book called Hive Mind that suggests much the same thing, asserting that there are production externalities associated with high I.Q. Motivated by this hypothesis, there is a line of research in development economics dedicated to finding interventions that boost population I.Q.

Well, here is some new and relevant evidence. Eric A. Hanushek, Jens Ruhose, and Ludger Woessmann have a new NBER working paper in which they look at U.S. states. From the abstract:
In a complement to international studies of income differences, we investigate the extent to which quality-adjusted measures of human capital can explain within-country income differences. We develop detailed measures of state human capital based on school attainment from census micro data and on cognitive skills from state- and country-of-origin achievement tests. Partitioning current state workforces into state locals, interstate migrants, and immigrants, we adjust achievement scores for selective migration...We find that differences in human capital account for 20-35 percent of the current variation in per-capita GDP among states, with roughly even contributions by school attainment and cognitive skills. Similar results emerge from growth accounting analyses.
Note that the authors control for selective immigration, an oft-neglected factor in debates about I.Q.

So the upper bound for the amount of state income differences that can be explained by population I.Q. differences is about a third. If we assume that achievement scores are a good measure of I.Q. and that school attainment doesn't improve I.Q. very much, then the number goes down to about one-sixth.

Now, it's important to remember that this study, well-executed though it is, doesn't isolate causation. It doesn't show the degree to which state average I.Q. can be raised by raising state income.

What it shows is that the vast majority of differences in state income are not due to variations in state average I.Q. If we had an I.Q.-boosting device, boosting the average I.Q. of Ohioans by 1% would raise Ohio's average income by at most around 0.17%.

Of course, that's a marginal effect. If we boosted the average I.Q. of Ohioans by 400%, we might see much more (or much less) than a 68% increase in their income. And if we gave Ohioans brain-killing drugs (insert Ohio State football joke here) that cut their I.Q. in half, we might see much more (or much less) than an 8.5% decrease in state income.

But anyway, what this really shows is that there is Something Else that is driving state income differences. My personal guess is that this Something Else is mainly "external multipliers" from trade (the Krugman/Fujita theory). Institutions probably play a substantial role as well (the Acemoglu/Robinson theory). That's certainly relevant for the debate about different models of capitalism, where we often compare the U.S. to Scandinavia and other rich places.

In any case, this result should be sobering for proponents of I.Q. as the Grand Unified Theory of economic development. Average I.Q. is not unimportant for rich countries, and we should definitely try to raise it through better nutrition, education, and (eventually) brain-boosting technologies. And it still might matter a lot for some poor countries. But for rich countries, there are things that matter a lot more.


Update

Scott Alexander seems to think that my post gives a slanted interpretation of the results of this study - that if you present the numbers in a different way, they tell a very different story, and in fact imply that "IQ is everything after all." So just in case there was any ambiguity, let me give a concrete example of what this paper says about the impact of average population IQ on GDP.

Suppose you were to take the state of Ohio, and use an IQ-boosting device to boost Ohio's average IQ by 18 points. This paper predicts that Ohio's GDP would rise by 16 percent, or about $5,600 per person.

18 IQ points is the difference between the commonly reported average IQs of Mexico and South Korea, as listed in this table. A $5,600 rise in GDP would take Ohio's per capita GDP from about the level of Italy to somewhere between the levels of France and Belgium (see here for those GDP numbers).

So I think this very clearly backs up my summary of the paper's result.

Sunday, 14 June 2015

Deirdre McCloskey Says Things

Some sadistic person or other referred me to this 51-page Deirdre McCloskey review of Thomas Piketty's book. I must remember to find out who that person is and either play a mean prank on them in return, or demand that they buy me an expensive lunch. Fair is fair.

Deirdre McCloskey is the kind of writer who can take a perfectly fine sentence like "Capitalism has made humanity rich," and mutate it into a horror show like this:
Since those founding geniuses of Classical economics, a trade-tested betterment (a locution to be preferred to “capitalism,” with its erroneous implication that capital accumulation, not innovation, is what made us better off) has enormously enriched large parts of a humanity now seven times larger in population than in 1800, and bids fair in the next fifty years or so to enrich everyone on the planet.
I don't know about you, but I bid fair to give up well before page 51 of that locution.

But my main problem with Ms. McCloskey is not the poorly executed flowery baroque writing style, or even the reminder that plenty of people mistake flowery baroque writing for good writing. It's that McCloskey frequently makes declarations that are, to put it politely, in contradiction of the facts. She says these things with utmost confidence but without evidence or support, making it clear that the fact that she has said them is evidence enough. She argues from authority, and the authority is always herself.

This is NOT a post about Piketty or his arguments (of which I already have more than enough reason to be skeptical). It is NOT a post about McCloskey's rebuttal to those arguments. This is a post about McCloskey's style of argumentation.

Reading and critiquing McCloskey's thoughts on Piketty would be a bad move for me. First of all, it would require me to read dozens more pages of McCloskey than I have already read. Second, it would require me to know more about Piketty than I do (I haven't read Capital, nor do I own it). Third, it would turn the discussion political, which would detract from the main point of this post, which is that McCloskey is prone to silly-talk. Fourth, it would get very very very long, and you would get very very very bored.

So instead, I will simply critique the first three pages of the review, which are an introduction to the rest of the piece. McCloskey uses this introduction to praise Piketty, to compare him to physicists, and to insult most of the economics profession.

Here are nine excerpts that made my head explode:


1. p. 2:
[E]conomic history is one of the few scientifically quantitative branches of economics. In economic history, as in experimental economics and a few other fields, the economists confront the evidence (as they do not for example in most macroeconomics or industrial organization or international trade theory nowadays). 
And with a wave of her pen, Deirdre McCloskey dismisses the entire existence of the vast fields of empirical industrial organization, trade empirics, and empirical macro. Such is the power of argumentum ad verecundiam sui.

So I guess it was useless for Liran Einav, a Stanford economist who studies empirical IO, to write this in 2010:
The field of industrial organization has made dramatic advances over the last few decades in developing empirical methods for analyzing imperfect competition and the organization of markets. These new methods have diffused widely: into merger reviews and antitrust litigation, regulatory decision making, price setting by retailers, the design of auctions and marketplaces, and into neighboring fields in economics, marketing, and engineering. Increasing access to firm-level data and in some cases the ability to cooperate with governments in experimental research designs is offering new settings and opportunities to apply these ideas in empirical work.
After all, what does Einav know of his field? Deirdre McCloskey has said that Einav's field does not look at the evidence, and thus it is Truth.

Also, the Gravity Model of trade, often praised (by lesser lights, naturally) as one of the most empirically successful theories of all time, must now sadly be consigned to the graveyard, since Deirdre McCloskey has declared that trade theory fails to confront the evidence.


2. p. 2:
When you think about it, all evidence must be in the past, and some of the most interesting and scientifically relevant is in the more or less remote past... 
[Piketty] does not get entangled as so many economists do in the sole empirical tool they are taught, namely, regression analysis on someone else’s “data” (one of the problems is the very word data, meaning “things given”: scientists should deal in capta, “things seized”). 
Let's forgive the flamboyant vacuousness of the statement "When you think about it, all evidence must be in the past". Let's briefly mention the fact that that trivially true statement in no way implies the second part of the sentence. And let's move on to the fact that the two halves of the above quote are diametrically opposed to each other.

If scientists should seize "capta" instead of receiving "data", doesn't this make economic history unscientific? I mean, you can't do any experiments on history, can you? Are there any historical capta? McCloskey is barely finished praising her own field for looking at evidence when she scorns other fields for looking at very similar kinds of evidence!


3. p. 2-3:
Piketty constructs or uses statistics of aggregate capital and of inequality and then plots them out for inspection, which is what physicists, for example, also do in dealing with their experiments and observations. 
Physicists make graphs of things! Piketty makes graphs of things! Piketty is just like a physicist!

I wonder what else physicists do in dealing with their experiments and observations. Use computer software programs to display the statistics? Print out their plots on paper sheets for inspection? Sip coffee and check Twitter? I could be like a physicist too! Except I hate coffee, dammit.


4. p. 3:
Nor does [Piketty] commit the other sin, which is to waste scientific time on existence theorems. Physicists, again, don’t. If we economists are going to persist in physics envy let’s at least learn what physicists actually do. 
Wow, I'm glad that I have Deirdre McCloskey to tell me what physicists actually do. I'd hate to rely on an unreliable source like Google Scholar, who sneakily tries to convince me that physicists write papers with titles such as:

"Existence theorem for solitary waves on lattices"

"Vortex condensation in the Chern-Simons Higgs model: an existence theorem"

"General non-existence theorem for phase transitions in one-dimensional systems with short range interactions, and physical examples of such transitions"

"Existence theorem for solutions of Witten's equation and nonnegativity of total mass"

"A global existence theorem for the general coagulation–fragmentation equation with unbounded kernels"

"A Sharp Existence Theorem for Vortices in the Theory of Branes"

etc. etc. etc....

Thanks to Deirdre McCloskey's expansive sentence structure and snappish wit, I can safely assume that the 699,000 results for my Google Scholar search for "physics existence theorem" do not, in fact, exist (while the 417,000 results I get for "economics existence theorem" must be regarded as real). In addition, I can get a partial tuition reimbursement for the portion of my college physics education I spent watching professors prove existence theorems on the board.


5. p. 2:
[Piketty] does not commit one of the two sins of modern economics, the use of meaningless “tests” of statistical significance[.]
Is McCloskey unaware of the fact that physicists regularly use statistical significance testing, of the classic R.A. Fisher type?


6. p. 3:
Piketty stays close to the facts, and does not, say, wander into the pointless worlds of non-cooperative game theory, long demolished by experimental economics. 
Oh, right. Noncooperative game theory was demolished. Apparently Google and a bunch of other tech companies failed to get the memo when they hired auction theorists to design their online auctions for them.

Or perhaps by "demolished," McCloskey means "embraced by mathematicians, computer scientists, and engineers."

But DEIRDRE MCCLOSKEY SAYS THINGS, AND THUS THEY MUST BE TRUE!!


7. p. 3:
True, the book is probably doomed to be one of those more purchased than read...younger readers will remember Stephen Hawking’s A Brief History of Time (1988).
Deirdre McCloskey realizes that A Brief History of Time is only 212 pages long and has a lot of pictures, right?

It's always good to remember that just because you talk about books without having read them doesn't mean that everyone else does the same.


8. p. 4:
To be fair to Piketty, a buyer of the hardback rather than the Kindle edition is probably a more serious reader, and would go further.
This comes immediately after McCloskey claims that people buy books in order to display them on their coffee tables - something that you can't do with a Kindle version. Yet McCloskey now claims that hardback readers are more likely to be serious readers - utterly without evidence, of course.


9. p. 4:
I shall say some hard things, because they are true and important
This pretty much sums it up, folks.


So let me recap: All of these quotes came from the first three pages of a review that is 51 pages long. In three short pages, McCloskey manages to unfairly malign almost every branch of economics, make mutually contradictory assertions about how economists should use evidence, make false statements about physics that could have been corrected with a 5-second Google search, randomly insult a good popular physics book, and randomly insult Kindle readers, all in a mass of tangled, overwrought prose.

Yeah, there's no way I'm going to read 48 more pages of that. In fact, I'm not sure why I clicked on this link at all, given that everything else I've read of McCloskey's has been in the same vein (here's another example). Fool me twice, shame on me. Fool me five or six times, and I need a better hobby.

As a side note, John Cochrane agrees with my critique of the first 3 pages of McCloskey, and (more politely) notes several of the same errors. Yay!! He notes that McCloskey has written a writing guide, and failed to follow her own advice. (He also says that the review gets much better when it gets to the actual Piketty-related substance. So I suppose I'll put pages 4 through 51 on my "to read" list...possibly far down on the list...)

There is a clear lesson in all this: Do not believe things that Deirdre McCloskey says just because she says them. Google them. Find the facts. Do not nod your head in mute, placid agreement. Do not be seduced by the turgid prose style into thinking that here is an Authority.

Saturday, 13 June 2015

Store of value


Two interesting posts about bitcoin by JP Koning (post 1, post 2) got me thinking about the function of money. Usually we say that money serves three functions: unit of account, medium of exchange, and store of value. But what does it mean to be a "store of value"? More specifically, what does it mean for a form of money to be a "good store of value," i.e., performing this function well?

Suppose, for simplicity's sake, that an asset's value (defined in consumption terms) follows a geometric Brownian motion with constant percentage volatility and drift. So it satisfies:

$$ dS_t = \mu S_t \, dt + \sigma S_t \, dW_t $$

Does "good store of value" mean that sigma, the volatility, is low? Or does it mean that mu, the drift, is high? Remember that in the short term, volatility dominates drift, while in the long term, drift dominates. Also remember that there should be a tradeoff between these two - assets with higher volatility will tend to have higher systematic risk, and thus will tend to have higher expected returns (drift). In other words, in general an asset can be either a good long-term store of value, or a good short-term store of value, but not both.

Stocks are a good example of an asset with high positive drift and high volatility. Their value bounces around a lot, but it tends to increase over time. If "store of value" means "value tends to rise over time", then stocks would be a very good candidate. Stocks are a good long-term store of value.

Fiat money with a 2%-inflation-targeting central bank is a good example of an asset with negative drift and low volatility. Over time, you can expect this currency to lose value, since there will tend to be about 2% inflation every year. But the value is highly predictable - it doesn't fluctuate very much at all from day to day. Fiat-money-with-2%-inflation-targeting is a good short-term store of value.
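
To put rough numbers on this tradeoff, here is a minimal sketch (my own illustration, with assumed parameters, not anything from the posts above): under the geometric Brownian motion written earlier, the log return over a horizon T is normally distributed, so the probability of losing more than 5% of your value by time T has a closed form.

```python
# P(S_T < (1 - loss) * S_0) under GBM, where log(S_T/S_0) ~ N((mu - sigma^2/2) T, sigma^2 T).
from math import log, sqrt
from statistics import NormalDist

def prob_big_loss(mu: float, sigma: float, T: float, loss: float = 0.05) -> float:
    m = (mu - 0.5 * sigma ** 2) * T   # mean of the log return
    s = sigma * sqrt(T)               # standard deviation of the log return
    return NormalDist().cdf((log(1.0 - loss) - m) / s)

stock = {"mu": 0.07, "sigma": 0.18}  # assumed: high drift, high volatility
fiat = {"mu": -0.02, "sigma": 0.01}  # assumed: 2% annual loss, low volatility

for T in (1 / 12, 1.0, 30.0):        # one month, one year, thirty years
    print(f"T = {T:5.2f} yrs | stock: {prob_big_loss(T=T, **stock):.3f}"
          f" | fiat: {prob_big_loss(T=T, **fiat):.3f}")
```

With these made-up numbers, the stock-like asset has about a 14% chance of being down 5% or more after one month, while the fiat-like asset has essentially zero chance; over thirty years, the odds flip almost completely. That's the short-term/long-term distinction in the two paragraphs above.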

Looking out at the world, I see a whole lot of countries that use fiat money, with something like inflation targeting, as their medium of exchange (i.e., what they use to pay for stuff). And I see zero countries that use stocks as the medium of exchange, even though the technology now exists for us to make payments in stock shares quite easily (it's just the same as exchanging dollars electronically, really).

So I conclude that we want the medium of exchange - i.e., money - to be a good short-term store of value (i.e., to have low volatility), and that we don't need it to be a good long-term store of value (i.e., we don't care about its expected return).

Why is this the case?

It makes sense if you think about the way that we use money. People don't know exactly when they are going to need to spend money, or how much. If they keep their wealth in assets with high expected returns and high volatility - stocks, etc. - they run the risk of having to sell in a down market in order to pay for unexpected expenses. So it makes sense to keep some of their wealth in a low-volatility, low-expected-return asset like fiat-money-with-2%-inflation-targeting, in the expectation that they'll probably have to use it to pay for something. The low expected return - the fact that cash falls in value a little bit every year - doesn't matter so much, because you don't keep the cash around that long before you spend it.

(Note that this ignores correlations, but those won't end up mattering here.)

So this is why money should be a short-term store of value rather than a long-term store of value. This is why, as David Andolfatto pointed out, gold makes such a lousy form of money.

How about bitcoin? If it keeps experiencing high volatility, then it's not going to become the medium of exchange in the U.S. or other countries with inflation targets. But if volatility falls in consumption terms - in other words, if the bitcoin prices of goods and services become very stable - then bitcoin will have a good chance of becoming the medium of exchange.

One problem, though, is that there's a bit of a chicken-and-egg situation here. The more merchants use bitcoin, the less volatile its consumption value will probably be. But in order for merchants to use it, customers have to use it, and they'll only start using it if there's low volatility.

But if bitcoin eventually manages to solve this chicken-and-egg problem, its promoters hope that it will be able to offer about the same volatility as fiat money but with a higher expected return. That would make bitcoin dominate fiat money, and would kick fiat money right out of the universe of investible assets - or, more realistically, it would force central banks to adopt an inflation target lower than the rate at which bitcoin is mined. That, I think, is the hope of bitcoin enthusiasts who say that bitcoin will "compete with central banks."

So for bitcoin to become money, it has to figure out how to massively reduce the volatility of bitcoin prices of goods and services.


Update

Eli Dourado has a good response. I think we agree on the volatility thing. I glossed over other kinds of transaction costs, which Koning addresses somewhat; on those matters, I'm pretty ignorant, so I will let Eli and JP work it out...

Tyler Cowen thinks Bitcoin's volatility is a bad sign for its chances of future adoption, because it reflects a consensus that Bitcoin will never really catch on. I disagree with Tyler. Suppose, for simplicity's sake, that milk was the only good that people consumed. And suppose that in the future, bitcoin becomes the universal medium of exchange, and that at that time the bitcoin price of milk is about the same as it is today. In this case, there is no benefit to buying a lot of bitcoin today, even if you know for certain that it's going to become universally adopted. Because the price of bitcoin is already "right", in consumption terms. Hoarding a bunch of bitcoin right now doesn't actually improve your tradeoff between future milk and present milk. So the lack of bitcoin speculation doesn't necessarily mean that people have decided that bitcoin is doomed. It could even mean the exact opposite.

Thursday, 11 June 2015

A paradigm shift in empirical economics?


Empirical economics is a more and more important part of economics, having taken over the majority of top-journal publishing from theory papers. But there are different flavors of empirical econ. There are good old reduced-form, "reg y x" correlation studies. There are structural vector autoregressions. There are lab experiments. There are structural estimation papers, which estimate the parameters of more complex models that they assume/hope describe the deep structure of the economy.

Then there are natural experiments. These papers try to find some variation in economic variables that is "natural", i.e. exogenous, and look at the effect this variation has on other variables that we're interested in. For example, suppose you wanted to know the benefits of food stamps. This would be hard to identify with a simple correlation, because all kinds of things might affect whether people actually get (or choose to take) food stamps in the first place. But then suppose you found a policy that awarded food stamps to anyone under 6 feet in height, and denied them to anyone over 6 feet. That distinction is pretty arbitrary, at least in the neighborhood of the 6-foot cutoff. So you could compare people who are just over 6 feet with people who are just under, and see whether the latter do better than the former. 

That's called a "regression discontinuity design," and it's one kind of natural experiment, or "quasi-experimental design." It's not as controlled as a lab experiment or field experiment (there could be other policies that also have a cutoff of 6 feet!), but it's much more controlled than anything else. It's also more ecologically valid than a lab experiment, and cheaper and less ethically fraught than a field experiment. There are two other methods typically called "quasi-experimental": instrumental variables and difference-in-differences.
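
To make the height-cutoff example concrete, here's a toy simulation (mine, with made-up numbers, not from any paper cited here). The outcome varies smoothly with height, food stamps add a true effect of 2.0, and fitting a line on each side of the cutoff and comparing the two fits at the cutoff recovers the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
height = rng.normal(69, 3, n)              # heights in inches
gets_stamps = height < 72.0                # the arbitrary 6-foot policy cutoff
# Outcome depends smoothly on height, plus a true food-stamp effect of 2.0
outcome = 0.5 * height + 2.0 * gets_stamps + rng.normal(0, 5, n)

# Local linear fit on each side of the cutoff, evaluated at the cutoff itself
left = (height > 70) & (height < 72)
right = (height >= 72) & (height < 74)
fit_left = np.polyfit(height[left], outcome[left], 1)
fit_right = np.polyfit(height[right], outcome[right], 1)
print(np.polyval(fit_left, 72.0) - np.polyval(fit_right, 72.0))  # ~2.0
```

The reason this works is that a confounder would have to jump discontinuously at exactly 72 inches to bias the estimate; anything that varies smoothly with height gets absorbed by the two local fits.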

Recently, Joshua Angrist and Jorn-Steffen Pischke wrote a book called Mostly Harmless Econometrics, in which they trumpet the rise of these methods, and followed it with a 2010 paper called "The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics." In their preface, the authors write:
[T]here is no arguing with the fact that experimental and quasi-experimental research designs are increasingly at the heart of the most influential empirical studies in applied economics. 
This has drawn some fire from fans of structural econometrics, who don't like the implication that their own methods are not "harmless". In fact, Angrist and Pischke's preface makes it clear that they do think that "[s]ome of the more exotic [econometric methods] are needlessly complex and may even be harmful." 

But when they say their methods are becoming dominant, Angrist and Pischke have the facts right. Two new survey papers demonstrate this. First, there is "The Empirical Economist's Toolkit: From Models to Methods", by Matthew Panhans and John Singleton, which deals with applied microeconomics. Panhans and Singleton write:
While historians of economics have noted the transition toward empirical work in economics since the 1970s, less understood is the shift toward "quasi-experimental" methods in applied microeconomics. Angrist and Pischke (2010) trumpet the wide application of these methods as a "credibility revolution" in econometrics that has finally provided persuasive answers to a diverse set of questions. Particularly influential in the applied areas of labor, education, public, and health economics, the methods shape the knowledge produced by economists and the expertise they possess. First documenting their growth bibliometrically, this paper aims to illuminate the origins, content, and contexts of quasi-experimental research designs[.]
Here are two of the various graphs they show:



The second recent survey paper is "Natural Experiments in Macroeconomics", by Nicola Fuchs-Schuendeln and Tarek Alexander Hassan. It demonstrates how natural experiments can be used in macro. As you might expect, it's a lot harder to find good natural experiments in macro than in micro, but even there, the technique appears to be making some inroads.

So what does all this mean?

Mainly, I see it as part of the larger trend away from theory and toward empirics in the econ field as a whole. Structural econometrics takes theory very seriously; quasi-experimental econometrics often does not. Angrist and Pischke write:
A principle that guides our discussion is that the [quasi-experimental] estimators in common use almost always have a simple interpretation that is not heavily model-dependent.
It's possible to view structural econometrics as sort of a halfway house between the old, theory-based economics and the new, evidence-based economics. The new paradigm focuses on establishing whether A causes B, without worrying too much about why. (Of course, you can use quasi-experimental methods to test structural models, at least locally - most econ models involve a set of first-order conditions or other equations that can be linearized or otherwise approximated. But you don't have to do that.) Quasi-experimental methods don't get rid of theory; what they do is to let you identify real phenomena without necessarily knowing why they happen, and then go looking for theories to explain them, if such theories don't already exist.

I see this as potentially being a very important shift. The rise of quasi-experimental methods shows that the ground has fundamentally shifted in economics - so much that the whole notion of what "economics" means is undergoing a dramatic change. In the mid-20th century, economics changed from a literary to a mathematical discipline. Now it might be changing from a deductive, philosophical field to an inductive, scientific field. The intricacies of how we imagine the world must work are taking a backseat to the evidence about what is actually happening in the world.

The driver is information technology. This does for econ something similar to what the laboratory did for chemistry - it provides an endless source of data, and it allows (some) controls. 

Now, no paradigm gets things completely right, and no set of methods is always and universally the best. In a paper called "Tantalus on the Road to Asymptopia," renowned skeptic (skepticonomist?) Ed Leamer cautions against careless, lazy application of quasi-experimental methods. And there are some things that quasi-experimental methods just can't do, such as evaluating counterfactuals far away from current conditions. The bolder the predictions you want to make, the more you need a theory of how the world actually works. (To make an analogy, it's useful to catalogue chemical reactions, but it's more generally useful to have a periodic table, a theory of ionic and covalent bonds, etc.)

But just because you want a good structural theory doesn't mean you can always produce one. In the mid-80s, Ed Prescott declared that theory was "ahead" of measurement. With the "credibility revolution" of quasi-experimental methods, measurement appears to have retaken the lead.


Update: I posted some follow-up thoughts on Twitter. Obviously there is a typo in the first tweet; "quasi-empirical" should have been "quasi-experimental".

Friday, 5 June 2015

Economic arguments as stalking horses


On Twitter, Russ Roberts said something I tend to agree with:
Just a curious coincidence that economists who like stimulus want bigger government and those who oppose it prefer smaller. 
In fact, he said something very similar back in 2011:
The evidence for the Keynesian worldview is very mixed. Most economists come down in favor or against it because of their prior ideological beliefs. Krugman is a Keynesian because he wants bigger government. I’m an anti-Keynesian because I want smaller government. Both of us can find evidence for our worldviews[.]
Now, Roberts doesn't actually know Krugman's motives - he's necessarily making a guess. But he definitely does know his own! Basically, he's saying that he adopts an anti-Keynesian stance not because he thinks stimulus actually fails to fight recessions, but because he wants to shrink the size of the government in the long term.

Furthermore, he strongly implies that he will selectively display evidence against the effectiveness of stimulus as a stabilizer, in order to ward off the long-term expansion of government. That's motivated reasoning.

I have long believed that stuff like this happens in economics arguments all the time. It's probably not (usually) an intentional subterfuge, but more of an unconscious bias. An economist is presented with the proposition that countercyclical fiscal policy (stimulus in recessions, austerity in booms) increases overall efficiency. He is generally against increasing the size of the government. So when he sees that countercyclical fiscal policy will (temporarily) increase the size of the government in a recession, it triggers warning sirens in his subconscious. "What? Increase the size of government? Never!", says his subconscious. And so that motivates him to argue against the proposition that countercyclical policy is effective at stabilizing booms and recessions.

So Russ Roberts is being honest and introspective, which is good. Introspection is difficult and often unflattering, so not enough people engage in it.

In fact, this is similar to the first part of Paul Romer's "mathiness" argument. Romer argues that some economists make their modeling choices for reasons related to academic politics - for example, he says that researchers who want to believe in a world without market power will construct growth theories that make silly assumptions just to avoid putting market power in the equation. 

This effect is a lot stronger if there's no skin in the game. As Matt Yglesias pointed out back in 2011, you see Republican politicians - who presumably want to shrink the government - enacting Keynesian policies fairly often. Bush enacted stimulus in 2008, and Reagan arguably did the same in the early 80s.

Another example would be the fact that nearly everyone on Wall Street was eager to tell you back in 2012 that QE would cause a big rise in inflation. But when you looked at TIPS spreads, it was clear that the marginal investor wasn't putting his money where his mouth was.

This is consistent with the finding that partisan belief gaps go away when you pay people to get things right. A bet really is a tax on bullshit (although not the optimal tax). Or, as Nassim Taleb so memorably put it, macrobullshitting is reduced by having skin in the game. The cynical side of me says that Romer's "mathiness" manifests mainly in fields where the data is not good enough to exert discipline on theory.

So when you read econ arguments, always be a little wary of the motivations of the arguers.


P.S. - This is unrelated to the main point of my blog post, but an alternative, non-Keynesian theory of stimulus is that it boosts output because it expands the government, in the ways that need expanding (public investment).


Updates

Paul Krugman responds to Russ Roberts.

Adam Ozimek responds to Roberts, defending Paul Krugman, and the econ profession, from Roberts' cynical allegations. The two have an interesting (but brief) twitter debate about how evidence should interact with ideology.

Roberts writes an additional tweet that I like:
Conceding the reality of self-deception isn't cynical. It's realistic. Leads to humility and caution.
I agree, but just because some nonzero degree of self-deception is inevitable doesn't mean it's benign. Instead of just accepting it, I say people should try to fight against it. And if you realize that you yourself engage in a considerable degree of self-deception, I say you should focus on reducing it, rather than focusing on demonstrating that your rhetorical opponents are equally self-deceptive.

Tuesday, 2 June 2015

Fun with formality



The Paul Romer "mathiness" debate isn't about mathematical formalism in econ. Romer says:
T/F: Romer thinks that economists should not try to use the mathematics of Debreu/Bourbaki and should instead use math in the less formal way that physicists and engineers use it... 
[False.] 
[H]and-waving and verbal evasion...is the exact opposite of the precision in reasoning and communication exemplified by Debreu/Bourbaki, and I’m for precision and clarity.
But that comment got me thinking about formalism in econ math, and I thought I'd share some thoughts.

I've never actually read any Bourbaki papers, but Bourbaki was a club of mostly French mathematicians who got together in the 1930s and insisted that mathematicians be very formal. They got their wish, and the result is the rigorous, formalistic style of modern math papers. But physicists and engineers never followed this convention, preferring to derive essential results and let mathematicians pick up after them by putting in the formality.

There are economists who follow both conventions. You see some papers that use a very formal, terse, axiom-theorem-proof style similar to what you'd see in a mathematics journal. And you see some papers that use a more informal, "here's an equation that I think describes something" methodology that you might see in an engineering journal. 

An example of the formal style would be "The Simple Theory of Temptation and Self-Control," by Faruk Gul and Wolfgang Pesendorfer. This paper introduces a wrinkle to the standard classical theory of intertemporal consumer decision-making - they allow people to have preferences over the sets of choices they are given, such as when people on a diet might not want to see the dessert menu. This wrinkle is inserted in the form of a new "axiom", called the Axiom of Set Betweenness. The presentation in the paper tends to look like this:


An example of the informal style would be "Golden Eggs and Hyperbolic Discounting," by David Laibson. This paper also introduces a wrinkle to the classical theory of intertemporal consumer decision-making. The wrinkle is a new functional form for the consumer's discount function. The presentation in the paper tends to look like this:


In fact, both of these papers are motivated by some of the same empirical phenomena. But they go about it in very different ways. Gul and Pesendorfer introduce an entirely new framework, while Laibson tweaks a functional form. In other words, Gul and Pesendorfer rewrite all of the rules for how decision-making is thought to operate, while Laibson sticks in something that works. As a result, it's natural for Gul and Pesendorfer to use a very formal framework, since formal things are more general and can build a foundation for many other theorists to work with. Laibson doesn't have any need to be so formal.

I imagine that some people complain about the formalism in Gul and Pesendorfer because it's hard for them to read. But after you learn to speak that language, it's actually easy to read - in many ways, easier than English. Formal math language forces you to read like a computer, which means you don't miss anything, while English tempts you (heh) to gloss over important parts as you scan through paragraphs.

In general, I think formal math style is no worse or better than informal engineering style. It's just a matter of personal preference.

Another thing that might annoy people about Gul and Pesendorfer's formalism is the clunkiness of doing economics this way. Do we really want to have to re-axiomatize all of consumer decision-making every time we see people doing something weird? Isn't the overhead of formalism a big waste of time and effort?

Well, maybe. If we take the Laibson paper seriously, all we have to do is to introduce a hyperbolic discounting function whenever we suspect it might make a difference in a model. That's equivalent to just setting the parameters of the hyperbolic discounting function to approximate a classic, non-hyperbolic discount function whenever we don't think it's interesting. But if we take the Gul and Pesendorfer paper seriously, we might have to reformulate all our theories. It's just not clear when the Axiom of Set Betweenness might apply. An axiom is just a lot more general than a parametrization. It seems to me that that's what you can lose from formalism - a clear sense of when the new stuff might make a difference. 
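
For concreteness, here's what the parametrization route looks like in code - a sketch of mine, using the quasi-hyperbolic "beta-delta" discount function from Laibson's paper. Setting beta = 1 switches the wrinkle off and recovers the classical exponential discounter, which is exactly the sense in which the tweak can be ignored whenever it's not interesting:

```python
def discount_weights(beta: float, delta: float, horizon: int) -> list[float]:
    """Quasi-hyperbolic weights: 1 on today, beta * delta**t on period t > 0."""
    return [1.0 if t == 0 else beta * delta ** t for t in range(horizon + 1)]

print(discount_weights(beta=1.0, delta=0.95, horizon=4))  # classical exponential case
print(discount_weights(beta=0.7, delta=0.95, horizon=4))  # present-biased: sharp drop after t = 0
```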

But in the end, I bet that people use the Gul & Pesendorfer stuff in the exact same way they use the Laibson stuff - they apply it when they think it might make a difference, and forget about it at all other times. So formalism vs. informalism again just comes down to a matter of personal preference.

Monday, 1 June 2015

Feminist Mad Max is real, y'all




At the website Return of Kings, econ blogger Aaron Clarey reviles Mad Max: Fury Road as a trojan horse for feminist ideas:
This [movie] is the vehicle by which they are guaranteed to force a lecture on feminism down your throat. This is the Trojan Horse feminists and Hollywood leftists will use to (vainly) insist on the trope women are equal to men in all things, including physique, strength, and logic.
Clarey is talking about the fact that a number of the female characters in the movie - including Charlize Theron's female lead - are tough warrior types who spend a lot of time shooting and otherwise killing big tough male baddies. He thinks that's unrealistic - in the real world, he seems to be saying, war is a man's job.

But actually, I can think of at least one good real-life analogue of the badass women of Mad Max (and of much of modern pop culture). It's the war in Syria and Iraq. The Kurdish militias who have been beating the crap out of ISIS in the north of Syria have substantial numbers of women in their ranks. Here, via War Nerd, are pictures of a couple of the women killed in combat with ISIS:


Normally, women are kept in noncombatant roles in Kurdish militias. But the pressure of the ISIS assault forced women to join the fight directly, and they have apparently been quite effective in battles like the one in Kobane. In fact, a woman is the commander of the Kurdish militias in Syria:
Meet Nassrin Abdallah. With her diminutive height and broad smile, it doesn't seem like she should strike fear into the hearts of hardened Islamic State jihadists. But this 36-year-old Syrian Kurd woman has been at the tip of the spear of the Kurdish forces that last month liberated the symbolic city of Kobane from IS militants... 
As the head of the armed wing of the Kurdish PYD, "commander" Nassrin has led both men and women into battle against Islamic State fighters who have overrun large areas of Iraq and Syria... 
According to Nassrin, around 40 percent of the Kurdish fighters battling over the town on the Syrian-Turkish border were women... 
Some, like her, are hardened warriors but also joining their ranks were mothers who sent their children over the border to the safety of Turkey, then rushed off to join their sisters in arms...Fighting alongside Nassrin are other powerful female commanders who have achieved legendary status on the battlefield. 
Women like Narine Afrin, who played a key role in the defence of Kobane. Or Arin Mirkan, who blew herself up on October 5, killing dozens of IS fighters encircling the town, according to Kurdish sources. 
In total, there are 4,000 women fighting in the armed wing of the PYD [militia], say Kurdish officials, who refuse for strategic reasons to disclose the total number of people who have taken up arms. 
Over and beyond the military aspect of the victory over IS in Kobane, it has been seen as a triumph for women, who are repressed in areas under IS control, obliged to wear the veil and, in the case of the Yazidi minority, forced into slavery.
In fact, this pretty closely parallels the plot of Mad Max: Fury Road! Nassrin Abdallah is the real-life Imperator Furiosa, while ISIS is the real-life version of Immortan Joe and the Warboys.

And it's important to note that the women of the Kurdish militias haven't just been fighting, they've been winning. ISIS massively outnumbered and outgunned the Kurdish militias in a number of battles in northern Syria, but were soundly defeated.

So if men's natural physical advantages are not decisive (at least in the age of guns and explosives), why have most armies throughout history been mostly or exclusively male?

One reason is that men can't bear children. Over time, a warlike society's success depends on the number of soldiers it can throw at the enemy. If a male soldier gets killed, the loss of his sperm will not adversely impact the overall fertility of the tribe. But if a female soldier gets killed, the fertility of the tribe will go down, reducing the number of future soldiers. You really need to think of things in terms of expected discounted total soldiers. The math of protracted warfare favors sending men to die on the front lines, and keeping women in the rear to pump out new soldiers. (Yes, it sucks to live in a warlike society.)
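
Here's a toy version of that math (my own illustration, with made-up numbers): non-overlapping generations, each surviving woman raises k children, and war removes a fraction c of whoever fights. Male casualties don't enter the birth math at all, because fathers aren't the binding constraint on births; female casualties shrink every subsequent generation:

```python
def fighters(generations: int, women_fight: bool,
             k: float = 3.0, c: float = 0.2) -> float:
    """Fighters the tribe can field after the given number of generations of war."""
    men, women = 100.0, 100.0
    for _ in range(generations):
        moms = women * (1 - c) if women_fight else women  # women surviving to reproduce
        men, women = (k / 2) * moms, (k / 2) * moms       # next generation's adults
    return men + women if women_fight else men

for g in (1, 3, 5, 10):
    print(f"gen {g:>2}: both sexes fight -> {fighters(g, True):7.0f}"
          f" | men only -> {fighters(g, False):7.0f}")
```

With these numbers, the tribe that sends everyone to war actually fields more fighters for the first few generations, but the tribe that keeps women in the rear wins any conflict long enough for compounding fertility to kick in.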

Another reason is preference. Men, on average, are far more violent and aggressive than women. This means that more men will want to go to war, or at least hate it less.

So Aaron Clarey is wrong. Mad Max: Fury Road is not a piece of unrealistic feminist propaganda (though the Tumblr site Feminist Mad Max is funny). What it actually is is a movie about - to use a Clarey phrase - "one man with principles, standing against many with none."