Labor Market Monopsony: “Widespread”?

Azar, Marinescu, and Steinbaum recently released a working paper through the Roosevelt Institute on labor market monopsony.  In their words:

In short, we find that most labor markets (as defined by occupation and geography) are very concentrated, and that that concentration has a robust negative impact on posted wages for job openings. These findings are drawn from only one (large) dataset, so they are not the last word on the subject. The implication, along with the other papers cited above, is that the antitrust authorities should not operate under the assumption that labor markets are “naturally” competitive.
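The concentration measure behind findings like these is, as in most of this literature, a Herfindahl-Hirschman Index computed over employers’ shares of job vacancies within each occupation-by-geography market.  A minimal sketch of the calculation (the employer shares below are hypothetical):

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares.
    Shares are fractions; multiplying by 10,000 gives the conventional
    0-10,000 scale used in antitrust guidelines."""
    return sum(s * s for s in shares) * 10_000

# Hypothetical vacancy shares in one occupation-by-city labor market.
competitive = [0.1] * 10         # ten equally sized employers
concentrated = [0.6, 0.2, 0.2]   # one dominant employer

print(hhi(competitive))    # ~1000: unconcentrated by the usual merger-guideline cutoffs
print(hhi(concentrated))   # ~4400: highly concentrated
```

A market of ten equal employers scores 1,000; let one employer take 60% of vacancies and the index more than quadruples, which is the kind of variation the paper’s map is coloring.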

Their posted map of the concentration indices for each region is certainly striking:

[Map: Azar, Marinescu, and Steinbaum’s labor market concentration indices by region]

It seems that the entire nation is blanketed in a sea of exploitative employers!  Until, that is, we compare this map to a population density map of the United States…


…in which “hotspots” of population are also “hotspots” of labor market competitiveness.  Colorado and Minnesota are especially striking examples.  This matters because the Census Bureau estimates that 80% of the US population lives in these urbanized areas, the same areas where Azar, Marinescu, and Steinbaum found pockets of competitive labor markets:
US Urbanized Areas (80% of population) with AMS’ Labor Market Concentration Map

So, yes, labor market monopsony is widespread, except where there are people.

The Paradox of Surplus

Barry Schwartz’s The Paradox of Choice popularized the term “choice overload”, the phenomenon by which “an increase in the number of options to choose from may lead to adverse consequences such as a decrease in the motivation to choose or the satisfaction with the finally chosen option” (Scheibehenne et al. (2010), p. 409).  While empirical evidence of choice overload has been mixed, and it seems odd that businesses that keep expanding the choices they offer continue to thrive in the real world, the idea poses such a fundamental challenge to the foundations of economics that we shouldn’t ignore it.  However, one of the fundamental ideas behind choice overload, that increasing the number of options decreases the consumer’s satisfaction with the chosen option, is actually completely compatible with one of the basic tools of economic analysis.

Scheibehenne et al. describe one influential experiment that seems to point to choice overload:

In another study, Iyengar and Lepper (2000) offered participants a choice between an array of either six or 30 exotic chocolates. Participants who chose from the 30 options experienced the choice as more enjoyable but also as more difficult and frustrating. Most intriguingly, though, participants facing the large assortment reported less satisfaction with the chocolates they finally chose than those selecting from the small assortment. (p. 410, emphasis mine)

While satisfaction could have many definitions, if participants in the experiments were reporting their consumer surplus as their satisfaction, this isn’t necessarily surprising.

We can test this idea using this simple probability simulation.  We assume that a consumer’s valuations of all goods are uniformly distributed on the interval (0,100).  In the first sheet, we randomly present the consumer with five goods and note the consumer’s highest- and second-highest-valued goods.  We then subtract the two to find the consumer’s surplus: the value of the chosen good over the next-best alternative.  Running the experiment 30 times, we find that the average consumer surplus after being offered a choice of 1 out of 5 goods is 16.67 (σ=11.2).  When we increase the size of the choice set to 11, the average surplus shrinks to 11.8 (σ=10.19).  Increasing it again to 45, the average surplus shrinks yet further to 2.17 (σ=2.05).  It’s not only possible but highly likely that increasing the number of choices available to a consumer will reduce their surplus, measured this way.
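The spreadsheet’s logic is easy to replicate in code.  A minimal sketch in Python, following the setup above but with the trial count bumped from 30 to 10,000 to smooth out sampling noise:

```python
import random

def surplus(n_choices, rng):
    """One trial: draw n valuations uniform on (0, 100) and return the
    best minus the second-best (the chosen good's value over the
    next-best alternative)."""
    vals = sorted(rng.uniform(0, 100) for _ in range(n_choices))
    return vals[-1] - vals[-2]

def mean_surplus(n_choices, trials=10_000, seed=0):
    rng = random.Random(seed)
    return sum(surplus(n_choices, rng) for _ in range(trials)) / trials

# The gap between the top two of n uniform(0,100) draws averages
# 100/(n+1), so surplus shrinks as the choice set grows.
for n in (5, 11, 45):
    print(n, round(mean_surplus(n), 2))  # ≈ 16.67, 8.33, 2.17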

Transfers ≠ Gains


Noah Smith at Bloomberg raises a question while bemoaning the lack of emphasis on ethics and morality in most undergraduate econ courses:

Suppose economists find that a $15 minimum wage raises the incomes of 99 percent of low-paid workers, but throws the other 1 percent out of jobs and onto the welfare rolls. Is it worth it? Should the government obey the principle used by doctors — first, do no harm — and avoid any policy that hurts anyone in any way? Or should the government take responsibility for any outcome, based on the idea that government sets up markets in the first place?

What is notable about Dr. Smith’s question is that he doesn’t even pay lip service to the main tool economists use to assess situations like this: economic efficiency.  He seems to be alluding to the idea that economists face a choice between evaluating programs using Pareto or Kaldor-Hicks criteria, but almost all economists have converged on using Kaldor-Hicks criteria, i.e. cost-benefit analysis, in assessing the desirability of various policy outcomes.

Secondly, and more importantly, the cost-benefit analysis that Dr. Smith proposes is incomplete!  A full cost-benefit analysis of a minimum wage policy should include the costs and benefits to employers and consumers as well.  Under most models of the low-wage labor market, the gains to low-skilled workers who keep their jobs after a minimum wage hike have to come from somewhere, like higher prices for consumers or lower profits for employers.  That would make the higher incomes for low-skilled workers a transfer, not a gain, and thus even the one percent job loss is enough deadweight loss to kill the Kaldor-Hicks efficiency of the policy.  Other assumptions like a purely monopsonistic labor market or an upward-sloping demand curve could yield different results, but the economists’ toolbox could approach these situations just as easily.
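To make that accounting concrete, here is a toy Kaldor-Hicks tally using made-up numbers, under the assumption that the pay raise to retained workers is funded one-for-one by employers and consumers:

```python
# Illustrative Kaldor-Hicks accounting for a minimum wage hike.
# All magnitudes are hypothetical; the point is the netting, not the scale.
retained_workers_gain = 99.0      # higher pay for the 99% who keep their jobs
employers_consumers_loss = -99.0  # higher prices / lower profits funding that raise
displaced_workers_loss = -1.0     # surplus lost by the 1% thrown out of work

net = retained_workers_gain + employers_consumers_loss + displaced_workers_loss
print(net)  # -1.0
```

The transfer nets to zero by construction, so only the displaced workers’ loss survives, which is why, under these assumptions, even a small disemployment effect sinks the policy on Kaldor-Hicks grounds.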

It’s one thing to criticize the field’s implicit consequentialist assumptions, say, that the cost-benefit analysis above ignores the marginal utility of income or concerns for equity, but it’s another to ignore its evaluation tools entirely.