Friday, December 15, 2017

Occupational Licensing Under Fire

Let's be clear: an "occupational license" means that if you don't have the license, you don't have government permission to do the job. If you do the job without such a license, you can be legally punished, even put in jail.

There are a number of alternative methods that seek to reassure buyers about quality without requiring an occupational license. For example, many occupations have a certification exam (sometimes administered by the government, sometimes by third parties), but anyone who wishes to hire an uncertified worker is free to do so. Providers can become bonded or insured voluntarily, or states can impose requirements for bonding or insurance. States can require providers to register with a legal name and address. States can conduct inspections, as is done with restaurants, for example, and on many building projects. Thus, occupational licensing is not just one method of seeking to ensure quality, but an especially restrictive one. The Institute for Justice digs into these issues in the second edition of its report "License to Work: A National Study of Burdens from Occupational Licensing," written by Dick M. Carpenter II, Lisa Knepper, Kyle Sweetland and Jennifer McDonald (November 2017). They write (footnotes omitted):
"The share of American workers needing a license to work has climbed steadily in recent decades, from 1 in  20 workers in the 1950s to roughly 1 in 4 today ... Research suggests this growth is not primarily due to more workers leaving the farm and the factory for traditionally licensed fields like medicine and law. Instead, the main driver is new laws expanding licensing into previously unlicensed occupations." 
The IJ report looks at occupational licensing rules across states in a sample of 102 lower- and middle-wage occupations; that is, it doesn't look at occupational licensing in higher-wage jobs like doctor, lawyer, or teacher. Of these jobs that require occupational licenses, "[s]ome, such as family child care home operator, public school preschool teacher and non-instructional teacher assistant, cater to the needs of children. Others, like dental assistant, dietetic technician, optician and psychiatric worker, come from the health care sector. Still others represent the service sector and the construction and transportation trades. These include barber, bartender, cosmetologist, massage therapist, manicurist and skin care specialist; various contractor designations; and bus, taxi and truck driver. ... The list of 102 occupations includes some that are commonly licensed—and commonly recognized as such—including barber and cosmetologist, two ubiquitously and long-licensed occupations. Also on the list are many occupations that are generally familiar to the public, though the fact that they are licensed may not be. Such occupations include florist, funeral attendant, home entertainment installer, locksmith and upholsterer. Finally, there are some occupations on the list that are, along with their licenses, highly obscure: milk sampler, conveyor operator and dairy equipment still machine setter, for example."

When you start looking at state occupational licensing laws across these kinds of occupations, all sorts of inconsistencies and oddities become apparent.

For example, Louisiana requires 500 hours of training to get an occupational license to braid hair, and in 2012 had 32 hair-braiders in the state. Neighboring Mississippi has zero hours of required training, although it does require hair-braiders to register their business with the state, and had over 1,200 hair-braiders.

As another example, Maryland doesn't license auctioneers, but the city of Baltimore does. "The city of Baltimore requires licenses or registrations for at least 26 occupations in addition to the 59 low- and middle-income occupations licensed by the state of Maryland. For example, Maryland is one of the 21 states that do not license auctioneers, but auctioneers in Baltimore must get a license from the city to work. And that license is relatively onerous, requiring $1,600 in licensing fees and either a one-year apprenticeship or an expensive training course."

"EMTs hold lives in their hands, yet 73 other occupations have greater average licensure burdens. This includes barbers and cosmetologists, home entertainment installers, interior designers, log scalers, manicurists and numerous contractor designations. For perspective, while the average cosmetologist must complete 386 days of training, the average EMT must complete a mere 34. Even the average tree trimmer must complete more than 16 times the amount of education and experience as the average EMT."

Locksmiths require a state license in 14 states. Opticians require a state license in 22 states. It seems unlikely that locks or eyes differ substantially across states. Thus, defenders of such licenses need to justify why they don't seem necessary in other states. "On average, the 102 occupations studied here are licensed by just 27 states. Only 23 occupations are licensed by 40 states or more. Such inconsistency is suspect. The vast majority of these occupations are practiced in at least one state—and typically many more than one—without need of permission from the state and evidently without widespread harm."

The common argument for occupational licenses is that they are needed for purposes of quality or safety. But for the kinds of jobs discussed in the IJ study, the evidence doesn't back up that claim: "Studies of licensing and service quality have examined a wide range of occupations, including florists, tour guides, hair braiders and cosmetologists, without finding positive effects. Even research on occupations where health risks may be more pronounced, such as dental hygienists, nurse practitioners and opticians, has found that licensing restrictions raise the cost of services without improving quality. Put differently, research suggests that consumers are paying more without getting better results."

Several recent academic studies have offered additional evidence. For example, the Federal Reserve Bank of Minneapolis recently published "Is Occupational Licensing a Barrier to Interstate Migration?" by Janna E. Johnson and Morris M. Kleiner (Staff Report 561, December 6, 2017). From the abstract:
"We analyze the interstate migration of 22 licensed occupations. Using an empirical strategy that controls for unobservable characteristics that drive long-distance moves, we find that the between-state migration rate for individuals in occupations with state-specific licensing exam requirements is 36 percent lower relative to members of other occupations. Members of licensed occupations with national licensing exams show no evidence of limited interstate migration. The size of this effect varies across occupations and appears to be tied to the state specificity of licensing requirements. We also provide evidence that the adoption of reciprocity agreements, which lower re-licensure costs, increases the interstate migration rate of lawyers. Based on our results, we estimate that the rise in occupational licensing can explain part of the documented decline in interstate migration and job transitions in the United States."
In another recent study, Brandon Pizzola and Alexander Tabarrok discuss "The Undertaker's License" (Cato Institute Research Briefs in Economic Policy #91, December 2017). They focus on funeral directors in Colorado. It turns out that Colorado licensed funeral directors up until 1983, but then repealed the licensing rule. Thus, they can study Colorado's experience as a sort of natural experiment. They find that up to 1983, wages for funeral directors in Colorado were similar to the rest of the country, but by about 1990 they were 11% below the national average--which is roughly where they have remained since. Moreover, other prices associated with funerals also fell in Colorado, which is consistent with the belief that as the funeral business became more competitive, there were ways to hold down other costs as well. There is no evidence that funerals are somehow worse or of lower quality in Colorado than in other states. The underlying economic research paper is Brandon Pizzola and Alexander Tabarrok, “Occupational Licensing Causes a Wage Premium: Evidence from a Natural Experiment in Colorado’s Funeral Services Industry,” International Review of Law and Economics 50 (2017): 50–59.

The Institute for Justice and the Cato Institute are known for taking libertarian positions. Thus, in these partisan times, it seems worth noting that reform of occupational licensing is an issue with some bipartisan support.

For example, the Obama White House published a report in July 2015 called "Occupational Licensing: A Framework for Policymakers."  Here's a sample of the tone:
"When designed and implemented carefully, licensing can offer important health and safety protections to consumers, as well as benefits to workers. However, the current licensing regime in the United States also creates substantial costs, and often the requirements for obtaining a license are not in sync with the skills needed for the job. There is evidence that licensing requirements raise the price of goods and services, restrict employment opportunities, and make it more difficult for workers to take their skills across State lines. Too often, policymakers do not carefully weigh these costs and benefits when making decisions about whether or how to regulate a profession through licensing." 
The Obama administration even provided a few million dollars of funding for a coalition of states to compare and reconsider their occupational licensing rules. In the highly Democratic state of California, an oversight agency called the Little Hoover Commission argued that California should be part of this process in its October 2016 report, "Jobs for Californians: Strategies to Ease Occupational Licensing Barriers." The tone of this report is similar as well:
"One out of every five Californians must receive permission from the government to work. For millions of Californians, that means contending with the hurdles of becoming licensed. Sixty years ago the number needing licenses nationally was one in 20. What has changed? What once was a tool for consumer protection, particularly in the healing arts professions, is now a vehicle to promote a multitude of other goals. These include professionalism of occupations, standardization of services, a guarantee of quality and a means of limiting competition among practitioners, among others. Many of these goals, though usually well intentioned, have had a larger impact of preventing Californians from working, particularly harder-to-employ groups such as former offenders and those trained or educated outside of California, including veterans, military spouses and foreign-trained workers.

"In its study on occupational licensing, the Commission sought to learn whether the state properly balances consumer protection with ensuring that Californians have adequate access to jobs and services. It learned the state is not always maintaining this balance, as evidenced by discrepancies in requirements for jobs that pose similar risks to the consumer. Manicurists, for example, must complete at least 400 hours of education, which can cost thousands of dollars, and take a written and practical exam before becoming licensed. ... When government limits the supply of providers, the cost of services goes up. Those with limited means have a harder time accessing those services. Consequently, occupational licensing hurts those at the bottom of the economic ladder twice: first by imposing significant costs on them should they try to enter a licensed occupation and second by pricing the services provided by licensed professionals out of reach. The Commission found that over time, California has enacted a thicket of occupational regulation that desperately needs untangling in order to ease barriers to entering occupations and ensure services are available to consumers of all income levels."
In the Trump administration, the Federal Trade Commission has been holding a series of roundtable conferences to discuss the effects of occupational licensing, too. Often, the political wedge for dealing with occupational licensing issues seems to involve the plight of spouses in military families, who find that when they are transferred to a different state, they are unable to do their previous job without passing some additional costly and time-consuming occupational licensing test. But this issue is a lot broader than military families. With occupational licensing now covering one-fourth of all US jobs, it touches on the opportunities for jobs and upward mobility available to a very wide array of workers. When it seems important to take steps to assure quality of work, lots of other options are available, and occupational licensing should be used much more narrowly than it is.

Thursday, December 14, 2017

Ricardo's Comparative Advantage After Two Centuries

Two centuries ago in 1817, the great economist David Ricardo published his most prominent work: "On the Principles of Political Economy and Taxation." Among many other insights, it's the book that introduced the idea of "comparative advantage" (especially in Chapter 7) and thus offered a way of thinking about the potential for gains from trade--both between countries and within areas of a single country--that has been central to economic thinking on these topics ever since. In Cloth for Wine? The Relevance of Ricardo’s Comparative Advantage in the 21st Century, Simon Evenett has edited a collection of 15 short essays thinking through how and when comparative advantage applies to modern economies. The book is published by the Center for Economic Policy Research (CEPR) Press, in association with the UK government Department for International Trade.

Most people have no difficulty with the idea that two countries can at least potentially benefit from trade if each one has a productivity advantage in a certain good. There are places in the Middle East where finding oil doesn't seem to involve a lot more than jamming a sharp stick into the ground. Those places should produce and export oil. The United States has vast areas of fertile soil. Those places should produce and export corn and wheat.

But an immediate issue arises. What about areas that don't seem to have a productivity advantage in any area? How can they possibly benefit from trade? Ricardo's theory establishes the point that the key factor in what areas or nations will choose to export or import is not whether there is an overall productivity advantage, but instead where that productivity advantage is greatest--or where the productivity disadvantage is smallest. It is the "comparative" advantage that matters.

In my own Principles of Economics textbook (which of course I recommend for quality and value), I offer a homely example to build some intuition for this idea, involving whether it is useful for a group of campers to specialize in certain tasks. I wrote:
"[C]onsider the situation of a group of friends who decide to go camping together. The friends have a wide range of skills and experiences, but one person in particular, Jethro, has done lots of camping before and is a great athlete, too. Jethro has an absolute advantage in all aspects of camping: carrying more weight in a backpack, gathering firewood, paddling a canoe, setting up tents, making a meal, and washing up. So here’s the question: Because Jethro has an absolute productivity advantage in everything, should he do all the work?
"Of course not. Even if Jethro is willing to work like a mule while everyone else sits around, he still has only 24 hours in a day. If everyone sits around and waits for Jethro to do everything, not only will Jethro be an unhappy camper, but there won’t be much output for his group of six friends to consume. The theory of comparative advantage suggests that everyone will benefit if they figure out their areas of comparative advantage; that is, the area of camping where their productivity disadvantage is least, compared to Jethro. For example, perhaps Jethro is 80% faster at building fires and cooking meals than anyone else, but only 20% faster at gathering firewood and 10% faster at setting up tents. In that case, Jethro should focus on building fires and making meals, and others should attend to the other tasks, each according to where their productivity disadvantage is smallest. If the campers coordinate their efforts according to comparative advantage, they can all gain."
This way of phrasing the situation clarifies the essential economic issue: not who is most productive at various tasks, but how to allocate all of the available productive power across a range of tasks in the most efficient way. In that problem, everyone has a role to play. Even a party with productivity advantages in every area will have areas where their advantage is smallest; conversely, a party who is least productive at every single task will have an area in which the productivity disadvantage is least. Focusing on those areas will provide gains from trade.
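The logic of the camping example can be reduced to a comparison of opportunity costs. Here is a minimal sketch in Python; the specific task times are invented for illustration and are not from the textbook example:

```python
# Hours each person needs per unit of each task. Jethro has an absolute
# advantage in both tasks, but his edge in firewood is much smaller.
hours = {
    "Jethro": {"meal": 1.0, "firewood": 1.0},
    "Friend": {"meal": 1.8, "firewood": 1.2},
}

def opportunity_cost_of_meal(person):
    """Bundles of firewood forgone for each meal this person cooks."""
    h = hours[person]
    return h["meal"] / h["firewood"]

for person in hours:
    cost = opportunity_cost_of_meal(person)
    print(f"{person} gives up {cost:.2f} bundles of firewood per meal")

# Jethro gives up 1.00 bundle per meal; the friend gives up 1.50.
# Jethro has the lower opportunity cost of cooking, so he should cook,
# and the friend--though slower at everything--should gather firewood.
```

The point of the sketch is that only the ratios matter: absolute speed never enters the comparison, which is exactly why a party who is slower at every task still has a comparative advantage somewhere.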

Of course, the camping example is just a conceptual way of framing how division of labor and trade among friends can potentially provide gains. It leaves out many real world complications, which are the focus of many of the essays in this book. How large are the gains from trade? How will the gains be distributed across the parties involved in the trade? Does trade provide additional gains over time through heightened competition and incentives for innovation? How will trade affect the distribution of income? What are the underlying reasons why countries differ in their profiles of productivity across activities, and to what extent can those reasons be altered by public policy? What happens when comparative productivity levels shift, so some industries no longer need the same number of workers? Do the potential gains from trade in goods also apply to gains in services? Do the potential gains apply to a global economy with "value chains" of production that cross and re-cross national borders? How do economies of scale fit into the picture? What about trade in similar-but-not-identical branded products, like cars? What is the appropriate reaction when countries erect barriers to trade or when there are persistent patterns of trade surpluses and deficits?

Ricardo actually had thoughts and analysis about a surprisingly large number of these questions, and the essays in this book take up most of the rest of them. Here, I just want to note a few points that seemed worth particular emphasis.

One is that although Ricardo's theory of comparative advantage never disappeared, and has been a mainstay of basic principles of economics for 200 years, there was a period of some decades when it seemed less relevant to the facts of international trade. As Jonathan Eaton explores in his contribution to this volume, Ricardo's basic example of comparative advantage involved one factor of production (labor) and technology that differed across countries, producing differences in the productivity of labor. By the middle of the 20th century, the focus was on models in which countries had a number of different factors of production, and thus chose different methods of production, although they shared access to the same technology. By the 1980s, emphasis had shifted to models of how large firms would trade similar but not identical goods across countries: for example, international trade in cars or airplanes or machine tools.

But perhaps surprisingly, as economists looked at data on international trade with many different products, and explored models where countries differed in technology and productivity, they were led back to a Ricardian framework. Eaton and his frequent coauthor Samuel Kortum were leaders in this modelling. In an essay discussing this approach in the Spring 2012 issue of the Journal of Economic Perspectives, they wrote in the abstract:
"David Ricardo (1817) provided a mathematical example showing that countries could gain from trade by exploiting innate differences in their ability to make different goods. In the basic Ricardian example, two countries do better by specializing in different goods and exchanging them for each other, even when one country is better at making both. This example typically gets presented in the first or second chapter of a text on international trade, and sometimes appears even in a principles text. But having served its pedagogical purpose, the model is rarely heard from again. The Ricardian model became something like a family heirloom, brought down from the attic to show a new generation of students, and then put back. Nearly two centuries later, however, the Ricardian framework has experienced a revival. Much work in international trade during the last decade has returned to the assumption that countries gain from trade because they have access to different technologies. These technologies may be generally available to producers in a country, as in the Ricardian model of trade, our topic here, or exclusive to individual firms. This line of thought has brought Ricardo's theory of comparative advantage back to center stage."
In short, when it comes to the modern analysis of international trade, Ricardo is back! Of course, this isn't the only approach or only set of questions. Indeed, one of the problems in thinking about the effects of international trade is that the patterns of international trade are deeply interwoven with other political, historical and social variables, so extrapolations are hard. For example, it would probably be unwise to believe that if the nations of Africa or Latin America or Asia sought to form a "Union," it would work out in the same ways (for better or worse) as the European Union. The laws about international trade are not the only relevant differences across regions.

Indeed, there is a long-standing argument in economics over whether trade leads to economic growth, or whether economic growth leads to more trade, or whether other external factors (like improved technology and transportation) affect both.

One other essay in this volume that especially caught my eye is by Ernesto Zedillo, and his title reveals his theme: "Don’t blame Ricardo – take responsibility for domestic political choices." He writes:
"In the case of politicians opposed to international trade, the arguments put forward vary a lot, from the subtle to the grotesque, but all have in common the deflection of responsibility for domestic policy failures to external forces as the cause of those failures. The most extreme case of such deflection is to be found in the rhetoric of populist politicians, from both the left and the right. More than any other kind, the populist politicians have a marked tendency to blame others for their countries’ problems and failings. Foreigners who invest in, export or migrate to their country are the populist’s favourite targets to explain almost every domestic problem. That is why restrictions – including draconian ones – on trade, investment and migration are an essential part of the populist’s policy arsenal. Populists praise isolationism and avoid international engagement, except with their foreign populist cronies. The ‘full package’ of populism frequently includes anti-market economics, xenophobic and autarkic nationalism, and authoritarian politics. Populists display their protectionism and xenophobia as proof of their ‘authentic patriotism’ and excel at manipulating the public’s nationalistic sentiments to execute their retrograde economic and political agenda, which invariably includes a strong rejection of open markets.
"Unfortunately, asserting a causal relationship between globalisation and domestic ills is the rule rather than the exception even in countries governed by moderate democratic leaders, left or right. It is a rare event that a government confronting serious domestic problems would look first into its own policy failings rather than external causes in dealing with their citizens’ demands for effective solutions. Blaming imports, foreign capital volatility and migrants would seem always preferable to explain phenomena such as slow GDP growth, external disequilibria, stagnant wages, and high unemployment. Taking responsibility for domestic policies – or the lack of thereof – that may be at the root of such problems, even if the latter is flagrantly the case, would seldom happen without first trying to point to external factors as the culprits for the unwanted conditions." 
To put this point in a US context, think of issues like the extraordinarily high costs of the US health care system, the disappointing performance of K-12 education, the low levels of investment in infrastructure, stagnant spending on research and development as a share of GDP, the looming problem of rising spending on government entitlement programs, problems with the individual and corporate tax code, concerns about the competitiveness of certain sectors of the economy, the appropriate level of financial regulation, and the challenges of adapting to changes in robotics, artificial intelligence, and other technological changes. These issues (and others that could be added) make a tall pile of problems; in contrast, the contribution of international trade to US economic issues is pretty small. But it's always a lot easier to criticize the neighbors than to clean up the mess in your own front yard.

One of the stories that economists tell each other about the idea of comparative advantage (mentioned in a couple of these essays) is from Paul Samuelson's 1969 Presidential Address, "The Way of an Economist," published in International Economic Relations: Proceedings of the Third Congress of the International Economic Association Held at Montreal (and available via the magic of Google Books; the quotation is from p. 9):
"[O]ur subject puts its best foot forward when it speaks out on international trade. This was brought home to me years ago when I was at the Society of Fellows at Harvard along with the mathematician Stanley Ulam. Ulam, who was to become the originator of the Monte Carlo method and a co-discoverer of the hydrogen bomb, was already at a tender age a world-famous topologist. And he was a delightful conversationalist, wandering lazily over all domains of knowledge. He used to tease me by saying, `Name me one proposition in the social sciences which is both true and non-trivial.' This was a test that I always failed. But now, some thirty years later, on the staircase so to speak, an appropriate answer occurs to me: The Ricardian theory of comparative advantage; the demonstration that trade is mutually profitable even when one country is absolutely more -- or less -- productive in terms of every commodity. That it is logically true need not be argued before a mathematician; that it is not trivial is attested by the thousands of important and intelligent men who have never been able to grasp the doctrine for themselves or to believe it after it was explained to them."
It is of course a little disheartening to me that Paul Samuelson, one of the greatest economists of the 20th century, had difficulty coming up with an economic idea that was both true and nontrivial! But it does make a better story that way. I sometimes say to students that understanding the idea of comparative advantage--both its strengths and its limitations--is one of the dividing lines separating those who actually know some economics from those who don't.

Wednesday, December 13, 2017

Adding Monetary Costs of Lost Lives to the Opioid Crisis

Sometimes if you can justify putting a bigger dollar sign in front of a problem, then you can also justify giving it more attention. This is a useful function of "The Underestimated Cost of the Opioid Crisis," a report recently published by the Council of Economic Advisers (November 2017). As background, here's a figure showing the rise in opioid-related deaths--more than doubling in the last decade.

Total deaths from opioid overdoses have climbed very close to the number of deaths from motor vehicle accidents, which totaled over 37,000 in 2016. Here's the age distribution of the opioid overdose deaths. They are not concentrated among elderly Americans, but rather in the 25-55 age bracket.
As the CEA report points out, the most prominent recent study of this topic, published in 2016, calculated the costs of opioid problems by measuring the costs of fatalities in terms of lost potential earnings. Thus, the biggest change in the CEA report is to value the deaths using the much larger value-of-a-statistical-life approach rather than lost earnings. The report notes:
"Among the most recent (and largest) estimates was that produced by Florence et al. (2016), who estimated that prescription opioid overdose, abuse, and dependence in the United States in 2013 cost $78.5 billion. The authors found that 73 percent of this cost was attributed to nonfatal consequences, including healthcare spending, criminal justice costs and lost productivity due to addiction and incarceration. The remaining 27 percent was attributed to fatality costs consisting almost entirely of lost potential earnings. ... Using conventional estimates of the losses induced by fatality routinely used by Federal agencies, in addition to making other adjustments related to illicit opioids, more recent data, and underreporting of opioids in drug overdose death certificates, CEA finds that the overall loss imposed by the crisis is several times larger than previous estimates."
Putting a monetary value on the loss of life is of course a vexed business. For previous discussions on this website of the "value of a statistical life," see "Value of a Statistical Life? $9.1 Million" (October 22, 2013) and "The Origins of the Value of a Statistical Life Concept" (November 25, 2014). The CEA report gives a quick overview of the academic literature on this point, and also on what numbers are actually used by government agencies:

Three Federal agencies have issued formal guidance on the VSL [value of a statistical life] to inform their rule-making and regulatory decision-making. The U.S. Department of Transportation’s (DOT) guidance (U.S. DOT 2016) suggests using a value of $9.6 million (in 2015 dollars) for each expected fatality reduction, with sensitivity analysis conducted at alternative values of $5.4 million and $13.4 million. According to a recent white paper prepared by the U.S. Environmental Protection Agency’s (EPA) Office of Policy for review by the EPA’s Science Advisory Board (U.S. EPA 2016), the EPA’s current guidance calls for using a VSL estimate of $10.1 million (in 2015 dollars), updated from earlier estimates based on inflation, income growth, and assumed income elasticities. Guidance from the U.S. Department of Health and Human Services (HHS) suggests using the range of estimates from Robinson and Hammitt (2016) referenced earlier, ranging from a low of $4.4 million to a high of $14.3 million with a central value of $9.4 million (in 2015 dollars). The central estimates used by these three agencies, DOT, EPA, and HHS, range from a low of $9.4 million (HHS) to a high of $10.1 million (EPA) (in 2015 dollars).
Putting a monetary value on lives lost raises other issues, too, like whether the same value is appropriate regardless of whether the lives lost are young, middle-aged, or elderly. Rather than trying to resolve these issues, a common approach is to offer a range of possible options. In this report, the preferred estimate is that total costs of the opioid epidemic in 2015 were $504 billion, of which $72.3 billion represents the kinds of costs measured in the earlier 2016 study, and the costs of lost lives account for $431.7 billion. As the report explains:
"There are several reasons why the CEA estimate is much larger than those found in the prior literature. First, and most importantly, we fully account for the value of lives lost based on conventional methods used routinely by Federal agencies in cost-benefit analysis for health related interventions. Second, the crisis has worsened, especially in terms of overdose deaths which have doubled in the past ten years. Third, while previous studies have focused exclusively on prescription opioids, we consider illicit opioids including heroin as well. Fourth, we adjust overdose deaths upward based on recent research finding significant underreporting of opioid-involved overdose deaths."
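As a rough consistency check on these magnitudes, one can divide the fatality cost by the central agency VSL values quoted earlier. This is not the CEA's actual method (the report adjusts VSLs by age, among other things), just back-of-envelope arithmetic:

```python
# Back-of-envelope check on the CEA numbers. The CEA used age-adjusted
# VSLs, so the implied death counts below are only a rough sanity check.
fatality_cost = 431.7e9   # CEA estimate of the cost of lives lost, 2015
nonfatal_cost = 72.3e9    # costs in the spirit of the earlier 2016 study

# Central VSL values quoted in the report (HHS, DOT, EPA, 2015 dollars).
for agency, vsl in [("HHS", 9.4e6), ("DOT", 9.6e6), ("EPA", 10.1e6)]:
    implied_deaths = fatality_cost / vsl
    print(f"{agency} VSL ${vsl/1e6:.1f}M -> ~{implied_deaths:,.0f} deaths")

total = fatality_cost + nonfatal_cost
print(f"Total: ${total/1e9:.1f} billion")   # $504.0 billion
```

The two components do sum to the headline $504 billion figure, and the implied number of deaths (in the mid-40,000s at these VSLs) is in the right neighborhood for opioid overdoses in 2015 once underreporting adjustments are included.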
I do not claim to be well-versed in this literature, but it seems to me as if lots of people are deploring the opioid epidemic, but not many changes are in the works that would plausibly bring a large reduction in these costs.

Tuesday, December 12, 2017

Snapshots of the US Housing Market: Ten Years Later

Ten years ago, December 2007, was the start of the Great Recession. Have US housing markets recovered? My go-to source for regular updates on the US housing market is "Housing Markets at a Glance," a monthly chartbook published by the Housing Finance Policy Center at the Urban Institute. Here are some snapshots from the most recent (November 2017) issue.

The total value of US housing can be broken down into homeowners' equity and the mortgage debt still outstanding. As this figure shows, during the fall in housing prices from 2006 to 2011, the total value of US housing fell by about $7 trillion--a fall of roughly 30%. Of course, the fall in housing prices didn't reduce the debt that people already owed (the blue line), so it mainly showed up in home equity (the yellow line), which fell by about 50%. Home equity is usually larger than outstanding debt, but that relationship reversed itself for a few years. However, the total value of US housing has now risen again and exceeds its level in 2006, while the amount of housing debt has actually declined a bit.
Here's a figure showing the corresponding annual change in home prices, using two housing price index (HPI) measures, one from CoreLogic and one from Zillow.

Unsurprisingly, the sharp decline in home prices led to severe stresses for households. This figure shows the share of home loans in serious delinquency or actual foreclosure. At its worst, about one-tenth of all mortgage loans in the entire US were more than 90 days delinquent or in foreclosure.

A much larger number of households were not delinquent on their mortgage, but found themselves "underwater"--that is, what they owed on the mortgage was more than the house would have been worth in a sale. In 2009, about one-fourth of all US homes with a mortgage had negative equity.

The economic story behind these charts--the loss of value, price meltdown, delinquencies, and negative equity--is cataclysmic. The charts also provide some evidence on what was happening behind the scenes in mortgage finance. To understand these figures, it's useful to know that most mortgages are now "securitized," meaning that they are financed by investors who purchase financial securities based on the underlying mortgages. These investors can be banks, pension funds, insurance companies, hedge funds, or others. The process of securitization can happen through the "government-sponsored enterprises" of Fannie Mae, Freddie Mac, and Ginnie Mae, or through the private sector with "private-label" securities.

One big change in the years just before the melt-down in housing prices was that the private-label securities expanded substantially, and in particular expanded into subprime and Alt-A mortgages, which are riskier than the usual "prime" mortgage. (Alt-A is a risk category in-between prime and subprime.)
The share of the housing market going to these private-label securities, which had been rising slowly in the late 1990s, spiked for a few years. But in the heat of the housing crisis, they melted away. Now the government-sponsored firms almost totally dominate mortgage-backed securities. Of course, they survive because they received huge federal government bailouts, as well as a promise that the government would stand behind them in the future.
In the aftermath of the housing market meltdown, and the near-demise of private-label mortgage-backed securities, it's no surprise that it's become harder to get a mortgage loan. Indeed, one can make a case that the market is still in overreaction mode. This index offers a calculation of the share of owner-occupied home purchase loans that are likely to default. It separates the borrower risk (the chance of default given the characteristics of the borrowers) from the product risk (the chance of default due to higher-risk loan types being made).

Thus, you can see the larger share of subprime and Alt-A loans arriving in the market--and how the product risk they brought with them pushed up the risk of default. The Urban Institute estimates that the period from about 2001-2003 can be viewed as one of "Reasonable Lending Standards," which implies that the very low level of expected defaults in recent years is part of an ongoing overreaction to what went so badly wrong.

There's of course a lot more to this story of the Great Recession. But it does support a concern that when financial regulators see a large and rapid build-up of a new kind of high-risk loan, they should seriously consider putting on the brakes. And it's a reminder to investors that when asset prices shoot up rapidly, as housing prices did in the early 2000s, it's wise to start thinking about how to ensure a soft landing.

Monday, December 11, 2017

Do You Rejoice for China?

Here is an op-ed piece I wrote for the Star Tribune, published on Sunday, December 10.

"China's rise: The wealth of a nation (not ours)
By Timothy Taylor"

When the economic histories of our time are written, 30 or 50 or 100 years from now, I strongly suspect that the main topic of discussion will not be U.S. budget deficits and taxes, nor health insurance, nor the struggles of the European Union with the euro, and perhaps not even “globalization” writ broadly.

Instead, history will see our era defined by the extraordinary economic rise of China.

Although this rise has been happening right in front of our eyes for almost 40 years, it has changed the lives of more than a billion people in ways that are not fully appreciated. Here are a few measures of how life in China changed between about 1980 and the present, according to World Bank data:

  • The share of China’s population below the poverty line, modestly defined as having a consumption level of $3.10 per capita per day, has fallen from 99 percent of the population to 11 percent.
  •  Per capita GDP has risen from $200 per person to $8,200 per person.
  •  Life expectancy has risen from 66 years to 76 years.
  •  Infant mortality per 1,000 live births has fallen from 48 to 9.
  • The literacy rate for those 15 and older has risen from 66 percent to 96 percent.
  • The share of China’s total population over age 25 who have completed a secondary-level (high school) education has risen from 6 percent to 22 percent.
Such a list could be extended, of course. But the bottom line is that more than a billion people in China have risen out of a combination of grinding poverty, poor health and low levels of education to what the World Bank classifies as “upper middle income.” A Chinese person who was a young adult back in 1980 has observed the entire process in his or her own lifetime — and hasn’t yet reached retirement age.
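The pace of change behind these numbers is worth pausing over. Here's a back-of-the-envelope sketch of the average annual growth rate implied by the per capita GDP figures above, assuming a span of roughly 36 years from 1980 to the mid-2010s (the exact window and any price adjustments would shift the number somewhat):

```python
# Implied average annual growth of China's per capita GDP in nominal US
# dollars, assuming the rise from $200 to $8,200 took roughly 36 years.
start, end, years = 200.0, 8200.0, 36

# Compound annual growth rate: (end/start)^(1/years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied average growth: {cagr:.1%} per year")  # roughly 10.9%
```

Sustaining double-digit average growth for more than a generation is what makes the transformation visible within a single working lifetime.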

So, do you rejoice for China? Adam Smith, who launched the systematic study of economics in 1776 with “The Wealth of Nations,” published an earlier tome in 1759 called "The Theory of Moral Sentiments,” which includes a meditation on how most people in the West think about the welfare of people in faraway China. Smith wrote:
"Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connexion with that part of the world, would be affected upon receiving intelligence of this dreadful calamity.
“He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general.
“And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befal himself would occasion a more real disturbance.
“If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own.”
In Smith’s spirit, one might ask: Do you rejoice that China’s economic growth has lifted hundreds of millions of people out of the most dire and terrible poverty? Or do you wish the process had been considerably more restrained, and slower? Or deep down, does a part of you sort of wish that it had not happened at all?

After all, the earthquake of China’s shift to a moderate prosperity has caused tremors throughout the world economic and political systems. A shortlist of the aftershocks would include economic dislocations to wages, interest rates and communities throughout the world; theft of intellectual property and technology; environmental costs, like severe air pollution experienced mostly in China, as well as the global effects of China’s role as by far the leading emitter of carbon and other gases related to climate change; and political disruptions and muscle-flexing, especially with other Asian nations.

Of course, the rest of the world has also experienced positive effects from China’s economic transformation. Consumers of products with Chinese inputs have benefited from lower prices, and now are starting to benefit more from Chinese-developed technology. Investment funds from China have helped to finance U.S. government borrowing and have encouraged economic development in certain parts of Africa and Latin America.

But when thinking about China, it seems to me remarkably easy to focus on negative effects, and more generally on how China’s economic growth has affected the U.S. or other nations outside China. And in doing so, it seems remarkably easy to undervalue the transformative and improved lives of more than a billion of our fellow humans.

Back in 1759, Adam Smith argued that when thinking about the welfare of faraway people, it wasn’t going to be enough to rely on “love of neighbor” or “love of mankind.” He wrote that “it is not that feeble spark of benevolence which Nature has lighted up in the human heart, that is thus capable of counteracting the strongest impulses of self-love.”

Instead, Smith argued that people should listen to a much tougher judge than feelings of love or benevolence — namely their own conscience. He wrote:
“It is a stronger power, a more forcible motive, which exerts itself upon such occasions. It is reason, principle, conscience, the inhabitant of the breast, the man within, the great judge and arbiter of our conduct. It is he who, whenever we are about to act so as to affect the happiness of others, calls to us, with a voice capable of astonishing the most presumptuous of our passions, that we are but one of the multitude, in no respect better than any other in it; and that when we prefer ourselves so shamefully and so blindly to others, we become the proper objects of resentment, abhorrence, and execration.”
Dramatic and substantial real-world change is messy. In some ways, it was easier to be sympathetic with China back in the 1970s and early 1980s, when it was a poor country. It was picturesque, sentimental and sometimes just a little patronizing to watch some cultural dances and Chinese pingpong players, and sometimes at the end to make a moderate donation to help feed children or support schools.

But those times are done. Even with all the concerns about past side effects of China’s economic growth or future policy decisions that will need to be made, I rejoice for China.

Timothy Taylor is managing editor of the Journal of Economic Perspectives, based at Macalester College in St. Paul. He blogs at

Friday, December 8, 2017

Natural Fisheries Overtaken by Aquaculture

Fisheries are a standard example for economists of the "tragedy of the commons." For any individual fisherman, it makes sense to catch as many fish as possible. However, if all fishermen act in this way and if the number of fishermen grows substantially over time, the underlying common resource can become depleted and unable to renew itself. In fact, this scenario has actually taken place with the world's natural fisheries, where production peaked a couple of decades ago and has been stagnant or declining since then. The just-published OECD Review of Fisheries: Policy and Summary Statistics 2017  notes: "Production of wild-caught fish in OECD countries is considerably below its peak in the late 1980s and continues to decline."

There are two ways out of this box. One way is to figure out a method of limiting what fishermen catch, which would over time allow natural fishing stocks to rebuild so that the total catch could be greater in the medium- and long-run. I've written about proposals and analysis along these lines in "Saving Global Fisheries with Property Rights" (April 12, 2016) and "More Fish Through Less Fishing" (May 10, 2017). The obvious difficulty is that while it would be in the broad interest of a fishing industry to have limits on what can be caught, so that the resource is preserved, the practical issues of determining who should be allowed to catch how much, and of enforcing such decisions, can be difficult.

The other approach is to have fish production migrate away from wild catch, and move toward "aquaculture," in which a certain body of water is no longer a common resource, but instead is owned by a fish producer. Aquaculture appears to be on its way to surpassing natural catch. As the OECD report notes:
"Global aquaculture production already exceeds the volume of catch from wild fisheries, if aquatic plants are included. Annual average aquaculture growth in OECD countries has accelerated and now averages 2.1% per year. Globally, it is even more rapid, at 6% per year. Moreover, average prices of aquaculture products are increasing ..."
Most of the OECD report is a point-by-point overview of what is happening in individual countries. There is lots of "reviewing and revising," and "advancing reforms" and "latest major policy developments." But at least to me, it's revealing that "Countries are also working actively to promote the sustainable development of aquaculture, which is seen as the primary source of future growth in fish production." This emphasis suggests that the process of rebuilding natural stocks of fish has a long way to go.

There is also a chapter on government support for the fishing industry. In most countries, other than China, fishermen are not supported directly; instead, the industry receives indirect support equal to about one-sixth of the value of its annual production. The OECD report notes:
"The Fisheries Support Estimate (FSE) Database now inventories budgetary support to fisheries that totals USD 13 billion (EUR 11.7 billion) in 33 countries and economies in 2015. For the first time, data for the People's Republic of China (hereafter, "China") is included in the database, revealing the scale of policies in this important fishing nation. Nearly 88% of all support transferred to individual fishers recorded in the database originates in China. In a positive development, China has announced plans to progressively reduce this subsidy. For most other countries and economies in the database, support to general services to the sector, rather than transfers to individual fishers, dominate. Governments invest a significant amount of resources to this kind of support, which includes management, enforcement, research, infrastructure and marketing. On average, these expenditures by government equal 16% of the value of landings: that is, USD 1 in every 6 earned by the sector. While some governments recoup these costs from fishers, this approach is not commonly applied and accounts for only a small percentage of the total outlay on general services to the sector."
The geography and policy issues of fisheries are in many ways more national and regional than truly international. But the broader management of ocean resources and ecology is a global issue, with fisheries as one measure of the health of this ecosystem.

Thursday, December 7, 2017

Why More Americans Seem Stuck in Place

One traditional stereotype of the US economy is that it includes a high degree of physical mobility of workers and families: between states, between rural and urban areas, between suburbs and inner cities, and so on. In theory, this mobility offers possibilities for adjusting to economic shocks and for seeking out opportunities, which in turn is part of what makes a fluid and flexible market economy work. But in fact, Americans are moving less. David Schleicher discusses the issue in "Stuck! The Law and Economics of Residential Stagnation," appearing in the Yale Law Journal (October 2017, 127:1, pp. 78-154). He writes (footnotes omitted):
"Leaving one’s home in search of a better life is, perhaps, the most classic of all American stories. ... But today, the number of Americans who leave home for new opportunities is in decline. A series of studies shows that the interstate migration rate has fallen substantially since the 1980s. Americans now move less often than Canadians, and no more than Finns or Danes. ... [M]obility rates are lower among disadvantaged groups and that mobility has not increased despite becoming “more important” to individual economic advancement.
"More troubling still, Americans are no longer moving from poor regions to rich ones. This observation captures two trends in declining mobility. First, fewer Americans are moving away from geographic areas of low economic opportunity. David Autor, David Dorn, and their colleagues have studied declining regions that lost manufacturing jobs due to shocks created by Chinese import competition. Traditionally, such shocks would be expected to generate temporary spikes in unemployment rates, which would then subside as unemployed people left the area to find new jobs. But these studies found that unemployment rates and average wage reductions persisted over time. Americans, especially those who are non-college educated, are choosing to stay in areas hit by negative economic shocks. There is a long history of localized shocks generating interstate mobility in the United States; today, however, economists at the International Monetary Fund note that “following the same negative shock to labor demand, affected workers have more and more tended to either drop out of the labor force or remain unemployed instead of relocating.”
"Second, lower-skilled workers are not moving to high-wage cities and regions. Bankers and technologists continue to move from Mississippi or Arkansas to New York or Silicon Valley, but few janitors make similar moves, despite the higher nominal wages on offer in rich regions for all types of jobs. As a result, local economic booms no longer create boomtowns. Economically successful regions like Silicon Valley, San Francisco, New York, and Boston have seen only slow population growth over the last twenty-five years. Inequality between states has become entrenched. Peter Ganong and Daniel Shoag have shown that a hundred-year trend of “convergence” between the richest and poorest states in per-capita state Gross Domestic Product (GDP) slowed in the 1980s and now has effectively come to a halt."
Schleicher makes the argument that state and local economic policies (and a few federal ones) are major contributors to this lack of mobility. More broadly, he argues that state and local policy is often much more strongly affected by those voters already in place who prefer stability, rather than by those who have not yet moved to the area and might prefer evolution and growth.
"[S]tate and local (and a few federal) laws and policies have created substantial barriers to interstate mobility, particularly for lower-income Americans. Land-use laws and occupational licensing regimes limit entry into local and state labor markets. Differing eligibility standards for public benefits, public employee pensions, homeownership tax subsidies, state and local tax laws, and even basic property law doctrines inhibit exit from low-opportunity states and cities. Building codes, mobile home bans, location-based subsidies, legal constraints on knocking down houses, and the problematic structure of Chapter 9 municipal bankruptcy all limit the capacity of failing cities to shrink gracefully, directly reducing exit among some populations and increasing the economic and social costs of entry limits elsewhere.  ....
"A number of these policies changed substantially in ways that made populations stickier during the period when mobility fell. It is not clear whether these legal changes caused declines in mobility, or simply failed to push back against “natural” changes that reduced mobility—such as an aging population, declining churn in employment, and decreasing diversity of employers by region due to the increasing economic dominance of the service sector. But state and local policies in part dictate where people move, particularly by keeping people out of the richest metropolitan areas and best job markets. Whether as a direct cause or as mere bystanders, state, local, and federal laws therefore bear some responsibility for declining interstate mobility.... In aggregate, these local and state policies play a substantial role in creating or failing to combat the central macroeconomic problems of our time: slow growth rates, increasing inequality of wealth and income, and the difficulties of balancing inflation and unemployment. ...
"However, state and local policies must answer to state and local needs, which are often in tension with broader national interests. ... [T]he structure and process of state and local government decisionmaking often overrepresents the voices of those local residents who care the most about stability and the least about growth. State and local governments have few incentives to consider broader national economic implications when writing zoning codes or establishing public pension rules. ... Where local or state governments have the power to limit entry or reduce exit, the harm to agglomerative efficiency, and thus national economic output, is substantially increased."
Mobility has traditionally been a way of smoothing the transitions that are a part of any dynamic and growing economy. Of course, lack of mobility isn't all that's ailing US labor markets. But I think it's a meaningful contributor.

Wednesday, December 6, 2017

What Financial Risks are Lurking

The Office of Financial Research, within the US Department of the Treasury, was created by the Wall Street Reform and Consumer Protection Act of 2010 (commonly known as the Dodd-Frank act) to provide analysis and data for the Financial Stability Oversight Council, another creation of the same law. Its Financial Stability Report 2017 discusses some "key vulnerabilities" of the financial system.

Cybersecurity Incidents. "Cybersecurity incidents rank near the top of our threat assessment because of the potential for disruption of operational and financial networks, and the damage such disruptions could cause to financial stability and to the broader economy. Cyber incidents can affect financial stability if defenses fail."

Resolution Risks at Systemically Important Financial Institutions. The term "resolution risk" refers to what process will begin if a big financial institution becomes insolvent. The regulators are still struggling to address some possible issues. "The treatment of derivatives held by a failing financial firm continues to present a conundrum for policymakers seeking to balance contagion and run risks against moral hazard concerns. Tools for orderly resolution of failing systemic nonbank financial firms remain less developed than for banks, despite the material impact of some nonbank failures in the past and the growing importance of nonbanks, particularly central counterparties (CCPs), in the financial system."

A Single Bank Deals with all Treasury Securities. "The Treasury market will soon be more dependent on a single bank for the settlement of Treasury securities and related repos. A service disruption, such as an operational risk incident or even the bank’s failure, could impair the liquidity and functioning of these markets because some customers will need time to move their operations elsewhere. It could also disrupt other markets that rely on Treasuries for pricing and funding. The 2007-09 financial crisis showed the damage that can be done if activity in short-term funding markets is constrained. Dealers in Treasury securities use clearing banks to settle Treasury cash transactions. Since the 1990s, these services have been provided by two clearing banks, JPMorgan Chase & Co. and Bank of New York Mellon Corp. (BNY Mellon). With JP Morgan Chase’s announcement in July 2016 that it intends to cease provision of government securities settlement services to broker-dealer clients, this business will be concentrated in a single bank. A disruption in BNY Mellon’s Treasury settlement could have broad implications for the Treasury market. It could disrupt trading in Treasuries. If settlement services were interrupted for an extended period, risks could spread further to markets that rely on the Treasury market for hedging and pricing."

Fragmentation of Stock Markets. In 1996, almost all stock trading happened on the main exchanges of the NYSE or Nasdaq. Now, NYSE and Nasdaq each run a number of separate exchanges, and there are 50 off-exchange stock markets. This fragmentation raises possibilities of  playing one market against another, or of liquidity failing in one market with effects cascading across other markets.

The Shift Away from LIBOR. The London Interbank Offered Rate, or LIBOR, has long been a benchmark for global financial transactions. Some will remember the scandal about a decade ago when it came to light that some traders had been making money by nudging the benchmark rate up or down. But even apart from that episode, LIBOR is no longer a good benchmark.
"Interest payments on at least $10 trillion in credit obligations and more than $150 trillion in the notional value of derivatives contracts were linked to U.S. dollar LIBOR at the end of 2013. But LIBOR is unsustainable across a number of currencies. It is based on a survey of a shrinking pool of market participants and reflects transactions in a shrinking market. Most LIBOR survey submissions are based on judgment rather than actual trades, and the rate tracks unsecured transactions, which represent a small share of banks’ wholesale funding."
A transition is underway to a new benchmark rate, the Secured Overnight Financing Rate (SOFR), which will be generated by the Federal Reserve Bank of New York. But shifting over tens of trillions of dollars in transactions from one benchmark rate to another may bring some bumps.

Risks if Low Interest Rates and Volatility Increase Risk-Taking. "The OFR has highlighted in each of our annual reports the risk that low volatility and persistently low interest rates may promote excessive risk-taking and create vulnerabilities. ... The increase in already-elevated asset prices and the decrease in risk premiums may leave some markets vulnerable to a large correction. Such corrections can trigger financial instability when important holders or intermediaries of the assets employ high degrees of leverage or rely on short-term loans to finance long-term assets. ... Equity valuations are high by historical standards. The cyclically adjusted price-to-earnings ratio of the S&P 500 is at its 97th percentile relative to the last 130 years. ... Real estate is another area of concern. U.S. house prices are elevated relative to median household incomes and estimated national rents, although these ratios are well below the levels observed just before the financial crisis. ... Valuations are also elevated in bond markets. ... Duration — the sensitivity of bond prices to interest rate moves — has steadily increased since the crisis."

Other concerns are mentioned as well, but just to be clear, the 2017 report is in no way alarmist or predicting doom. Instead, the lesson is that it's a lot better to deal with vulnerabilities at times when the financial system is not under stress or in crisis. 

Tuesday, December 5, 2017

Federal Income Taxes at the Highest Income Levels

There's an undeniable fascination with looking at the highest income levels and their tax payments. Adrian Dungan provides a glimpse in "Individual Income Tax Shares, 2014," which was published in the IRS house journal Statistics of Income Bulletin (Spring 2017, pp. 12-23).

Here's a figure showing the share of returns and the share of income taxes paid. For example, the top 1% of income tax returns in 2014 accounted for 20.6% of all income, but 39.5% of all income tax. The top 50% of all tax returns accounted for 88.7% of all income and 97.3% of all income tax. Which in turn implies that the bottom half of all tax returns accounted for 11.3% of all income and 2.7% of all income tax.

Here's a figure focused on the very upper end of this distribution. About 137 million tax returns were filed in 2014. Thus, the top 1% of those returns refers to the top 1.37 million tax returns; 0.1%, the top 137,000 returns; 0.01%, the top 13,700 returns; and 0.001%, the top 1,370 returns. The bars for the top 1% show the same numbers as in the figure above. But the top 0.001% accounts for 2.1% of all income and 3.6% of all income taxes.

What are the income levels for these different groups? The top 10% kicks in at about $130,000; the top 1% is at $460,000.
At the extreme upper end, the top 0.001% of tax returns reported income of nearly $60 million in 2014.
Finally, here's some information on the average income tax rates paid by those in the highest brackets. A few points are worth noting here: 1) This is an average tax rate, not a marginal tax bracket--so these people are paying much higher tax rates on the marginal dollar; 2) Average taxes on those with very high incomes rose in 2012; and 3) The very highest income levels of 0.01% and 0.001% have slightly lower average tax rates, probably because these very high levels of income are likely to take the form of long-term capital gains that are taxed at a lower rate than regular income.

A few words of warning are appropriate before over-interpreting the figures here. These figures and percentages apply only to federal income tax. They do not cover the federal payroll taxes that fund Social Security and Medicare, nor do they cover state and local taxes like sales, property, and income taxes. Thus, the figures do not show overall tax burden. The higher burden of income taxes on those with high income levels, as a share of their incomes, can be thought of as counterbalancing how other major taxes like sales tax and payroll taxes weigh more heavily on those with lower incomes, as a share of their incomes.

Monday, December 4, 2017

Tax Reform With Spending and Taxes at Historical Averages

It's conceptually possible, if not always practically convenient, to separate tax policy into two main pieces. One issue is the tax cut vs. tax hike debate--that is, whether the total amount being collected should be higher or lower. The other issue is whether the tax code should be adjusted in some way to alter its incentives and disincentives. As one example, the 1986 Tax Reform Act was more-or-less neutral in the amount of revenue it collected, but it altered the incentives of the tax code by combining lower marginal tax rates with a reduction in the availability of various deductions, credits, and exemptions.

In thinking about the current tax bill, first consider the question of how US taxes and spending compare with historical levels. Here's a figure from the Congressional Budget Office report, "An Update to the Budget and Economic Outlook: 2017 to 2027" (June 2017). A little-remarked fact about the present state of the federal budget is that the level of federal spending is almost exactly at its 50-year average of 20.3% of GDP, while the level of total federal taxes is pretty much right at its historical average of 17.4% of GDP. Thus, the budget deficit at present is also very close to its long-run average of 2.9% of GDP.

When looking at a government's debt burden over time, the most useful quick metric is the ratio of total accumulated debt to GDP. Here's a CBO figure from March 2017 showing this metric for the US economy over time, along with projections for the next couple of decades. The common pattern is that the debt/GDP ratio rises sharply during wartime and around times of extreme economic stress like the Great Depression of the 1930s and the more recent Great Recession.

But the Great Recession ended back in June 2009, and the US unemployment rate has been 5% or lower for more than two years, since September 2015. Moreover, the long-term projections from the CBO suggest that existing government programs are going to exert very large pressures for higher government debt in the next couple of decades, as the boomer generation retires and health care costs continue to rise. When, not if, the next recession arrives, it will be a good time to run larger deficits again. But the case for a tax cut to stimulate an economy that reported a 4.1% unemployment rate in October 2017 is weak.

What about the effects of the tax bill on economic incentives? I sometimes use the analogy that an economy carrying a tax burden is similar to a hiker carrying gear for a back-country excursion. If the hiker has a well-fitted and well-padded backpack, with the weight nicely distributed, it's a lot easier to hike all day. If you took the exact same camping gear and randomly attached it to the hiker's body--some on the feet, the heaviest weight on the right arm and nothing on the left arm--that same amount of weight becomes very difficult to carry. Thus, the question of tax reform is not only whether the burden should be higher or lower, but also how best to distribute a given amount of weight.

There are of course lots of estimates of how the tax bill will affect incentives, but the estimates of the Joint Committee on Taxation are especially worthy of notice. Because Republicans control Congress, that party also controls the Joint Committee on Taxation. However, many staff members of the JCT soldier on from one administration to the next, showing some willingness to be flexible as their political guidance changes, but also some stubbornness in insisting on a certain level of consistency and logic in their estimates. Thus, economists who tend to align with the Democratic party, like Larry Summers, Jason Furman, and Paul Krugman, have all been willing to cite the JCT estimates as a reasonable basis for discussion (although I'm sure they also disagree with these estimates in various ways).

Here are some comments from the JCT report, "Macroeconomic Analysis of the “Tax Cut and Jobs Act” as Ordered Reported by the Senate Committee on Finance on November 16, 2017" (November 30, 2017):
We estimate that this proposal would increase the level of output (as measured by Gross Domestic Product) by about 0.8 percent on average over the 10- year budget window. That increase in income would increase revenues, relative to the conventional estimate of a loss of $1,414 billion ... by $458 billion over that period. This budget effect would be partially offset by an increase in interest payments on the Federal debt of about $50 billion over the budget period. We expect that both an increase in GDP and resulting additional revenues would continue in the second decade after enactment, although at a lower level, as many of the provisions that are expected to increase GDP within the budget window expire before the second decade.
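The budget arithmetic in the estimate above can be checked directly. A minimal sketch, using only the dollar figures quoted from the JCT:

```python
# Figures quoted from the JCT estimate, in billions of dollars,
# over the 10-year budget window.
conventional_revenue_loss = -1414   # conventional (static) estimate
growth_feedback = 458               # added revenue from higher GDP
extra_interest = -50                # added interest on federal debt

net_budget_effect = (conventional_revenue_loss
                     + growth_feedback
                     + extra_interest)
print(net_budget_effect)  # -1006, i.e. roughly $1 trillion over 10 years
```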
Thus, this estimate incorporates a moderate version of the Republican belief that the tax cut will boost growth, but even after adding such an effect, taxes are estimated to be about $1 trillion lower over 10 years. What about more specific changes to the individual income tax? The JCT report summarizes the main changes in this way:

"The bill changes individual income tax rates, lowering the top individual income tax rate from 39.6 percent to 38.5 percent, creating an additional individual income tax rate bracket, and lowering statutory tax rates for most tax rate brackets, while changing the measure used to adjust the brackets for inflation from the present law consumer price index (“CPI-U”) to the chained consumer price index (“chained CPI”). The chained CPI grows more slowly than the CPI-U, thus resulting in people over time moving into higher rate brackets at a faster rate under the bill than under present law. The bill also reduces individual shared responsibility payments for failure to obtain qualified health insurance coverage enacted as part of the affordable care act to zero. At the same time, the proposal eliminates a number of deductions and credits from their individual taxable income while increasing others. The biggest changes include eliminating personal exemptions while increasing the standard deduction, and increasing the maximum amount of the child tax credit while increasing the income range over which individuals may claim it." 
Thus, while the bill does reduce taxes at high income levels, that doesn't seem to me the main thrust of the bill. The cost of the dramatic rise in the standard deduction, and to a lesser extent in the child tax credit, is very high. To me, one of the most interesting dimensions of this change is that with a much higher standard deduction, many fewer taxpayers would find it worthwhile to itemize deductions. Thus, if or when proposals resurface a few years from now to reduce popular deductions like the ones for home mortgage interest or state and local taxes, many fewer people will be using those deductions, and the political calculus around them may shift.
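The chained-CPI mechanism in the quoted passage can be illustrated with a small sketch. The inflation rates and bracket threshold below are hypothetical, chosen only to show why indexing brackets to a slower-growing price index gradually pushes more income into higher brackets:

```python
def indexed_threshold(base, annual_rate, years):
    """Bracket threshold after `years` of indexing at `annual_rate`."""
    return base * (1 + annual_rate) ** years

base_bracket = 100_000   # hypothetical bracket threshold
cpi_u = 0.022            # assumed CPI-U inflation rate
chained_cpi = 0.020      # assumed chained CPI (grows more slowly)

for years in (10, 20):
    print(years,
          round(indexed_threshold(base_bracket, cpi_u, years)),
          round(indexed_threshold(base_bracket, chained_cpi, years)))
# The chained-CPI threshold falls further behind each year, so a given
# nominal income crosses into the higher bracket sooner.
```

Under these assumed rates, the gap between the two thresholds compounds over time, which is exactly the "faster bracket creep" the JCT describes.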

The bill also shifts business taxation, with a goal of reducing corporate tax rates and encouraging firms to repatriate earnings now held abroad. It's hard to remember amidst the political din, but these were also announced goals of the Obama administration. For example, a joint report from the Obama White House and the Department of the Treasury in April 2016, called "The President’s Framework for Business Tax Reform: An Update," included comments like:
"The Framework would eliminate dozens of different tax expenditures and fundamentally reform the business tax base to reduce distortions that hurt productivity and growth. It would reinvest these savings to lower the corporate tax rate to 28 percent, putting the United States in line with major competitor countries and encouraging greater investment in America. ... Our tax system should not give companies an incentive to locate production overseas or engage in accounting games to shift profits abroad, eroding the U.S. tax base."
For comparison, here's the JCT description of the corporate tax changes in the Senate version of the tax reform plan:
"In addition, the bill lowers the corporate income tax rate from 35 percent to 20 percent beginning in 2019; and, it increases the rate of bonus depreciation to 100 percent while extending it for five years, from 2018 through 2022. The bill also repeals or limits deductions for a number of business expenses, the largest of which is a 30 percent limit on interest deductibility. Finally, the bill makes significant changes to the taxation of both foreign and domestically controlled multinational entities. It would allow domestic corporations to receive a dividend from their foreign subsidiaries without incurring United States tax on the income. It also creates a new minimum tax for certain related party transactions in order to reduce the erosion of the United States corporate income tax base. In a further effort to reduce base erosion, it equalizes the tax treatment of specified high return income from foreign sales whether they are earned through a foreign corporation or a domestic corporation."
There are clear differences between the plans, of course. The Obama administration was talking about cutting the corporate tax rate to 28%, not 20%. In addition, the Obama plan emphasized that changes to corporate taxes should be revenue-neutral. But on the other side, the Obama proposal was a white paper, not actual legislation, which means it had not been put through the Congressional meat-grinder where seemingly every legislator demands a sweet tidbit of their own devising in exchange for supporting the bill.

Assuming this tax bill moves forward and becomes law in essentially its current form, one of the most interesting aspects to keep track of will be its effect on investment. There is a widespread fear that ongoing low levels of investment are slowing US economic growth, both in the short run and the long term. A common solution proposed by Democratic-leaning economists has been to support a high level of infrastructure spending, and before President Trump was elected, it was common to hear arguments pointing out that if an infrastructure investment could be financed at today's low interest rates, and if that infrastructure investment brought a long-term payoff, it would be economically sensible to undertake the project even if it increased short-run budget deficits. In effect, the current Republican tax bill repurposes that argument into a claim that if certain tax changes call forth sufficient private sector investment, then increased budget deficits are worth it as well.

This already overlong blog post isn't the place to try to sort through the merits of public-sector and private-sector investment, and whether the kind of politically-driven infrastructure spending on roads and bridges that typically bubbles up through Congress is the most productive way to build a strong base for the US economy in the 21st century. I think it might be even more useful to consider an infrastructure agenda applying to energy resources and to data networks, and for hardening this infrastructure against physical- and cyber-attack. But focusing just on the Republican tax plan, the additional budget deficits seem certain to be very high and the promised investment benefits seem relatively small and uncertain.

Friday, December 1, 2017

Is Job Disruption Historically Low in the US Economy?

Discussions of how advances in technology, trade, and other factors lead to disruption of jobs often seem to begin with an implicit claim that it was all better in the past, when most workers supposedly had well-paid, secure, and life-long jobs. Of course, we all know that this story isn't quite right. After all, about one-half of US workers were in agriculture in 1870, down to one-third by early in the 20th century, and less than 3% since the mid-1980s. About one-third of all US nonagricultural workers were in manufacturing in 1950, and that has now dropped to about 10%. These sorts of shifts suggest that job disruption and shifts in occupation have been a major force in the US economy throughout its history.

Indeed, Robert D. Atkinson and John Wu argue that the extent of job disruption was higher in the US economy in the past in "False Alarmism: Technological Disruption and the U.S. Labor Market, 1850–2015," written for the Information Technology & Innovation Foundation (May 2017). They write:
"It has recently become an article of faith that workers in advanced industrial nations face almost unprecedented levels of labor-market disruption and insecurity. ... When we actually examine the last 165 years of American history, statistics show that the U.S. labor market is not experiencing particularly high levels of job churn (defined as the sum of the absolute values of jobs added in growing occupations and jobs lost in declining occupations). In fact, it’s the exact opposite: Levels of occupational churn in the United States are now at historic lows. The levels of churn in the last 20 years—a period of the dot-com crash, the financial crisis of 2007 to 2008, the subsequent Great Recession, and the emergence of new technologies that are purported to be more powerfully disruptive than anything in the past—have been just 38 percent of the levels from 1950 to 2000, and 42 percent of the levels from 1850 to 2000. ...
"Indeed, if we could go back in time and ask someone in 1900 about the pace of technological change, they would likely tell a similar story about its acceleration, citing the proliferation of amazing innovations (e.g., cars, electric lighting, the telephone, the record player). But notwithstanding iconic innovations such as electricity, the internal combustion engine, the computer, and the Internet, change is almost always more gradual than many think. Indeed, as historian Robert Friedel notes, “even the technological order seems more characterized by stability and stasis than is often recognized.” And as discussed below, that is likely to be the case regarding technology-induced labor market change."
The paper is packed with examples of American jobs that have boomed and then diminished over time. The number of workers on railroads boomed in the late 19th century, but fell throughout the 20th century. "Seventy years ago, tens of thousands of young men and boys worked in bowling alleys as pinsetters, setting up the pins after the bowlers had knocked them down." More than 110,000 people were employed as elevator operators in 1950. The number of motion picture projectionists fell from almost 25,000 in 1970 to about 3,000 today. The number of automobile mechanics peaked at over 1.8 million in 2000, but had fallen by over 300,000 to about 1.5 million by 2010--mainly because improvements in auto quality made a lot of mechanics obsolete. "For example, while 180,000 Americans were employed as travel agents at the turn of the millennium, with the emergence of Internet-based travel booking, just over 90,000 were employed in 2015. Likewise, there are 57 percent fewer telephone operators, 41 percent fewer data-entry clerks, and 3 percent fewer postal-mail carriers than there were in 2000, even though the volume of information transactions has grown, all because of digital automation and substitution."

The review isn't exhaustive: for example, the paper doesn't mention that in the late 1940s, AT&T employed more than 350,000 switchboard operators, or that the number of telephone operators who provided phone numbers and connected calls was still in the tens of thousands just a few decades ago.

But of course, a pile of examples isn't always fully persuasive; as the social scientists like to say, the plural of "anecdote" is not "data." But it's worth remembering that even in periods when the US economy is pretty much universally acknowledged to have been running on high, like the 1960s, there was considerable turnover in job categories and occupations. Atkinson and Wu write:
"For example, in the 1950s and 1960s, many occupations grew extremely fast, even after controlling for employed worker growth. For example, in the 1960s, 885,000 janitors were added as offices expanded, 700,000 nursing aides as health-care consumption increased, and 600,000 secondary-school teachers as today’s baby boomers started to enter high school. At the same time, many occupations either declined outright or grew much more slowly than overall labor-force growth. For example, office-machine operators (except computers) fell by over 400,000; office clerks fell by 1.8 million; material moving workers fell 1.5 million; and other production workers fell by 1.9 million workers, as manufacturers increased automation."
Is there some way to get a systematic handle on the amount of occupational change in the US economy over time? As you might expect, the available data are limited as one goes back in time, but there is the Census. Thus, Atkinson and Wu suggest a number of measures of occupational change. For example:

"The first measures change in each occupation relative to overall occupational change. With this method, even if an occupation doesn’t lose jobs, if it didn’t grow as fast as the overall labor market, the delta between that growth and overall labor force growth would be calculated as churn. In other words, if a particular occupation grew 4 percent in a decade but the overall number of jobs grew 10 percent, the rate of change would be negative 6 percent. Likewise, if employment in an occupation grew 15 percent in a decade, but the overall number of jobs grew 10 percent, the rate of change would be 5 percent. Absolute values were taken of negative numbers, and the sum of employment change was calculated for all occupations. This was then divided by the number of jobs at the beginning of the decade to measure the rate of churn. ... 
"The findings are clear: Rather than increasing, the rate of occupational churn in the last few decades is the lowest in American history, at least since 1850. Under method one, using the occupational categories of 1950, occupational churn peaked at over 50 percent in the decades between 1850 to 1870. (See figure 7.) But it was still above 25 percent for the decades from 1920 to 1980. In contrast, it fell to around 20 percent in the 1980s and 1990s, to just 14 percent in the 2000s, and 6 percent in the first half of the 2010s."
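The first churn measure described above is easy to state in code. Here is a minimal sketch; the method follows the description quoted above, but the employment numbers are made up purely for illustration:

```python
def occupational_churn(start_jobs, end_jobs):
    """Churn over a period, per Atkinson and Wu's first measure.

    start_jobs / end_jobs: dicts mapping occupation -> employment at
    the start and end of the period.
    """
    total_start = sum(start_jobs.values())
    total_end = sum(end_jobs.values())
    overall_growth = (total_end - total_start) / total_start

    churned_jobs = 0.0
    for occ, n0 in start_jobs.items():
        n1 = end_jobs.get(occ, 0)
        occ_growth = (n1 - n0) / n0
        # Deviation from overall growth, converted to jobs; the absolute
        # value means both faster- and slower-than-average occupations
        # count as churn.
        churned_jobs += abs(n0 * (occ_growth - overall_growth))

    return churned_jobs / total_start

# Hypothetical decade: overall employment grows 10%, but occupations
# grow at very different rates.
start = {"clerks": 1000, "teachers": 500, "mechanics": 500}
end = {"clerks": 900, "teachers": 700, "mechanics": 600}
print(occupational_churn(start, end))  # roughly 0.2, i.e. 20% churn
```

In this toy example, clerks shrink 10% while total employment grows 10%, so clerks contribute a 20-percentage-point deviation; summing all the absolute deviations and dividing by starting employment gives a churn rate of about 20%.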

This specific measure is surely rough-and-ready, and so the authors offer some other approaches along these general lines. The same lesson keeps coming up. The dramatic shifts in agricultural jobs from the 19th century into the 20th century, the rise and fall of manufacturing jobs, and many other shifts in technology and trade have been causing the US economy to have a high level of occupational shifts for a long time. Since the start of the 21st century, the level of occupational shifts has actually been relatively low.

This analysis cuts against conventional wisdom. But it does fit in a broad sense with some other evidence: for example, the evidence that Americans are moving less, or that job losses as a share of total US employment (job losses are always happening amid the movement and churn of the US economy) are on a downward trend. Here's one more figure from Atkinson and Wu:

There are lots of reports out there about how technology will affect the jobs of the future, ranging from the sensible to the weirdly apocalyptic. A good sensible example is the recent report from McKinsey on "What the future of work will mean for jobs, skills, and wages" (December 2017). There's lots of useful and thought-provoking analysis on what jobs will change, in what ways, in what countries. But one bottom line of the analysis is an estimate that overall, "Our scenarios suggest that by 2030, 75 million to 375 million workers (3 to 14 percent of the global workforce) will need to switch occupational categories." The numbers are big. But that degree of occupational change over the next dozen or so years is not at all unprecedented.