Saturday, July 30, 2016

E.g., Abandon i.e. and Other Latinisms, etc.

I feel so vindicated.

Someone named Persis Howe recently blogged at a UK government website that the style guide for the gov.uk website is being updated to recommend avoiding the use of e.g., i.e., and etc. For the 30 years that I've been editing articles at the Journal of Economic Perspectives, I've been discouraging writers from using these terms, too. Howe notes that a growing number of people are having content read to them by audio programs, which often mangle these terms. She also writes:
We promote the use of plain English on GOV.UK. We advocate simple, clear language. Terms like eg, ie and etc, while common, make reading difficult for some. Anyone who didn’t grow up speaking English may not be familiar with them. Even those with high literacy levels can be thrown if they are reading under stress or are in a hurry - like a lot of people are on the web.
Of course, now that my desire to purge plain English of Latinisms has won this small victory, my horizons are expanding. A number of economists apparently have a psychological need to write "ex ante" and "ex post," rather than using the vocabulary of commonplace words that describe sequences in either temporal or expectational terms, including "beforehand" or "expected" or "before the fact," as well as "afterwards" or "realized" or "occurred later."  And yes, these issues actually raise the blood pressure of pedants like me.

Friday, July 29, 2016

Public Pensions on Shaky Ground

The stock market run-up of the 1990s was fool's gold for many state and local pension funds. At the height of the dot-com boom, the typical pension fund had enough on hand to cover all of its expected future costs. But booms don't tend to last, and that one didn't, either. There are a couple of short recent reports that offer a useful update on the current status of public pension funds. One is the "Issue Brief" by Alicia H. Munnell and Jean-Pierre Aubry called "The Funding of State and Local Pensions: 2015-2020," published by the Center for State & Local Government Excellence in June 2016. The other, by William G. Gale and Aaron Krupkin, is called "Financing State and Local Pension Obligations: Issues and Options," and was published as a Brookings Institution Working Paper in July 2016.

The basic calculation here is to look at the assets that public pension funds have on hand, and to compare that amount with the present value of the benefits that have already been promised. To make this calculation, you need an estimate of the rate of return that will be earned by the assets currently on hand. A typical current assumption is that pension fund assets will earn an average of 7.6% per year for decades to come. We'll return to the realism of that number in a moment. But taking it as given, Munnell and Aubry calculate that the average state and local pension fund has on hand about 74% of what it needs to pay the benefits that have already been promised. (The exact percentage varies a bit under alternative accounting rules.)

Notice that when the stock market peaked right around 2000, there was a golden moment when public pension funds were fully funded. But rather than build on that moment, by assuring that the funds would remain fully funded into the future, a number of state and local governments saw this as a chance to promise higher pension benefits and to make lower contributions to pension funds. Apparently these actions were acceptable both to elected officials and to the leadership of public employee unions. And now here we are.

Of course, this overall average is a mixture of some funds that are doing better, and some doing worse. For example, here are some public pension funds that were less than 50% funded in 2015: Arizona Public Safety Personnel (49% funded), Chicago Municipal Employees (37%), Chicago Police (27%), Connecticut State Employees Retirement System (43%), Illinois State Employees Retirement System (36%), Kentucky Employee Retirement System (22%), and the Philadelphia Municipal Retirement System (44%).

And what if the pension fund assets on hand don't earn 7.6% per year? As Munnell and Aubry write: "Public pensions currently hold about 70 percent of their assets in risky investments, including more than half of their assets in equities. As discussed, on average, plans assume a nominal return of 7.6 percent on their whole portfolios, which implies nominal stock returns of 9.6 percent. In contrast, many investment firms project much lower equity returns ..."

By their calculations, if one assumes only a 6% annual rate of return on pension fund assets going forward, then the average fund is only 58% funded at present. And if one assumes that the rate of return on pension fund assets is only 4% per year going forward, then the average fund is only 45% funded at present. Of course, those pension funds with below-average funding at present would be even worse off if returns are lower than hoped for. As Munnell and Aubry write: "What happens from here on out depends very much on investment performance."
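
To see how sensitive these funded ratios are to the assumed return, here is a minimal sketch in Python. The benefit stream and asset level are hypothetical, calibrated only so that the fund is about 74% funded at the 7.6% assumption; the report's exact 58% and 45% figures depend on the actual timing of benefit payments, so this toy version shows the direction and rough magnitude rather than the precise numbers.

```python
def present_value(benefits, rate):
    """Discount a stream of future benefit payments back to today."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(benefits, start=1))

# Hypothetical promise: $100 of benefits per year for each of the next 30 years.
promised = [100.0] * 30

# Calibrate assets so the fund is ~74% funded at the 7.6% assumed return.
assets = 0.74 * present_value(promised, 0.076)

for assumed_return in (0.076, 0.06, 0.04):
    liabilities = present_value(promised, assumed_return)
    print(f"assumed return {assumed_return:.1%}: "
          f"funded ratio = {assets / liabilities:.0%}")
```

The mechanism is just discounting: a lower assumed return raises the present value of the same promised benefits, so the same pot of assets covers a smaller share of them.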

Assuming that we will not have the pleasant surprise of very high investment returns that rescue the pension funds, what policy options are possible? Gale and Krupkin walk through the choices. As they write: "The implications of the projections are unpleasant but straightforward. Governments that face significant shortfalls will have to cut employee benefits, raise employee contributions, or finance higher employer contributions with tax increases or spending cuts. The eventual changes may be modified (or hidden) by pension reform, but the basic direction of the required changes is clear."
Ultimately, the argument is over who will bear the shortfall in public pension funds. The candidates are current retirees, current government workers, previous government workers who haven't yet claimed benefits, or the public through either taxes that are higher or government services that are lower than they would otherwise be.

I don't have a good sense of what sort of deals should be cut when some public pension funds inevitably can't fulfill their promises. But I do know that a hefty dose of the blame should go to the specific state or local coalitions of elected officials, public sector unions, and voters that found it easy to promise future payments, but apparently impossible to assure that sufficient funds were put aside for those promises. In many places around the country, public pension funds have been run with a reasonably high degree of responsibility (if not always quite as prudently as I would have preferred if I were a state employee), or meaningful reforms in the direction of pension solvency have already been undertaken. But where the behavior has been imprudent, those state and local governments should have to face their own voters, and those public unions should have to face their own workers.

Thursday, July 28, 2016

Adam Smith on Human Capacity for Self-Deceit

Adam Smith offered a characteristically pungent insight on the subject of the human capacity for self-deceit in his 1759 book, The Theory of Moral Sentiments. I quote here from the always-useful 1790 edition available online at the Library of Economics and Liberty website. Here's Smith in TMS (1759 [1790], Part III, Ch. 1):
"The opinion which we entertain of our own character depends entirely on our judgments concerning our past conduct. It is so disagreeable to think ill of ourselves, that we often purposely turn away our view from those circumstances which might render that judgment unfavourable. He is a bold surgeon, they say, whose hand does not tremble when he performs an operation upon his own person; and he is often equally bold who does not hesitate to pull off the mysterious veil of self-delusion, which covers from his view the deformities of his own conduct. Rather than see our own behaviour under so disagreeable an aspect, we too often, foolishly and weakly, endeavour to exasperate anew those unjust passions which had formerly misled us; we endeavour by artifice to awaken our old hatreds, and irritate afresh our almost forgotten resentments: we even exert ourselves for this miserable purpose, and thus persevere in injustice, merely because we once were unjust, and because we are ashamed and afraid to see that we were so. … 
"This self-deceit, this fatal weakness of mankind, is the source of half the disorders of human life. If we saw ourselves in the light in which others see us, or in which they would see us if they knew all, a reformation would generally be unavoidable. We could not otherwise endure the sight."
It may be that a modest degree of self-deceit about our own capabilities and appearance helps many of us to get up in the morning and face the day. But self-deceptions have an unpleasant habit of colliding with reality, sooner or later. One hopes that those collisions with reality can be gentle, and that they can be an opportunity for what Smith called "a reformation" and what we now label as a "personal growth opportunity." But it's easy to think of situations where people become so highly invested in self-deception about their own conduct that, when the collision with reality occurs, they push back with anger and counter-accusations and retreat further into their self-deceit, rather than engaging in self-examination. Indeed, I offer as a hypothesis that 21st-century culture may in various ways encourage self-deceit over self-examination.

Wednesday, July 27, 2016

International Trade as a Scapegoat

The ferocity of some of the arguments in the US over global trade can be a little surprising to me. After all, the US economy with its enormous internal market is considerably less exposed to international trade than the world average. Here's a figure generated from the World Bank website showing imports/GDP for the world economy as a whole, and for the US economy. For the world economy on average, the import/GDP ratio is approaching 30%; for the US economy, the import/GDP ratio is higher than it once was, but still only about half the global level.

Moreover, in a Gallup poll earlier this year, a strong majority of Americans were more likely to perceive foreign trade as an opportunity for growth than as a threat, and the pro-trade majority has been rising since 2008. I do need to add in passing that pretty much all economists would view the specific Gallup question, which assumes that exports benefit the US economy and imports threaten it, as a fundamentally wrong-headed view of why an economy benefits from trade. But there are similar pro-trade majorities in other polls, like a recent NBC News/Wall Street Journal poll.



Doug Irwin offers a useful overview of the pro-trade position in his essay in the July/August 2016 issue of Foreign Affairs, "The Truth About Trade: What Critics Get Wrong About the Global Economy." He writes: "By and large, the United States has no major difficulties with respect to trade, nor does it suffer from problems that could be solved by trade barriers. What it does face, however, is a much larger problem, one that lies at the root of anxieties over trade: the economic ladder that allowed previous generations of lower-skilled Americans to reach the middle class is broken."

My usual way of making this point is to argue that international trade, and in particular arguments over the details of trade agreements, is an easy scapegoat for more profound and harder-to-tackle economic dislocations. I'd be delighted if America's economic issues could be resolved by, say, renegotiating or just not signing some trade agreement. But I don't believe it.

Irwin works through many of the concerns raised about international trade, like the mistaken belief that the US trade deficit was a main cause of job loss during the Great Recession. He writes:
In fact, the trade deficit usually increases when the economy is growing and creating jobs and decreases when it is contracting and losing jobs. The U.S. current account deficit shrank from 5.8 percent of GDP in 2006 to 2.7 percent in 2009, but that didn’t stop the economy from hemorrhaging jobs. And if there is any doubt that a current account surplus is no economic panacea, one need only look at Japan, which has endured three decades of economic stagnation despite running consistent current account surpluses.
But to me, Irwin's key point is that the big underlying issue disrupting the US economy is technological change. The changes in communication, logistics, transportation, and information processing are dramatically altering the US economy all by themselves. Indeed, developments in technology are a large part of what makes global supply chains viable in the first place. International trade is part of the picture, too, but competition from robots and computers is ultimately a bigger disruptor than competition from workers in China or India. Irwin writes:
"Although imports have put some people out of work, trade is far from the most important factor behind the loss of manufacturing jobs. The main culprit is technology. Auto­mation and other technologies have enabled vast productivity and efficiency improvements, but they have also made many blue-collar jobs obsolete. One representative study, by the Center for Business and Economic Research at Ball State University, found that pro­ductivity growth accounted for more than 85 percent of the job loss in manufacturing between 2000 and 2010, a period when employment in that sector fell by 5.6 million. Just 13 percent of the overall job loss resulted from trade, although in two sectors, apparel and furniture, it accounted for 40 percent.
"Although the United States boasts a highly skilled work force and a solid technological base, it is still the case that only one in three American adults has a college education. In past decades, the two-thirds of Americans with no postsecondary degree often found work in manufacturing, construction, or the armed forces. ... Over time, however, these opportunities have disappeared. Technology has shrunk manufacturing as a source of large-scale employ­ment: even though U.S. manufacturing output continues to grow, it does so with many fewer workers than in the past. Construction work has not recovered from the bursting of the housing bubble. And the military turns away 80 percent of applicants due to stringent fitness and intelligence requirements. There are no comparable sectors of the economy that can employ large numbers of high-school-educated workers.
"This is a deep problem for American society. The unemployment rate for college-educated workers is 2.4 percent, but it is more than 7.4 percent for those without a high school diploma—and even higher when counting discouraged workers who have left the labor force but wish to work. These are the people who have been left behind in the twenty-first-century economy—again, not primarily because of trade but because of structural changes in the economy. Helping these workers and ensuring that the economy delivers benefits to everyone should rank as urgent priorities. But here is where the focus on trade is a diversion. Since trade is not the underlying problem in terms of job loss, neither is protectionism a solution."
I confess that I have my "stop the world, I want to get off" moments. But the rest of the world economy isn't going to stop. Whether or not the disruptive effects of technology continue to move ahead in the US, technology is going to be developed and adopted elsewhere. If the US decides to take political actions to reduce its exposure to foreign trade, other countries are going to keep signing trade agreements and building global supply chains. There are real challenges in how to create ladders of opportunity for successful careers--not just jobs paid by the hour--for workers across the spectrum of education and skills. And there are legitimate policy disputes concerning the fine print of what's in various trade agreements. But backing away from technology and the global economy is not a successful path to prosperity.

Tuesday, July 26, 2016

High-Skilled Immigration

On one side, there seems to be near-universal agreement that the US economy would benefit from workers with higher skill levels. But if the rising skill levels are generated by the immigration of high-skilled workers, this consensus can become wobbly. The National Academy of Sciences offers a useful overview of these issues in Immigration Policy and the Search for Skilled Workers: Summary of a Workshop, published late in 2015. As the title implies, this report is a description of a conference, and most of it consists of the rapporteurs, Gail Cohen, Aqila Coulthurst, and Joe Alper, paraphrasing presentations made at the conference.

High-skilled immigration is tied both to education and to the labor market: if a country like the United States welcomes foreign students to American colleges and universities, as undergraduates, graduate students, and faculty, there will inevitably be more situations where US-based companies want to hire this foreign-born but geographically available talent. Here are a couple of illustrative figures from a presentation by Lindsay Lowell. The left-hand panel shows that the US attracts by far the largest number of international students in absolute terms. The right-hand panel shows that when focusing just on science, technology, engineering, and mathematics students, the US is still near the top in the percentage of those students who are international.
One result of this influx of foreign talent is that the enormous US economy, shown by the red dotted line below, is among the world leaders in the share of its workforce who fall into the broad job category of "researchers"-- which is presumably a good thing in the coming knowledge economy.

Richard Freeman described this education-to-employment connection for technology-based skills in his presentation, paraphrased like this:

The U.S. National Science Foundation estimates that 63 percent of all post-doctoral STEM students working in U.S. universities are international students, and that 49 percent of international post-doctoral fellows received their PhDs in the United States. There has been a corresponding increase in the number of scientific papers coming from U.S. laboratories that have Chinese co-authors or coauthors from other emerging economies. These international students are not merely getting an education in the United States—they are also becoming U.S. STEM workers after graduation. In 2005, over a third of all STEM workers with PhDs were foreign born, with 64 percent receiving their PhD from U.S. universities. Over a quarter of U.S. STEM workers with Master’s degrees were born in another country and 15 percent of foreign-born STEM workers with Master’s degrees received that degree in the United States. According to a different dataset, the percentage of foreign-born workers in U.S. STEM jobs increased from 11 percent to 19 percent between 1990 and 2011 for those with Bachelor’s degrees, from 19 percent to 34.3 percent for those with Master’s degrees, and from 24 percent to 43 percent for PhDs.
Lowell noted that after STEM fields, business was the next most-popular field for high-skill immigrants. Here's a paraphrase: "After STEM fields, business was the most popular subject of study for international students in the United States during that period. The impact of a large number of business students may be substantial on growth, because it is often the business majors who take advantage of ideas and bring them to market ..."

The economics of immigration involves evaluating a set of tradeoffs. Do immigrants help the economy to grow, for example by allowing native workers in the economy to specialize in ways that potentially raise productivity and wages for everyone? Or do immigrants only compete for existing jobs in a way that reduces job prospects and standard of living for native workers? As William Kerr pointed out in his presentation, there are historical examples of each. A study of the chemists who fled Nazi Germany for the United States suggests that they helped the US chemical industry to grow substantially. A study of the wave of Russian mathematicians who came to the US in the 1990s suggests wages and job opportunities for native-born US mathematicians were reduced as a result.

When looking only at high-skill immigration, it seems clearly beneficial to an economy to have immigrants who are also gifted entrepreneurs, building companies that provide jobs and secure high-wage employment. Moreover, there seem to be what economists call "agglomeration effects" in technology: when a group of people with interrelated technical skills all come together in one place, there can be an ongoing growth of innovation and production that exceeds what this group would have accomplished if they were dispersed. To put this in concrete terms, it's a good thing for the US economy that the Silicon Valley agglomeration, which relies heavily on an influx of technical and business talent from all around the world, is located in this country.

The less clear-cut case involves what might be called undistinguished high-skill immigrants--that is, someone who is at best an average computer programmer or laboratory researcher. By definition, the undistinguished are less likely to create companies or be a key ingredient in an agglomeration. However, they may well compete with average native high-skill workers for jobs and wages. But here as well, the question is whether high-skilled immigrants may in some ways be complementary with high-skilled native labor.

A lot of the NAS report considers public policies from different countries about high-skilled immigration. The US stands out as a country that has not been especially encouraging to high-skilled immigration, but seems to get a disproportionate share of those immigrants nonetheless. As the report points out, in the United States, about 70% of immigration is family related, another 15% is humanitarian, and the remaining 15% is employment-based (which includes temporary high-skilled immigrants). In Canada and Australia, by contrast, about 30-40% of immigration is family-based or humanitarian, and the remaining 60-70% is employment-based. But as Lowell noted (according to this paraphrase), the US still does very well in the global contest for talent:
"Another indication of how well the United States is competing for international STEM workers comes from data on the number of high-skilled foreign-born workers in the 20 leading destination nations. From 1980 to 2010, the percentage of high-skilled migrants living in the United States relative to the other top destinations rose from 46 percent to 49 percent, even as the total number rose by more than four-fold. Similarly, data from the World Intellectual Property Organization showed that from 2001 to 2010, the flow of inventors around the world was dominated by flow into the United States, while OECD data shows that the United States remains the main destination for international
authors of scientific papers."
Pia Orrenius made the point that while the US immigration system is not especially welcoming to high-skill immigrants, the US makes up for it in other ways. Here's a paraphrase:
Immigration policy is just one tool of many that can result in a better, more qualified, nimble and innovative workforce. Luckily for the United States, the nation does well in other areas—the quality of our institutions of higher education, the salaries that U.S. employers pay, the flexible labor markets with many job opportunities, and the relative ease with which foreign workers integrate in the U.S. workforce, among others—that enable the country to be competitive in the international market for high-skilled workers.
In the past, policy arguments over high-skilled immigration have often been jumbled together with overall arguments about comprehensive immigration reform, but the issues raised are not the same. Higher education is expanding dramatically around the world, emerging-market economies are growing more rapidly than the world average, and the global talent pool is expanding quickly, too. Competition for where these workers choose to locate will be real and ongoing. But in the 21st-century global economy, only some of these high-skill workers will be planning to immigrate permanently. Many others will be seeking to make connections and build experience, and then to move elsewhere. In this sense, the policy issues of high-skilled immigration are often not about permanent migration, but instead are about flexibility of work arrangements and geographic locations in an interconnected world.

At the NAS conference, Madeleine Sumption offered the intriguing thought that the US system of enticing high-skill immigrants through a mixture of educational and business opportunities, along with temporary work visas, may be broadly the right approach for talent in the global economy. But in her view, the existing US approaches to high-skill migrants need an overhaul with a big dose of additional flexibility. Sumption said: "The U.S. has the right model, it is just falling apart. ... We need to fix that model rather than think of something totally new.”

Thursday, July 21, 2016

An Update on Costs of End-of-Life Care

For those interested in the health care costs of end-of-life care, Medicare data are the obvious place to look. "Of the 2.6 million people who died in the U.S. in 2014, 2.1 million, or eight out of 10, were people on Medicare, making Medicare the largest insurer of medical care provided at the end of life. Spending on Medicare beneficiaries in their last year of life accounts for about 25% of total Medicare spending on beneficiaries age 65 or older." Juliette Cubanski, Tricia Neuman, Shannon Griffin, and Anthony Damico make this point at the start of their short "data note" entitled "Medicare Spending at the End of Life: A Snapshot of Beneficiaries Who Died in 2014 and the Cost of Their Care" (July 2016, published by the Kaiser Family Foundation).

Average health care spending for Medicare recipients who died in 2014 was $34,529, nearly four times as high as the average Medicare spending of $9,121 for those who didn't die. This general pattern isn't surprising: after all, those who die often tend to have health issues beforehand.  The detailed data shows that the biggest part of this cost difference is driven by higher spending for in-patient care in hospitals for those who died in 2014. What's interesting to me is that the share of Medicare spending going to those who die in that year seems to be diminishing.

Figure 3: The share of total traditional Medicare spending on traditional Medicare beneficiaries who died at some point in the year has declined over time

What explains this shift? The report lists these causes:
"In addition, we find that total spending on people who die in a given year accounts for a relatively small and declining share of traditional Medicare spending. This reduction is likely due to a combination of factors, including: growth in the number of traditional Medicare beneficiaries overall as the baby boom generation ages on to Medicare, which means a younger, healthier beneficiary population, on average; gains in life expectancy, which means beneficiaries are living longer and dying at older ages; lower average per capita spending on older decedents compared to younger decedents; slower growth in the rate of annual per capita spending for decedents than survivors, and a slight decline between 2000 and 2014 in the share of beneficiaries in traditional Medicare who died at some point in each year."
(A couple of notes here: 1) The graph and all the data here refer to "traditional Medicare," which is the two-thirds of Medicare recipients who are not in "Medicare Advantage" plans. In traditional Medicare, the government pays health care providers on a fee-for-service basis, and thus has good data on what the costs were for services each year. In Medicare Advantage, Medicare makes monthly insurance-like payments to a managed care organization--like a health maintenance organization--and so the government does not have readily available data on the costs of the actual health care provided at any given time. 2) The 13.5% in the graph above for 2014 doesn't match the 25% at the top. The difference is that this figure looks at the health care costs incurred in 2014 for those who died in 2014. The 25% figure refers to health care costs incurred in the 12 months before death--which usually reaches back into the previous year. For looking at trends, either approach can work fine, but plotting data for costs in the 12 months before death and comparing it to other spending in the same time interval is a more complicated task, and the official data are organized on an annual basis, so that's what is reported here.)
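
For readers who like to see the bookkeeping, here's a small Python sketch of the two accounting choices, with made-up beneficiary records; this is not the Kaiser methodology, just an illustration of why the trailing-12-months measure reaches back into the prior year.

```python
# Hypothetical records: (death (year, month), {(year, month): spending}).
records = [
    ((2014, 3), {(2013, 11): 800, (2014, 1): 2000,
                 (2014, 2): 9000, (2014, 3): 20000}),
    ((2014, 12), {(2014, m): 1500 for m in range(1, 13)}),
]

def month_index(year, month):
    """Map a (year, month) pair to a running month count."""
    return year * 12 + (month - 1)

def calendar_year_decedent_cost(records, year):
    """Costs incurred during `year` by those who died in `year` (the 13.5% basis)."""
    return sum(spend
               for (death_year, _), history in records if death_year == year
               for (y, _), spend in history.items() if y == year)

def last_12_months_cost(records, year):
    """Costs in the 12 months up to death, reaching into the prior year (the 25% basis)."""
    total = 0
    for (death_year, death_month), history in records:
        if death_year != year:
            continue
        end = month_index(death_year, death_month)
        total += sum(spend for (y, m), spend in history.items()
                     if end - 11 <= month_index(y, m) <= end)
    return total

print(calendar_year_decedent_cost(records, 2014))  # 49000: excludes the Nov 2013 bill
print(last_12_months_cost(records, 2014))          # 49800: includes it
```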

A misconception which seems popular, at least based on the kind of questions I hear, is that end-of-life spending is especially high for the very elderly. That doesn't seem to be true. This figure shows spending for those who died in 2014 by age: for example, Medicare spending on 65-year-olds who died in 2014 averaged $38,840, while for those over age 100 it averaged $14,985. Conversely, Medicare spending on those who survived 2014 tends to rise with age.
Figure 9: Medicare per capita spending for decedents over age 65 declined with age in 2014, while spending for survivors increased
This pattern seems like a positive one to me, in the sense that I suspect there is more that health care can do for the average person who is 65 or 70, compared to the average person who is 100 or 105. A more detailed breakdown of this data shows that when just looking at health care costs of those who died in 2014 by age, those who were in their late 60s had much higher expenditures on in-patient hospital costs (the orange bars), while the older age groups tended to have higher spending on hospice or skilled-nursing facility care. 
Figure 10: Medicare spending declined with age for decedents over age 65 in 2014, mainly due to lower inpatient hospital spending

End-of-life patients do tend to be high-cost patients, and in general terms, that pattern seems appropriate. But I've written before that a main goal for end-of-life care, shared both by many patients and health care professionals, is to make greater use of hospice, skilled-nursing, and at-home care at the end of life, rather than intensive care units in a hospital setting. The evidence shows that over time, the costs of end-of-life care are a diminishing share of US health care spending, which is consistent with the belief that a shift toward greater use of hospice and other options at the end of life is gradually underway.

Wednesday, July 20, 2016

Public Higher Ed: State Support Down, Tuition Up

State and local financial support for higher education is falling, and the share of costs covered by student tuition is rising. Perhaps not coincidentally, the number of students enrolled in US public higher education has fallen in the last few years. That's the evidence from the State Higher Education Executive Officers Association in its annual report, State Higher Education Finance 2015, released in April.

The report notes: "In 2015, states invested $81.8 billion in higher education ... Local governments invested $9.1 billion from property tax revenue in 2015 primarily for local district community colleges." Here are some estimates over the last quarter-century, from 1990 to 2015, about the contributions of state and local spending on a per-student basis. The dollar figures are adjusted for inflation, so back in the early 1990s state and local spending on higher education was about $8,500 per student, but from 2011-2015 (despite a bump up in the last few years) it has been under $7,000 per student. Meanwhile, average tuition paid per student has more than doubled, from less than $3,000 back in the early 1990s to over $6,000 in 2015.

Putting those two trends together, it's no surprise that the share of public higher education spending covered by tuition is rising. Indeed, this figure shows that the share has nearly doubled, from 25% back in 1990 to approaching 50% at present. The report discerns a pattern here: "Net tuition revenue per student tends to increase most rapidly during periods of recession, shifting more of the cost of higher education to students and families. ... During economic recessions, student share increases quickly and a new level is established during periods of recovery. Traditionally, the student share has not declined significantly as state and local funding has been restored. It is likely that student share will surpass 50 percent during the next economic downturn."
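
As a back-of-the-envelope check on that trend, here's a short Python calculation using the approximate per-student figures cited above; the exact values are my rounding of the report's charts.

```python
# Rounded, inflation-adjusted per-student figures from the report's charts.
for year, state_local, tuition in [(1990, 8500, 3000), (2015, 7000, 6000)]:
    share = tuition / (tuition + state_local)
    print(f"{year}: tuition covers {share:.0%} of per-student revenue")
# 1990: 26%, 2015: 46% -- consistent with the move from 25% toward 50%.
```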

Of course, rising student loans have helped students to pay the higher tuition during the past 25 years. But student loans now total $1.3 trillion, and the dip in higher education enrollments in the last few years (shown by the red line in the first figure) suggests that ever-higher tuition and loans are not the way to expand the effectiveness and enrollments in higher education.


Tuesday, July 19, 2016

How Well Does GDP Measure the Digital Economy?

Digital technologies aren't just changing the way existing companies communicate and keep records; they are creating new kinds of companies (think Uber, AirBnB, or Amazon) and products (think "free" products like email and web search, or an app like Pokemon Go). Can the old-style methods of measuring GDP keep up? Nadim Ahmad and Paul Schreyer of the OECD tackle this question in "Are GDP and Productivity Measures Up to the Challenges of the Digital Economy?" which appears in the Spring 2016 issue of the International Productivity Monitor, which in turn is published by the Ontario-based Centre for the Study of Living Standards. Perhaps a little surprisingly, their overall message is upbeat. Here's the abstract:
"Recent years have seen a rapid emergence of disruptive technologies with new forms of
intermediation, service provision and consumption, with digitalization being a common
characteristic. These include new platforms that facilitate peer-to-peer transactions, such
as AirBnB and Uber, new activities such as crowd sourcing, a growing category of the
‘occasional self-employed’ and prevalence of ‘free’ media services, funded by advertising and ‘Big data’. Against a backdrop of slowing rates of measured productivity growth, this has raised questions about the conceptual basis of GDP, and whether current compilation methods are adequate. This article frames the discussion under an umbrella of the Digitalized Economy, covering also statistical challenges where digitalization is a
complicating feature such as the measurement of international transactions and knowledgebased assets. It delineates between conceptual and compilation issues and highlights areas where further investigations are merited. The overall conclusion is that, on balance, the accounting framework for GDP looks to be up to the challenges posed by digitalization. Many practical measurement issues remain, however, in particular concerning price changes and where digitalization meets internationalization."
The article employs a refreshingly down-to-earth strategy: it discusses, one by one, certain kinds of transactions in the digital economy, how the digital economy has altered (or in some cases created) these transactions, and how well they are captured in GDP.

For example, one set of digital economy activities is what the authors call "intermediation of peer-to-peer services," which is hooking buyers up to sellers through Uber, AirBnB, eBay, new ways of getting loans, and others. The quantity and value of these kinds of web-based transactions has surely risen. But by and large, the value of these transactions is captured pretty well through the records of the companies involved. In these areas, one could argue that the underlying economic activities might be better captured as part of the digital economy than they were before. In the past, activities like unlicensed or off-the-meter cab driving, informal off-the-books rentals, and garage sales were not well-captured in official economic statistics.

Sure, some tricky issues do arise here. For example, if I use my car as an Uber driver, then my car is no longer solely in the economic category of "durable goods consumption," and now is also in part a form of "business investment." But it is also true that people who work from home, in one form or another, have been mixing the "consumption" and "investment" categories for quite some time now.

A different set of issues arises when thinking about how the digital economy has enabled consumers to take over certain tasks previously performed by producers. Here's their explanation:
Perhaps the best example is the use of internet search engines or travel websites to book flights and holidays, previously the preserve of a dedicated travel agent. But there are many other examples that merit consideration under this broad umbrella where market production blurs with non-market activity: self-check in at airports, self-service at supermarkets, cash withdrawal machines and on-line banking to name but a few. These innovations have all helped to transform the way consumers engage with businesses and brought with them associated benefits but they also involve greater participation on the part of consumers, and indeed involvement in activities that used to be part of the production process. Because the involvement of the consumer displaces traditional activity, the question is whether this increased ‘displacing’ participation should be included in GDP, one of the main arguments being that GDP would be higher, for example, when a travel agent acts as an intermediary to conduct the search compared to when the individual conducts the search his/herself.
But of course, this issue isn't new either. GDP has always been about what is actually bought and sold in the economy, not about what might have been bought and sold. There are lots of goods and services for which households have some degree of choice between making or buying: cooking, cleaning, child-care, assembly (say, of new furniture), home maintenance or decorating, transportation, various leisure activities, and others. The authors argue that in this broader context, "the scale of ‘digitalized’ participation activities is likely to be significantly less than those for other non-market services outside the production boundary." The usual approach to these activities for government statisticians is to set up "satellite" accounts in addition to GDP that offer estimates of their value, without actually adding them to GDP.

Some of the hardest issues arise in the areas of digitally-based consumer products that are free or subsidized to the consumer, like email, web-search, computer storage space, free software for computers, free apps for smartphones and tablets, and much more. Ahmad and Schreyer point out that "it is important to note that the provision of free services by corporations to households is not a new phenomenon. Households have long been accustomed, for example, to receiving free media services (television and radio) financed implicitly via advertising." Historically, what you pay for a daily newspaper has mostly covered the delivery costs, while the cost of news-gathering and production was supported by advertising. In addition, it has been a fairly standard marketing approach in the past to give away a good or service at a free or reduced price, and in that way to try to encourage buyers to spend more afterwards.

Of course, some puzzles arise here for GDP statisticians. For example, one view is that "the value of the free service provided to the consumer can be equated with the value of the corresponding advertising services." Another view considers "the time spent by households watching advertisements as an act of production, for which they are paid by the advertising firm, and in turn pay for the (previously free) services to the service provider." Various complexities arise here, but the differences in thinking about advertising-supported services are not fundamental in nature.

However, greater complications arise when part of the tradeoff for "free" digital services involves information. As the authors point out, the advertising approach to measuring GDP can be applied here, but it's a bit of a conceptual stretch. After all, advertising can be linked in a fairly direct way to the number of eyeballs or clicks, but the contribution that additional information makes in building up an overall database is harder to value:
"The second avenue for the financing of free digital products is collecting and commercially exploiting the vast amounts of data generated by users of digital products. In many ways, this financing model resembles the advertising model: there is an implicit transaction between consumers (who provide data) and producers (who provide digital services for ‘free’ in return). A third party may or may not be involved. Economically speaking, the service provider finances its free services by building up a digital asset (volumes of data) that is subsequently used in the production of data services. ... However (unlike the advertising model) the analogy is slightly more complicated here as there is no obvious proxy to establish the value of the services provided for free."
I won't try to do justice to their entire argument here, but a few other points are worth mentioning. There is a problem of valuing digital public goods, like Wikipedia or Linux. With conventional GDP, it's also difficult to value the 8 billion hours or so of volunteer time that Americans donate each year for other purposes. It seems clear that the value of "knowledge-based assets" is rising in companies, and for workers as well, and measuring the production and consumption of these assets is very hard. Digital transactions that cross international borders may cause ever-greater problems for GDP measurement, as well.

What seem to me the biggest challenges here are some classic issues for GDP statisticians that involve quality. Just to be clear, these issues of quality and variety aren't brand new in the digital economy. Even when just looking at goods, the many gradual improvements in quality can be very hard to capture. When thinking about services, the problem gets worse. When thinking about the cost of a "unit" of health care services, or a "unit" of banking services, or a "unit" of legal services, it's quite hard to think about what the "unit" should be. In health care, for example, a day in a hospital, or a specific procedure like a colonoscopy, are quite different in their qualities now than they were a decade or two ago. Having dozens or hundreds of TV channels available is different in quality than having only a few channels, just as the continual expansion of what is freely available on-line makes use of the internet a different quality experience.

The problems of measuring quality play out in a number of ways. When measuring output, an improvement in quality should be viewed as a gain in actual real output, but it's not clear that the actual value of what is bought and sold captures that rise in value. An underlying problem here is that when it is hard to measure quality, it is also hard to measure prices and inflation. For example, the price of a day spent in a hospital room has risen dramatically over time. Presumably, some part of this increase is due to the higher quality of the service being provided, so it should count as a rise in output. Indeed, perhaps the rise in the cost of a hospital room doesn't capture all of the rise in quality--so the rise in true output is actually bigger than the cost. Or perhaps some of the rise in the cost of the hospital room is just inflation. Economic researchers can make a career of delving into these kinds of issues, and the digital economy means that all the old questions need to be considered in a new context.
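
Here's a minimal sketch, in Python with made-up numbers, of the decomposition at stake; the 30% quality gain is exactly the kind of judgment call that statisticians find so hard to make.

```python
# Hypothetical numbers: the observed price of a hospital day rose 50%,
# and statisticians judge the quality of that day's care to be 30% higher.
observed_price_growth = 0.50
estimated_quality_gain = 0.30  # the hard-to-measure assumption

# Quality-adjusted ("pure") inflation is what remains after deflating the
# observed price increase by the quality improvement.
pure_inflation = (1 + observed_price_growth) / (1 + estimated_quality_gain) - 1
print(f"pure inflation: {pure_inflation:.1%}")  # ~15.4%, not 50%

# If the quality gain were judged larger than the price rise, measured
# inflation would be negative and true real output would exceed spending.
```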

However, there's one line which shouldn't be crossed. One sometimes hears the argument that the digital economy is understated in the GDP statistics because they don't measure the welfare or pleasure that people receive from various digital goods and services. But GDP is a measure of final goods and services bought and sold. GDP isn't welfare. It never has been welfare. To be sure, a high or rising GDP is often correlated with many positive aspects of life for everyday people. But from the birth of the concept of GDP up to the present, no serious economist has ever argued that GDP is equal to welfare. Ahmad and Schreyer write:
"[I]t is clear that consumer valuation should not attempt to measure total consumer welfare arising from the use of free digital products, just as the value of traditional market products is not a measure of consumer welfare. Measures of the total value of consumer welfare such as consumer surplus are at odds with the conceptual basis of measuring GDP and income, let alone any welfare measure that goes beyond consumption and encompasses quality-of-life dimensions. There is no question about the importance of such measures ... However, measuring production and income is a different objective from measuring welfare." 

Friday, July 15, 2016

The Collapse of California's Carbon Cap-and-Trade Market

Back in 2006, the state of California enacted a law to establish a cap-and-trade market for carbon emissions. The market covers about 85% of state carbon emissions. The broad idea was that the state would use a mixture of regulatory rules and the carbon market to cut emissions. Moreover, the state would raise money through regular auctions of "allowances" to emit carbon. But in early 2016, the price of carbon allowances being bought and sold in the secondary market fell below the minimum "price floor" that the state of California would charge for these allowances. Because it was cheaper to buy allowances in the secondary market from those who already owned them than from the state, 90% of the available carbon allowances went unsold in the May 2016 auction, and California received about $880 million less than expected.
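
The auction arithmetic can be sketched in a few lines of Python. This is a stylized model, not the actual California Air Resources Board auction rules, and all the numbers below (including the price floor) are illustrative.

```python
def auction_sales(demand, secondary_supply, secondary_price, price_floor, offered):
    """Allowances sold at the state auction under a hard price floor (stylized)."""
    if secondary_price < price_floor:
        # Compliance buyers exhaust the cheaper secondary market first.
        residual_demand = max(0, demand - secondary_supply)
    else:
        residual_demand = demand  # the auction is the cheaper (or equal) source
    return min(residual_demand, offered)

# Illustrative conditions: plenty of allowances already in private hands,
# and a secondary-market price just below the auction floor.
sold = auction_sales(demand=60, secondary_supply=55,
                     secondary_price=12.60, price_floor=12.73, offered=65)
print(f"auction sells {sold} of 65 offered")  # 5 of 65 -- most go unsold
```

Once the secondary price slips below the floor, the auction only picks up whatever demand the secondary market cannot satisfy, which is why the May 2016 sale collapsed so abruptly.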

Danny Cullenward and Andy Coghlan describe what happened in "Structural oversupply and credibility in California’s carbon market," which appears in The Electricity Journal (June 2016, 29, pp. 7-14). (The article isn't freely available on-line, but many readers can probably obtain access through library subscriptions.) But the broader issue here goes beyond the California auctions and should be a concern to anyone who advocates a cap-and-trade approach to reducing carbon emissions. A couple of years ago, prices for carbon allowances in the European Union carbon trading market also dropped dramatically. Is what went wrong in the California cap-and-trade market for carbon emissions related to what happened in the EU--and do these experiences point to an underlying problem with this approach?

Here are a couple of figures from Cullenward and Coghlan about the recent California experience. The first figure shows how many carbon allowances remained unsold after each auction since 2011. The solid line shows unsold allowances for current carbon emissions. The auction also sold allowances for emissions up to three years in the future, which are shown with the lighter bars. You can see that in the past, most of the allowances were sold, but the pattern changes in early 2016.

This figure shows prices of carbon allowances in California. The darker line shows the price in the secondary market for buying carbon allowances in California. The shaded area shows the price floor for the state auctions--that is, the state would not accept a lower price than this level. The price in the secondary market had been hovering just above the state price floor for a couple of years, and then in 2016, buying carbon allowances in the secondary market became cheaper than the price floor in the state auction--which is why so many carbon allowances went unsold in the May 2016 auction. The authors describe it this way:
"The history of California’s carbon market can be separated into four phases: (a) an initial speculative trading period prior to the first quarterly auction; (b) an intermediate phase following the launch during which the market regulator informally indicated its goal of relaxing cross-border resource shuffling regulations; (c) a period of stability following the formal adoption of resource shuffling reforms, with secondary prices stable at a small transaction cost above the auction price floor; and (d) a new phase in which government-run auctions fail to sell all available allowances and secondary market prices fall below the auction price floor."
Cullenward and Coghlan point to several main issues for the California carbon market. One issue is that California wasn't just relying on the carbon market to reduce emissions: instead, the state was also enacting an array of standard regulatory rules to reduce carbon emissions. I had not known that even back before the market started, the official plan was for regulatory steps to account for 80% of the reduction in carbon emissions, and the carbon market for only 20%. As the authors write: "As a result of these design choices, the carbon market’s role in driving climate mitigation and ensuring the economic efficiency across sectors is far less significant than at first it might appear."

In addition, there are always issues that arise in the fine print of how carbon emissions are happening and what counts as a reduction in carbon emissions. In California, one of the issues involved electricity companies that received energy generated from out of state. When the 2006 law went into effect and the carbon market was looming in the near future, many California utilities started signing contracts with their out-of-state suppliers specifying that they were not buying the electricity from coal-burning generators, but only the electricity from natural gas-burning generators, hydroelectric power, and wind or solar. This "resource shuffling" meant that the California utilities could be legally credited with lower carbon emissions, although the actual way that electricity was generated didn't change. The state of California tried to pass various rules to limit this kind of resource shuffling.

Given these issues, a prominent group of California energy economists had forecast back in 2014 that, as Cullenward and Coghlan put it, "the most likely market outcome was a persistent condition in which the supply of compliance instruments (including both allowances and CARB-approved [California Air Resources Board] carbon offset credits) would exceed market demand."

One final kicker is that the 2006 legislation had an end-date of 2020, and without new legislation, California apparently cannot plan for its carbon market to exist after that date. Thus, carbon emitters in California only need to figure out if they are likely to have sufficient allowances to make it through to 2020--and it certainly appears that plenty are available.

The issues with the European Union cap-and-trade market (which I discuss here) are different in details, but broadly similar. If you have a strict regulatory regime which is tamping down carbon emissions, demand will fall for carbon allowances in the market. If you allow carbon emitters to reduce their emissions with various kinds of offsets that involve signing contracts about what will happen in other places, demand will fall for carbon allowances in the market. If the legal and institutional future of the carbon market looks uncertain a few years off in the future, demand will fall for carbon allowances in the market. If a carbon cap-and-trade market is going to function as a way of reducing carbon emissions, the ability of legislators and regulators to manage these kinds of issues needs to be taken into account.

Thursday, July 14, 2016

US Financial Literacy: Distressing and Disempowering

Over the years, I've had disheartening conversations with a number of college students and recent graduates about their personal finances. The main problem isn't student loans. Instead, it's that they managed to run up an extraordinary amount of credit card debt while still in school, and sometimes also managed to borrow funds (or sign a lease) for a car that was far nicer than they needed. One student, a few years back, had received a check from his parents for next semester's tuition a couple of months in advance, and wanted my advice on how to make big profits in the stock market in two months. Many more stories like these are probably lurking in the background of the statistics collected by the FINRA Investor Education Foundation in its triennial survey of 25,000 US adults. The results of this National Financial Capability Study have just been published in the report, Financial Capability in the United States 2016.

The tail end of the survey includes a six-question multiple-choice financial literacy quiz. You can take the quiz on-line here, if you prefer, but here are the questions and choices. I won't bother giving answers here, but I'll note that the percentage of Americans able to answer four of the first five questions correctly has fallen from 42% when these questions were asked in 2009 to 37% by 2015.

Suppose you had $100 in a savings account and the interest rate was 2% per year. After 5 years, how much do you think you would have in the account if you left the money to grow? 
  • More than $102
  • Exactly $102 
  • Less than $102 
  • Don’t know 
  • Prefer not to say
Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After 1 year, how much would you be able to buy with the money in this account? 
  • More than today
  • Exactly the same
  • Less than today 
  • Don’t know 
  • Prefer not to say
If interest rates rise, what will typically happen to bond prices?
  • They will rise 
  • They will fall
  • They will stay the same
  • There is no relationship between bond prices and the interest rate
  • Don’t know
  • Prefer not to say
Suppose you owe $1,000 on a loan and the interest rate you are charged is 20% per year compounded annually. If you didn’t pay anything off, at this interest rate, how many years would it take for the amount you owe to double? 
  • Less than 2 years
  • At least 2 years but less than 5 years
  • At least 5 years but less than 10 years
  • At least 10 years
  • Don’t know
  • Prefer not to say 
A 15-year mortgage typically requires higher monthly payments than a 30-year mortgage, but the total interest paid over the life of the loan will be less. 
  • True
  • False
  • Don’t know 
  • Prefer not to say 
Buying a single company’s stock usually provides a safer return than a stock mutual fund.
  • True 
  • False
  • Don’t know
  • Prefer not to say
I'll readily confess that if I were drawing up a short financial literacy test, I might use some different questions. It's not obvious to me that knowing about interest rates and bond prices is important for the average person, for example. But that said, my guess is that any plausible set of questions you draw up will give results similar to these.

But more distressing than answers to quiz questions are the answers that people give to questions about their own personal financial situation. For example:

Only 39% of those surveyed say that they have tried to figure out their retirement saving needs. Only 30% report having some form of non-retirement investments. Even in 2015, 9% of the survey respondents say that what they owe on their home mortgage is more than the current value of the home. When it comes to credit cards, 77% have at least one, 26% have four or more, and only about half pay their bill in full every month. Among those with a student loan, 28% report that they did not complete the education for which the loan was taken out. About one-fourth of those who answered the survey used "non-bank borrowing" in 2015, like a pawn shop, a payday loan, a rent-to-own store, or an auto title loan. Forty percent of respondents say they have too much debt right now.

The good news, I suppose, is that at least we feel good about our financial literacy. For example, 60% of respondents believe they have an "above average" credit score and 41% believe their credit score is "very good." When people were asked to assess their own financial knowledge on a scale from 1 to 7, 67% graded themselves as 5 or higher in 2009, a share that rose to 76% by 2015.

For perspective, here's the Survey of the States 2016 from the Council for Economic Education on the subject of high school teaching of personal finance. Forty-five states include personal finance in their "standards," which sounds pretty good until you look further down the funnel: 37 states actually require the standard to be implemented, 22 states require a personal finance course to be offered, and only 7 states have standardized testing of personal finance concepts.

Here's an earlier post on "Financial Literacy" (March 17, 2014), which uses a three-question version of the above survey and offers some additional thoughts.  

Wednesday, July 13, 2016

What's Driving the Long-Run Deficit Forecasts?

The headline finding from The 2016 Long-Term Budget Outlook just published by the Congressional Budget Office is that the ratio of federal debt/GDP is projected to rise from its current level of 75% in 2016 to 141% in 2046--which would be the highest level ever for the US economy.

As a starting point, the long-run pattern of federal debt-to-GDP looks like this when looking back over US history and then projecting forward 30 years. Previous peaks for federal debt include World War I, World War II, the Civil War, and the Revolutionary War, as well as the rises in debt incurred during the 1930s and the 1980s. But the CBO projections suggest that US borrowing isn't on a sustainable path.

What's driving these estimates? Essentially, the CBO estimate is a status quo projection. It's based on current laws, combined with existing trends for population (like the aging of the population) and a few other estimates (like interest rates and health care costs). Of course, the report also includes how the estimates would be affected by changes in laws and economic parameters. But for the moment, let's just focus on the central estimates for spending and taxes, which look like this:


As an overall statement, the CBO projects a large rise in the debt-to-GDP ratio because under current law government spending is projected to rise over time as a share of GDP, while taxes are not. Among the major categories of federal spending shown in the top panel, the two with the biggest projected rises over the next few decades are major health care programs and net interest payments. The tax projections are, again, a status quo projection of not much change over time, although individual income taxes rise a bit because (under current law) some taxpayers will be bumped into higher tax brackets over time and because, starting in 2020, taxpayers who receive high-cost health insurance from employers are scheduled to start owing income tax on some of the value of that insurance.

Of course, projections like these are mutable. As Ebenezer Scrooge says to the Spirit of Christmas Future, before he looks at his own gravestone: “[A]nswer me one question. Are these the shadows of the things that Will be, or are they shadows of things that May be, only? ... Men’s courses will foreshadow certain ends, to which, if persevered in, they must lead,” said Scrooge. “But if the courses be departed from, the ends will change. Say it is thus with what you show me!”

Net interest payments are essentially determined by two factors: how much the federal government has borrowed, and what interest rate it needs to pay. The CBO estimate is based on the (real, 10-year) interest rate that the federal government needs to pay staying at 2%--more or less its current level. If interest rates keep falling so that the applicable rate is 1% or less, or start rising to 3% or more, the debt forecast moves considerably.
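To see why the interest-rate assumption matters so much, here's a stylized sketch of the debt arithmetic in Python. This is not the CBO's model--the primary-deficit and growth numbers below are invented for illustration--but the mechanics follow the standard debt-dynamics identity: next year's debt ratio equals this year's ratio scaled by (1+r)/(1+g), plus the primary deficit as a share of GDP.

```python
# Stylized debt dynamics, NOT the CBO's model; all parameters illustrative.
# d = debt/GDP, r = real interest rate, g = real GDP growth,
# p = primary deficit (non-interest spending minus revenue) as share of GDP.

def debt_path(d0=0.75, p=0.02, g=0.02, r=0.02, years=30):
    d = d0
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + p
    return d

for r in (0.01, 0.02, 0.03):
    print(f"real rate {r:.0%}: debt/GDP after 30 years = {debt_path(r=r):.0%}")
# Holding everything else fixed, moving the rate from 1% to 3% swings the
# 30-year debt ratio from roughly 108% of GDP to roughly 170%.
```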

The level of health-care spending, on the other hand, is at least to some extent determined by the size of the government subsidies for health care through Medicare, Medicaid, the Children's Health Insurance Program, the "marketplace" health insurance exchanges, and other methods. For example, a previous CBO study found that the federal subsidies to the "marketplace" health insurance exchanges will be about $110 billion this year. The share of Medicare spending which is covered by either payroll taxes of workers or premiums paid by the elderly keeps falling, so an ever-larger share of the cost of Medicare is covered by general funds.

If health care spending weren't projected to keep rising, then federal borrowing wouldn't climb as much, and interest costs wouldn't be as much of a problem, either. In that sense, health care spending is at the heart of the distressing forecasts for where federal borrowing is headed in the long term. It's not novel to say, but still worth pointing out, that higher health care spending is already crowding out other government spending at both the state and federal level. It would be a lot easier to contemplate lasting boosts in spending on education or a cleaner environment or anti-poverty programs if not for the looming specter of rising health care costs.

But in addition, it's useful to think about what the CBO budget forecasts leave out. Past sharp rises in the debt-to-GDP ratio have often been associated with war, or with the aftermath of events like the Great Depression or the Great Recession. History suggests a reasonable chance that the next 30 years will bring one or both of these. The future may also bring other public priorities, like dealing with what is likely to be a very large expansion in the population of elderly needing long-term care, or rebuilding America's 20th-century transportation, energy, and communication infrastructure for the 21st century.

The overhanging shadow of rising health care costs influences other policy choices, too. Given that health care spending is already projected to drive federal borrowing to unprecedented levels, further expansions of government health care spending seem less appropriate. If raising taxes mainly just funnels more money to government health care spending, it will be even less attractive. And with federal borrowing projected to rise so high, state and federal legislators will be especially tempted by regulatory policies that don't impose a direct budgetary cost. The economic and political tradeoffs of high government health care spending are already with us, and are only going to bind more tightly over time.

Tuesday, July 12, 2016

Financial Stability Reform: Lots of Activity, Not Enough Progress

There has been lots of sound and fury about improving financial regulation in the seven years since the Great Recession ended in 2009. But have the necessary changes been made? In "Financial Regulatory Reform After the Crisis: An Assessment," a paper written for the 2016 European Central Bank Forum on Central Banking held at the end of June, Darrell Duffie basically says "not yet."

Duffie argues that there are four core elements of financial-stability regulation: "1. Making financial institutions more resilient. 2. Ending “too-big-to-fail.” 3. Making derivatives markets safer. 4. Transforming shadow banking." He writes: "At this point, only the first of these core elements of the reform, 'making financial institutions more resilient,' can be scored a clear success, although even here much more work remains to be done."

On the first goal of making financial institutions more resilient:
"These resiliency reforms, particularly bank capital regulations, have caused some reduction in secondary market liquidity. While bid-ask spreads and most other standard liquidity metrics suggest that markets are about as liquid for small trades as they have been for a long time,4 liquidity is worse for block-sized trade demands. As a tradeoff for significantly greater financial stability, this is a cost well worth bearing. Meanwhile, markets are continuing to slowly adapt to the reduction of balance-sheet space being made available for market making by bank-affiliated dealers. Even more stringent minimum requirements for capital relative to risk-weighted assets would, in my view, offer additional net social benefits.  I will suggest here, however, that the regulation known as the Leverage Ratio has caused a distortionary reduction in the incentives of banks to intermediate markets for safe assets, especially the government securities repo market, without apparent financial stability benefits."
On the second goal of ending "too-big-to-fail":
"At the threat of failure of a systemically important financial firm, a regulator is supposed to be able to administratively restructure the parent firm’s liabilities so as to allow the key operating subsidiaries to continue providing services to the economy without significant or damaging interruption.  For this to be successful, three key necessary conditions are: (i) the parent firm has enough general unsecured liabilities (not including critical operating liabilities such as deposits) that cancelling these “bail-in” liabilities, or converting them to equity, would leave an adequately capitalized firm, (ii) the failure-resolution process does not trigger the early termination of financial contracts on which the firm and its counterparties rely for stability, and (iii) decisive action by regulators. ... [T]he proposed single-point-of-entry method for the failure resolution of systemic financial firms is not yet ready for safe and successful deployment. A key success here, though, is that creditors of banks do appear to have gotten the message that in the future, their claims are much less likely to be bailed out."
On the issue of making derivatives markets safer:
"Derivatives reforms have forced huge amounts of swaps into central counterparties (CCPs), a major success in terms of collateralization and transparency in the swap market. As a result, however, CCPs are now themselves too big to fail. Effective operating plans and procedures for the failure resolution of CCPs have yet to be proposed. While the failure of a large CCP seems a remote possibility, this remoteness is difficult to verify because there is also no generally accepted regulatory framework for conducting CCP stress tests. This represents an undue lack of transparency. Reform of derivatives markets financial-stability regulation has mostly bypassed the market for foreign-exchange derivatives involving the delivery of one currency for another, a huge and systemically important class. Data repositories for the swaps market have not come close to meeting their intended purposes. Here especially, the opportunities of time afforded by the impetus of a severe crisis have not been used well."
On the issue of transforming shadow banking:
"A financial-stability transformation of shadow banking is hampered by the complexity of non-bank financial intermediation and by the patchwork quilt of prudential regulatory coverage of the non-bank financial sector. ... The Financial Stability Board (2015) sets out five classes of shadow-banking entities: 1. Entities susceptible to runs, such as certain mutual funds, credit hedge funds, and real-estate funds. 2. Non-bank lenders dependent on short-term funding, such as finance companies, leasing companies, factoring companies, and consumer credit companies. 3. Market intermediaries dependent on short-term funding or on the secured funding of client assets, such as broker-dealers. 4. Companies facilitating credit creation, such as credit insurance companies, financial guarantors, and monoline insurers. 5. Securitisation-based intermediaries. ... While progress has been made, the infrastructure of the United States securities financing markets is still not safe and sound. The biggest risk is that of a firesale of securities in the event of the inability of a major broker dealer to roll over its securities financing under repurchase agreements. While the intra-day risk that such a failure poses for the two large tri-partyrepo clearing banks has been dramatically reduced, the U.S. still has no broad repo central counterparty with the liquidity resources necessary to prevent such a firesale. More generally, as emphasized by Baklanova, Copeland, and McCaughrin (2016), there is a need for more comprehensive monitoring of all securities financing transactions, including securities lending agreements."
Finally, I was struck by one of Duffie's comments in passing about the costs of financial regulation:
"The costs of implementing and complying with regulation are among the tradeoffs for achieving greater financial stability. For example, in 2013 (even before the full regime of new regulations was in place) the six largest U.S. banks spent an estimated $70.2 billion on regulatory compliance, doubling the $34.7 billion they spent in 2007. Compliance requirements can accelerate or, potentially, decelerate overdue improvements in practices.  The frictional cost of complying with post-crisis regulations is easily exceeded by the total social benefits, but is nevertheless a factor to be considered when designing specific requirements and supervisory regimes."
Appropriate financial regulation is an admittedly difficult policy problem. Still, it's disconcerting that seven years after the end of the Great Recession, some obvious gaps and concerns remain--and of course, the concerns that we haven't been able to anticipate remain as well.

Monday, July 11, 2016

When Technology Alters Jobs, but Doesn't Replace Them

Sometimes technology does nearly eliminate certain categories of jobs: for example, I was watching the 1958 movie Auntie Mame last week, in which the fabulous Rosalind Russell--portraying a character from the early 1930s--has a short comedic take on being a switchboard operator at a law firm. I had to explain to my teenagers what she was doing, and that such a job used to exist. But it is more common for technology to alter jobs, rather than to eliminate them.

Michael Chui, James Manyika, and Mehdi Miremadi have been exploring which jobs are likely to be altered more or less by technology. They present some results in "Where machines could replace humans—and where they can’t (yet)" in the July 2016 issue of the McKinsey Quarterly. They are working with data from the US Department of Labor, through which they have a list of 800 occupations and 2,000 tasks that are performed in the context of those occupations. By estimating which tasks are most likely to be automated, they can figure out which occupations are most likely to be altered substantially by new technology. I'll start here with a quick overview of their findings, and then offer some more nuanced thoughts.

The columns of this figure show six activities that are (broadly) involved in many jobs. The rows show job categories. The size of the circles shows what share of time on the job is spent in each activity. The color of the circle shows how easy it is, within that job category, to automate that activity. Thus, the first row shows that in food service, a large share of time is spent on "predictable physical tasks" that are fairly easy to automate. Indeed, one minor surprise of these findings is that "accommodations and food service" jobs, rather than manufacturing, have the highest technical potential for automation.
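To make the method concrete, here's a toy version of the calculation--my own sketch with invented numbers, not McKinsey's data: an occupation's overall automation potential is just the time-weighted average of the automatability of its constituent activities.

```python
# Toy McKinsey-style calculation: weight each activity's technical automation
# potential by the share of work time it absorbs. All numbers invented.

activities = {
    # activity: (share of work time, technical automation potential)
    "predictable physical tasks": (0.50, 0.80),
    "collecting data":            (0.15, 0.65),
    "processing data":            (0.10, 0.65),
    "interacting with customers": (0.20, 0.30),
    "managing others":            (0.05, 0.10),
}

potential = sum(share * auto for share, auto in activities.values())
print(f"occupation-level automation potential: {potential:.0%}")  # 63%
```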


Here are a few more detailed insights:

1) Just because part of a job is automated doesn't mean that the number of workers in that job necessarily declines. I posted about a year ago, in "ATMs and a Rising Number of Bank Tellers" (March 3, 2015), about how the dramatic rise in automatic teller machines has been accompanied by a rising number of bank tellers--although the job of "bank teller" has also evolved during this time. The McKinsey researchers offer another example. How would the deployment of bar-code scanners affect the number of cashiers? I would have guessed that their number would fall, and I would have been wrong. The authors write:
"Even when machines do take over some human activities in an occupation, this does not necessarily spell the end of the jobs in that line of work. On the contrary, their number at times increases in occupations that have been partly automated, because overall demand for their remaining activities has continued to grow. For example, the large-scale deployment of bar-code scanners and associated point-of-sale systems in the United States in the 1980s reduced labor costs per store by an estimated 4.5 percent and the cost of the groceries consumers bought by 1.4 percent. It also enabled a number of innovations, including increased promotions. But cashiers were still needed; in fact, their employment grew at an average rate of more than 2 percent between 1980 and 2013."
2) In a number of cases the question isn't about whether a certain task can be automated, but whether the task happens in a repetitive and predictable context, or in a flexible context.  They write: "Within manufacturing, 90 percent of what welders, cutters, solderers, and brazers do, for example, has the technical potential for automation, but for customer-service representatives that feasibility is below 30 percent."

3) Automation isn't just about physical jobs that can be automated by robots. Many tasks performed by well-paid white-collar workers that involve collecting and processing data are vulnerable, too.

"Across all occupations in the US economy, one-third of the time spent in the workplace involves collecting and processing data. Both activities have a technical potential for automation exceeding 60 percent. Long ago, many companies automated activities such as administering procurement, processing payrolls, calculating material-resource needs, generating invoices, and using bar codes to track flows of materials. But as technology progresses, computers are helping to increase the scale and quality of these activities. For example, a number of companies now offer solutions that automate entering paper and PDF invoices into computer systems or even processing loan applications. And it’s not just entry-level workers or low-wage clerks who collect and process data; people whose annual incomes exceed $200,000 spend some 31 percent of their time doing those things, as well."

4) Just because it's technically feasible for certain tasks to be automated doesn't mean they necessarily will be automated.

"Technical feasibility is a necessary precondition for automation, but not a complete predictor that an activity will be automated. A second factor to consider is the cost of developing and deploying both the hardware and the software for automation. The cost of labor and related supply-and-demand dynamics represent a third factor: if workers are in abundant supply and significantly less expensive than automation, this could be a decisive argument against it. A fourth factor to consider is the benefits beyond labor substitution, including higher levels of output, better quality, and fewer errors. These are often larger than those of reducing labor costs. Regulatory and social-acceptance issues, such as the degree to which machines are acceptable in any particular setting, must also be weighed. A robot may, in theory, be able to replace some of the functions of a nurse, for example. But for now, the prospect that this might actually happen in a highly visible way could prove unpalatable for many patients, who expect human contact. The potential for automation to take hold in a sector or occupation reflects a subtle interplay between these factors and the trade-offs among them."
My own job as Managing Editor of the Journal of Economic Perspectives has been dramatically affected by technology over the years. When the journal first started in 1986, we had what was then a very innovative idea: authors would mail us floppy disks with the text of their papers. I would edit the actual paper, and mail it back to the authors to edit further. We would then mail the paper to the typesetter on the floppy disk. At the time, this was red-hot newfangled stuff! The task of hands-on editing remains much as it was 30 years ago, but a lot else has changed dramatically. The ways in which we communicate with authors have been fundamentally changed by email, attachments, shared mailboxes on the cloud, and easy conference calls. The tasks of looking up past articles and checking references used to require trips to the library, and are now done casually without leaving my desk. The distribution of the journal was once all-paper; then the journal became available online by subscription; then individual articles became freely available online; and now entire issues can be freely downloaded and read on a tablet or a smartphone.

Most jobs will be altered by technology. And most of us find that even as technology replaces certain tasks, it creates the possibilities for new tasks that could not previously be done--or at least couldn't be done very cheaply or easily. This continual updating of jobs is one of the prices we pay for prosperity.


Friday, July 8, 2016

Carbon Capture and Storage: No Stone Unturned

Technologies for carbon capture and storage often don't garner much political support. Those who think rising levels of carbon in the atmosphere aren't much of a problem see little purpose for investments in technology to capture that carbon. Many of those who do think rising carbon emissions are a problem are emotionally wedded to a particular solution--reducing the use of fossil fuels through the growth of solar and wind power, combined with better batteries--and they sometimes view carbon capture and storage as an excuse to continue the use of fossil fuels. My own belief is that the risks of climate change (and other environmental costs of fossil fuel use) aren't likely to have one silver-bullet answer, and that all options are worth research and exploration, including not just non-carbon and low-carbon energy sources, but also energy conservation efforts and geoengineering, along with carbon capture and storage.

Back in 2005, the Intergovernmental Panel on Climate Change published one of its doorstop tomes called Carbon Dioxide Capture and Storage, summarizing what was known at the time. Here's a sense of the tone of the report, emphasizing both the potential of carbon capture and storage (CCS) and the uncertainties about realizing that potential (footnotes deleted for readability):

In most scenarios for stabilization of atmospheric greenhouse gas concentrations between 450 and 750 ppmv CO2 and in a least-cost portfolio of mitigation options, the economic potential of CCS would amount to 220–2,200 GtCO2 (60–600 GtC) cumulatively, which would mean that CCS contributes 15–55% to the cumulative mitigation effort worldwide until 2100, averaged over a range of baseline scenarios. It is likely that the technical potential for geological storage is sufficient to cover the high end of the economic potential range, but for specific regions, this may not be true. Uncertainties in these economic potential estimates are significant. For CCS to achieve such an economic potential, several hundreds to thousands of CO2 capture systems would need to be installed over the coming century, each capturing some 1–5 MtCO2 per year. The actual implementation of CCS, as for other mitigation options, is likely to be lower than the economic potential due to factors such as environmental impacts, risks of leakage and the lack of a clear legal framework or public acceptance ...
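It's worth pausing on the scale those numbers imply. A rough consistency check--my arithmetic, not the IPCC's--shows how the cumulative storage range maps into that "several hundreds to thousands" of capture systems:

```python
# Rough consistency check on the IPCC range; the assumptions (90-year
# horizon, steady deployment) are mine, not the report's.

years = 90                               # roughly 2010 to 2100
per_system_low, per_system_high = 1, 5   # MtCO2 captured per system per year

for cumulative_gt in (220, 2200):        # GtCO2 stored cumulatively by 2100
    annual_mt = cumulative_gt * 1000 / years   # 1 Gt = 1,000 Mt
    n_min = annual_mt / per_system_high
    n_max = annual_mt / per_system_low
    print(f"{cumulative_gt} GtCO2 -> roughly {n_min:,.0f} to {n_max:,.0f} systems")
```

At the low end of the range, that works out to several hundred to a few thousand capture systems; at the high end, to many thousands.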
How has CCS evolved since then? The underlying idea here is to consider installing carbon capture technology at industrial or other facilities which use a lot of fossil fuels and where carbon emissions are especially high. The technology doesn't eliminate such industrial emissions, but holds some promise for reducing them substantially. The more recent estimates for the potential of CCS seem to be at the very low end of what the IPCC discussed back in 2005. For example, the International Energy Agency produced a 2015 report called Carbon Capture and Storage: The solution for deep emissions reductions. When the title refers to "the solution," it dramatically oversells the actual content of the report. The conclusions are much more measured, focusing on CCS as a contributor to reducing carbon emissions in specific industrial settings that lack cost-effective alternatives to fossil fuels:
According to International Energy Agency (IEA) modelling, CCS could deliver 13% of the cumulative emissions reductions needed by 2050 to limit the global increase in temperature to 2°C (IEA 2DS). This represents the capture and storage of around 6 billion tonnes (Bt) of CO2 emissions per year in 2050, nearly triple India’s energy sector emissions today. Half of this captured CO2 in the 2DS would come from industrial sectors, where there are currently limited or no alternatives for achieving deep emission reductions. While there are alternatives to CCS in power generation, delaying or abandoning CCS in the sector would increase the investment required by 40% or more in the 2DS, and may place untenable and unrealistic demands on other low emission technology options.
The Global CCS Institute keeps track of the projects that are actually underway and offers a summary in its report The Global Status of CCS 2015. The report counts seven large-scale CCS projects (that is, not counting pilot or research-level projects) operating globally in 2010, 15 operating in 2016, and 22 expected to be operating by 2020. For example, one of the large-scale projects in 2015 is the Quest project being operated by Shell Oil in Canada. As the report notes: "Launched in Alberta, Canada in November 2015, the Quest project is capable of capturing approximately 1 Mtpa of CO2 from the manufacture of hydrogen for upgrading bitumen into synthetic crude oil. Quest is the first large-scale CCS project in North America to store CO2 exclusively in a deep saline formation, and the first to do so globally since the Snøhvit CO2 Storage Project became operational in Norway in 2008. A case study prepared by Shell documenting key learnings from the development of Quest is available here." Saudi Arabia also started operating a large-scale CCS project, the first one in the Middle East region, in mid-2015.

Although the biggest effect of CCS technology in the near term is likely to be focused on these kinds of industrial applications, there's also an intriguing possibility that it can do more through what has become known as BECCS--that is, Bio-Energy with Carbon Capture and Storage. Imagine an energy-generating facility with CCS technology that burns biomass--that is, fuel developed from waste materials produced by forestry, agriculture, and perhaps other sources. Biomass is a renewable resource: in effect, it captures carbon from the atmosphere. If that carbon is captured and stored, and then more biomass is created, and the carbon from that biomass is captured and stored in turn, and so on--the result is a source of energy with negative overall carbon emissions. For discussion, here's a boosterish 2012 report called Biomass with CO2 Capture and Storage (Bio-CCS): The Way Forward for Europe, produced on behalf of the European Biofuels Technology Platform and the Zero Emissions Platform. The IPCC has viewed this possibility as worth mentioning, too: for example, its 5th Assessment report in 2014 has comments like: "Many models could not limit likely warming to below 2°C if bioenergy, CCS and their combination (BECCS) are limited."
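The negative-emissions claim behind BECCS is just carbon accounting. Here's a minimal sketch, with every parameter hypothetical, of why the arithmetic can come out below zero:

```python
# BECCS carbon balance sketch; ALL parameters are hypothetical placeholders.
# Growing biomass pulls CO2 from the air; burning it releases that CO2;
# CCS re-captures most of the release, so the net can be negative.

absorbed = 1.00        # tCO2 drawn from atmosphere per tonne of biomass grown
released = 1.00        # tCO2 released when that biomass is burned
capture_rate = 0.90    # assumed share of flue-gas CO2 captured and stored
supply_chain = 0.15    # assumed tCO2 from harvesting/transporting the biomass

net = released * (1 - capture_rate) + supply_chain - absorbed
print(f"net emissions: {net:+.2f} tCO2 per tonne of biomass")  # -0.75 = net removal
```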

The large-scale CCS projects now underway will tell us a lot about the costs and effectiveness of the technology in reducing carbon emissions in the next few years. If the feedback seems favorable, then bio-energy with CCS is a likely next step.

Thursday, July 7, 2016

Chronic Student Absenteeism

The US Department of Education is starting this summer to release detailed results from a survey effort called Civil Rights Data Collection, which collected an array of data from almost every public school in the country for the 2013-14 school year. One result is a short e-report on "Chronic Absenteeism in the Nation's Schools." The report defines "chronic absenteeism" as missing at least 15 school days in a given year. Nationally, about one in eight students are chronically absent. But for non-white, non-Asian high school students, the average rates of chronic absenteeism are above 20 percent. Here's a figure showing rates of chronic absenteeism by race/ethnicity and by level of school.


Chronic absenteeism is, not surprisingly, associated with lower school performance, both for the individual student and for schools where these rates are especially high. Of course, this correlation doesn't mean that being absent, by itself, is the main causal factor. One suspects that students and schools with high rates of chronic absenteeism face a lot of other issues, and that absenteeism is a symptom of those broader issues. In that sense, chronic student absenteeism is a marker for a set of problems that K-12 schools face, where the school itself can't directly do much about the underlying causes of many of those problems.