The Tax Policy Center’s latest research report went viral last week, drawing attention in the presidential campaign and sparking a constructive discussion of the practical challenges of tax reform. Unfortunately, the response has also included some unwarranted inferences from one side and unwarranted vitriol from the other, distracting from the fundamental message of the study: tax reform is hard.
The paper, authored by Sam Brown, Bill Gale, and Adam Looney, examines the challenges policymakers face in designing a revenue-neutral income tax reform. The paper illustrates the importance of the tradeoffs among revenue, tax rates, and progressivity for the tax policies put forward by presidential candidate Mitt Romney. It found, subject to certain assumptions I discuss below, that any revenue-neutral plan along the lines Governor Romney has outlined would reduce taxes for high-income households, requiring higher taxes on middle- or low-income households. I doubt that’s his intent, but it is an implication of what we can tell about his plan so far. (We look forward to updating our analysis, of course, if and when Governor Romney provides more details.)
The paper is the latest in a series of TPC studies that have documented both the promise and the difficulty of base-broadening, rate-lowering tax reform. Last month, for example, Hang Nguyen, Jim Nunns, Eric Toder, and Roberton Williams documented just how hard it can be to cut tax preferences to pay for lower tax rates. An earlier paper by Dan Baneman and Eric Toder documented the distributional impacts of individual income tax preferences.
The new study applies those insights to Governor Romney’s tax proposal. To do so, the authors had to confront a fundamental challenge: Governor Romney has not offered a fully specified plan. He has been explicit about the tax cuts he has in mind, including a one-fifth reduction in marginal tax rates from today’s levels, which would drop the top rate from 35 percent to 28 percent, and a cut in capital gains and dividend taxes for families with incomes below $200,000. He and his team have also said that reform should be revenue-neutral and should not increase taxes on capital gains and dividends. But they have not provided any detail about which tax preferences they would cut to make up the lost revenue.
As a political matter, such reticence is understandable. To sell yourself and your policy, it’s natural to emphasize the things that people like, such as tax cuts, while downplaying the specifics of who will bear the accompanying costs. Last February, President Obama did the same thing when he rolled out his business tax proposal. The president was very clear about lowering the corporate rate from 35 percent to 28 percent, but he provided few examples of the tax breaks he would cut to pay for it. Such is politics.
For those of us in the business of policy analysis, however, this poses a challenge. TPC’s goal is to inform the tax policy debate as best we can. While we strongly prefer to analyze complete plans, that sometimes isn’t possible. So we provide what information we can with the resources available. Earlier this year, for example, we analyzed the specified parts of Governor Romney’s proposal and documented how much revenue he would have to make up by unspecified base broadening (or, possibly, macroeconomic growth) and how the rate cuts would affect households at different income levels.
The latest study asked a different question: Could Romney’s plan maintain current progressivity given revenue neutrality and reasonable assumptions about what types of base broadening he’d propose? There are roughly $1.3 trillion in tax expenditures out there, but not all will be on Governor Romney’s list. He has said, for example, that raising capital gains and dividend taxes isn’t an option and has generally spoken about lowering taxes on saving and investment. Based on those statements, the authors considered what would happen if Romney kept all the tax breaks associated with saving and investment, including not only the lower rates on capital gains and dividends, but also the special treatment for municipal bonds, IRAs and 401(k)s, and certain life-insurance plans, as well as the ability to avoid capital gains taxes at death (known as step-up in basis). The authors also recognized that touching some tax breaks is beyond the realm of political possibility, such as taxing the implicit rent people get from owning their own homes.
Given those factors, the study then examined the most progressive way of reducing the other tax breaks that remain on the table—i.e. it rolls them back first for high-income people. But there aren’t enough of those preferences to offset the benefits that high-income households get from the rate reductions. As a result, a revenue-neutral reform within these constraints would cut taxes at the high-end while raising them in the middle and perhaps bottom.
What should we infer from this result? Like Howard Gleckman, I don’t interpret this as evidence that Governor Romney wants to increase taxes on the middle class in order to cut taxes for the rich, as an Obama campaign ad claimed. Instead, I view it as showing that his plan can’t accomplish all his stated objectives. One can charitably view his plan as a combination of political signaling and the opening offer in what would, if he gets elected, become a negotiation.
To get a sense of where such negotiation might lead, keep in mind that Romney’s plan is not the first to propose a 28 percent top rate. The Tax Reform Act of 1986 did, as did the Bowles-Simpson proposal and the similar Domenici-Rivlin effort (on which I served). Unlike Governor Romney’s proposal, all three of those tax reforms reflect political compromise. And in all three cases, part of that compromise was eliminating some tax preferences for saving and investment, which tend to be especially important for high-income taxpayers. In particular, all three reforms resulted in capital gains and dividends being taxed at ordinary income tax rates.
TPC’s latest study highlights the realities that lead to such compromises.
My recent post on government size prompted several readers to ask a natural follow-up question: how has the government’s role as employer changed over time?
To answer, the following chart shows federal, state, and local employment as a share of overall U.S. payrolls:
In July, governments accounted for 16.5 percent of U.S. employment. That’s down from the 17.7 percent peak in early 2010, when the weak economy, stimulus efforts, and the decennial census all boosted government’s share of employment. And it’s down from the levels of much of the past forty years.
On the other hand, it’s also up from the sub-16 percent level reached back in the go-go days of the late 1990s and early 2000s.
Employment thus tells a similar story to government spending on goods and services: if we set the late 1990s to one side, federal, state, and local governments aren’t large by historical standards; indeed, they are somewhat smaller than over most of the past few decades. And they’ve clearly shrunk, in relative terms, over the past couple of years. (But, as noted in my earlier post, overall government spending has grown because of the increase in transfer programs.)
P.S. Like my previous chart on government spending, this one focuses on the size of government relative to the rest of the economy (here measured by nonfarm payroll employment). Over at the Brookings Institution’s Hamilton Project, Michael Greenstone and Adam Looney find a more severe drop in government employment than does my chart. The reason is that they focus on government employment as a share of the population, while my chart compares it to overall employment. That’s an important distinction given the dramatic decline in employment, relative to the population, in recent years.
P.P.S. As Ernie Tedeschi notes, this measure doesn’t capture government contractors. So any change in the mix of private contractors vs. direct employees will affect the ratio. This is another reason why spending measures may gauge government size better than employment figures do.
Politicians and pundits constantly debate the size of government. Is it big or small? Growing or shrinking?
You might hope these simple questions have simple answers. But they don’t. Measuring government size is not as easy as it sounds. For example, official statistics track two different measures of government spending. And those measures tell different stories:
The blue line in the chart above shows how much federal, state, and local governments directly contribute to economic activity, measured as a share of overall gross domestic product (GDP). If you’ve ever taken an intro economics class, you’ll recognize that contribution as G, shorthand for government spending. G represents all the goods and services that governments provide, valued at the cost of producing them. G thus includes everything from buying aircraft carriers to paying teachers to housing our ambassador in Zambia.
At 19.5 percent of GDP, G is down from the 21.5 percent it hit in the worst days of the Great Recession. As Catherine Rampell of the New York Times pointed out last week, it’s also below the 20.3 percent average of the available data back to 1947. For most of the past 65 years, federal, state, and local governments had a larger direct economic role producing goods and services than they do today.
There’s one notable exception: today’s government consumption and investment spending is notably larger than it was during the economic boom and fiscal restraint of the late 1990s and early 2000s. From mid-1996 to mid-2001, government accounted for less than 18 percent of GDP. Relative to that benchmark, government is now noticeably larger.
The orange line shows a broader measure that captures all the spending in government budgets—all of G plus much more. Governments pay interest on their debts. More important, they make transfer payments through programs like Social Security, Medicare, Medicaid, food stamps, unemployment insurance, and housing vouchers. Transfer spending does not directly contribute to GDP and thus is not part of G. Instead, it provides economic resources to people (and some businesses) that then show up in other GDP components such as consumer spending and private investment.
This broader measure of government spending is much larger than G alone. In 2011, for example, government spending totaled $5.6 trillion, about 37 percent of GDP. But only $3.1 trillion (20 percent of GDP) went for goods and services. The other $2.5 trillion (17 percent) covered transfers and interest.
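The arithmetic behind those shares is easy to verify. A quick back-of-the-envelope check, using the rounded dollar figures from the text (not exact BEA values):

```python
# Back-of-the-envelope check of the 2011 figures cited above.
# All dollar amounts are the rounded values from the text, not exact BEA data.
total_spending = 5.6e12      # total government spending, all levels
goods_services = 3.1e12      # G: direct purchases of goods and services
transfers_interest = total_spending - goods_services  # ~$2.5 trillion

# GDP implied by "total spending was about 37 percent of GDP"
gdp_implied = total_spending / 0.37  # ~$15 trillion, consistent with 2011

print(f"G share of GDP: {goods_services / gdp_implied:.0%}")
print(f"Transfers + interest share: {transfers_interest / gdp_implied:.0%}")
```

Running this recovers the roughly 20 percent and 17 percent shares quoted above, confirming the figures are internally consistent.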
Like G, this broader measure of government has declined since the (official) end of the Great Recession. Since peaking at 39 percent in the second quarter of 2009, it has fallen to 36 percent in the second quarter of 2012.
Also like G, this measure has grown since the boom of the late 1990s and early 2000s. In the middle of 2000, government spending totaled just 30 percent of GDP, a full 6 percentage points less than today.
The two measures thus agree on recent history: government has shrunk over the past three years as the economy has slowly recovered from the Great Recession and government policy responses have faded. But government spending is still notably larger than at the turn of the century.
The story changes, however, if we look further back in time. Although governments spent more on goods and services in the past, total spending was almost always lower. Since 1960, when data on the broader measure begin, total government spending has averaged about 32 percent. It never reached today’s 36 percent until 2008, when the financial crisis began in earnest.
Much of the recent increase in overall spending is due to the severity of the downturn. But that’s not the only factor. Government’s economic role has changed. As recently as the early 1960s, federal, state, and local governments devoted most of their efforts to providing public goods and services. Now they devote large portions of their budgets to helping people through cash and in-kind transfers—programs like Medicare and Medicaid that were created in 1965 and account for much of the growth in the gap between the orange and blue lines.
Government thus has gotten bigger. But it’s also gotten smaller. It all depends on the time period you consider and the measure you use.
P.S. Keep in mind that this discussion focuses on a relative measure of government size—the ratio of government spending to the overall economy—not an absolute one. Government thus expands if government spending grows faster than the economy and contracts if the reverse is true.
P.P.S. Measuring government size poses other challenges. Eric Toder and I discuss several in our paper “How Big is the Federal Government?” Perhaps most important is that governments now do a great deal of spending through the tax code. Traditional spending numbers thus don’t fully reflect the size or trend in government spending. For more, see this earlier post.
The economy grew at a tepid 1.5% annual rate in the second quarter, according to the latest BEA estimates. That’s far below the pace we need to reduce unemployment.
Weak growth was driven by a slowdown in consumer spending and continued cuts in government spending (mostly at the state and local level), which overshadowed rapid growth in investment spending on housing–yes, housing–and equipment and software:
Housing investment expanded at almost a 10% rate in the second quarter, its fifth straight quarter of growth. Government spending declined at a 1.4% rate, its eighth straight quarter of decline.
Does your brain freeze when offered too many options? Do you put off repainting your bathroom because you can’t bear to select among fifty shades of white (or, for the more adventurous, grey)?
If so, take heart. A famous experiment by psychologists Mark Lepper and Sheena Iyengar, published in 2000, suggests that you are not alone. In supermarket tests, they documented what’s known as the Paradox of Choice. Customers offered an array of six new jam varieties were much more likely to buy one than those offered a choice of 24.
That result defies the narrow rationality often assumed in simple economic models. More choice should always lead to more sales, since the odds are greater that a shopper will find something they want. But it didn’t. On those days, in those supermarkets, with those jams, more choice meant less buying.
This result resonates with many people. I certainly behave that way occasionally. With limited time and cognitive energy, I sometimes avoid or defer choices that I don’t absolutely need to make … like buying a new jam. Making decisions is hard. Just as consumers have financial budget constraints, so too do we have decision-making budget constraints.
Today’s TED Blog provides links and, naturally, videos for a series of studies documenting similar challenges of choice, from retirement planning to health care to spaghetti sauce. All well worth a view.
But how general are these results? Perhaps not as much as we’d think from the TED talks. A few years ago, Tim Harford, the Financial Times’ Undercover Economist, noted that some subsequent studies in the jam tradition failed to find this effect:
It is hard to find much evidence that retailers are ferociously simplifying their offerings in an effort to boost sales. Starbucks boasts about its “87,000 drink combinations”; supermarkets are packed with options. This suggests that “choice demotivates” is not a universal human truth, but an effect that emerges under special circumstances.
Benjamin Scheibehenne, a psychologist at the University of Basel, was thinking along these lines when he decided (with Peter Todd and, later, Rainer Greifeneder) to design a range of experiments to figure out when choice demotivates, and when it does not.
But a curious thing happened almost immediately. They began by trying to replicate some classic experiments – such as the jam study, and a similar one with luxury chocolates. They couldn’t find any sign of the “choice is bad” effect. Neither the original Lepper-Iyengar experiments nor the new study appears to be at fault: the results are just different and we don’t know why.
After designing 10 different experiments in which participants were asked to make a choice, and finding very little evidence that variety caused any problems, Scheibehenne and his colleagues tried to assemble all the studies, published and unpublished, of the effect.
The average of all these studies suggests that offering lots of extra choices seems to make no important difference either way. There seem to be circumstances where choice is counterproductive but, despite looking hard for them, we don’t yet know much about what they are. Overall, says Scheibehenne: “If you did one of these studies tomorrow, the most probable result would be no effect.”
In short, the Paradox of Choice is experiencing the infamous Decline Effect. As Jonah Lehrer noted in the New Yorker in late 2010, sometimes what seems to be scientific truth “wears off” over time. And not just in “soft” sciences like the intersection of psychology and economics, but in biology and medicine as well.
Some of that decline reflects selection pressures in research and publishing … and invitations to give TED talks. It’s easy to get a paper published if it documents a new paradox or anomaly. Only after that claim has gained some mindshare does the marketplace open to research reporting null results.
That makes for better policy than a tyranny of lawyers alone. But it certainly isn’t enough. Policy is ultimately about changing the way people behave. And to do that, you need to understand more than just economics (as an increasing number of economists, Thaler foremost among them, already recognize).
Thaler thus makes two important suggestions: First, he argues that behavioral scientists deserve a greater formal role in the policy process, perhaps even a Council of Behavioral Science Advisers that would advise the White House in parallel with the Council of Economic Advisers. Second, he urges government to engage in more experimentation so it can learn just what policy choices best drive behavior, and how.
As its name implies, the team (which he advises) works with government agencies to explore how behavioral insights can make policy more effective. Tax compliance is one example.
Each year, Britain sends letters to certain taxpayers—primarily small businesses and individuals with non-wage income—directing them to make appropriate tax payments within six weeks. If they fail to do so, the government follows up with more costly measures. Enter the Behavioral Insights Team:
The tax collection authority wondered whether this letter might be improved. Indeed, it could.
People are more likely to comply with a social norm if they know that most other people comply, Mr. Cialdini has found. (Seeing other dog owners carrying plastic bags encourages others to do so as well.) This insight suggests that adding a statement to the letter that a vast majority of taxpayers pay their taxes on time could encourage others to comply. Studies showed that it would be even better to cite local data, too.
Letters using various messages were sent to 140,000 taxpayers in a randomized trial. As the theory predicted, referring to the social norm of a particular area (perhaps, “9 out of 10 people in Exeter pay their taxes on time”) gave the best results: a 15-percentage-point increase in the number of people who paid before the six-week deadline, compared with results from the old-style letter, which was used as a control condition.
Rewriting the letter thus materially improved tax compliance. That’s an important insight, and I hope it scales if and when Britain’s tax authority applies it more broadly.
But there’s a second lesson as well: the benefit of running policy experiments. Policymakers have no lack of theories about how people will respond to various policy changes. What they often do lack, however, is evidence about which theory is correct or how big the potential effects are. Governments on both sides of the Atlantic should look for opportunities to run such controlled experiments so that, to paraphrase Thaler, evidence-based policies can be based on actual evidence.
Another weak jobs report with payrolls up only 80,000, unemployment stuck at 8.2 percent, and underemployment ticking up to 14.9 percent.
But the real news continues to be how far employment has fallen. As recently as 2006, more than 63 percent of adults had a job. Today, that figure is less than 59 percent.
With the exception of the past several years, you’ve got to go back almost three decades to find the last time that so few Americans were employed (as a share of the adult population).
The stunning decline in the employment-to-population ratio (epop to its friends) reflects two related factors. First, the unemployment rate has increased from less than 5 percent to more than 8 percent. That accounts for roughly half the fall in epop. The other half reflects lower labor force participation. Slightly more than 66 percent of adults were in the labor force back then, but now it’s less than 64 percent.
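The decomposition above follows from a simple identity: epop equals the labor force participation rate times one minus the unemployment rate. A minimal sketch of the split, using figures rounded from the text (not exact BLS values):

```python
# epop identity: employment/population = participation * (1 - unemployment)
# The rates below are rounded from the post, not exact BLS figures.

def epop(participation, unemployment):
    """Share of the adult population that is employed."""
    return participation * (1 - unemployment)

then_epop = epop(0.663, 0.046)  # 2006: ~66.3% participation, ~4.6% unemployment
now_epop = epop(0.637, 0.082)   # 2012: ~63.7% participation, ~8.2% unemployment

total_drop = then_epop - now_epop

# Counterfactual: unemployment rises to 8.2% but participation stays at 2006 level
from_unemployment = then_epop - epop(0.663, 0.082)

print(f"epop then: {then_epop:.1%}, now: {now_epop:.1%}")
print(f"total drop: {total_drop:.1%}, from unemployment alone: {from_unemployment:.1%}")
```

With these inputs, epop falls from about 63 percent to about 58.5 percent, and holding participation fixed accounts for roughly half the drop, matching the split described above.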
The fun economics story of the day is that Orbitz sometimes looks at your computer’s operating system to decide what hotel options to show you. Dana Mattioli breaks the story over at the Wall Street Journal:
Orbitz Worldwide Inc. has found that people who use Apple Inc.’s Mac computers spend as much as 30% more a night on hotels, so the online travel agency is starting to show them different, and sometimes costlier, travel options than Windows visitors see.
The Orbitz effort, which is in its early stages, demonstrates how tracking people’s online activities can use even seemingly innocuous information—in this case, the fact that customers are visiting Orbitz.com from a Mac—to start predicting their tastes and spending habits.
Orbitz executives confirmed that the company is experimenting with showing different hotel offers to Mac and PC visitors, but said the company isn’t showing the same room to different users at different prices. They also pointed out that users can opt to rank results by price.
The WSJ emphasizes that Mac users see higher-priced hotels. For example, Mattioli’s article is headlined: “On Orbitz, Mac Users Steered to Pricier Hotels.”
My question: Would you feel any different if, instead, the WSJ emphasized that Windows users are directed to lower-priced hotels? For example, Windows users are prompted about the affordable lodgings at the Travelodge in El Paso, Texas. (Full disclosure: I think I once stayed there.)
As Mattioli notes, it’s important to keep in mind that Orbitz isn’t offering different prices, it’s just deciding which hotels to list prominently. And your operating system is just one of many factors that go into this calculation. Others include deals (hotels offering deals move up the rankings), referring site (which can reveal a lot about your preferences), return visits (Orbitz learns your tastes), and location (folks from Greenwich, CT probably see more expensive hotels than those from El Paso).
Financial repression and extractive institutions are two of the big memes in international economics today.
Financial repression occurs when governments intervene in financial markets to channel cheap funds to themselves. With sovereign debts skyrocketing, for example, governments may try to force their citizens, banks, and others to finance those debts at artificially low interest rates.
Extractive institutions are policies that attempt to redirect resources to politically-favored elites. Classic examples are the artificial monopolies often granted by governments in what would otherwise be structurally competitive markets. Daron Acemoglu and James Robinson have recently argued that such institutions are a key reason Why Nations Fail. Inclusive institutions, in contrast, promote widely-shared prosperity.
Over at Bronte Capital, John Hempton brings these two ideas together in an argument that Chinese elites are using financial repression to extract wealth from state-owned enterprises. In a nutshell, he believes Chinese authorities have artificially lowered the interest rates that regular Chinese citizens earn on their savings (that’s the repression), and have directed these cheap funds to finance “staggeringly unprofitable” state enterprises that nonetheless manage to spin out vast wealth for connected elites and their families.
Twenty years ago, world leaders gathered in Rio de Janeiro to grapple with climate change, biological diversity, and other environmental challenges. Today they are back again, but with much less fanfare. If my Twitter feed is any indication, Rio+20 is getting much less attention than the original Earth Summit.
One item that deserves attention is greater emphasis on getting business involved in protecting the environment. For example, two dozen leading businesses–from Alcoa to Xerox–teamed up with The Nature Conservancy on a vision for The New Business Imperative: Valuing Natural Capital (interactive, pdf).
The report lays out the business case that natural resources have real economic value, even if they aren’t traded in markets, and that protecting them can sometimes reduce costs, maintain supplies, soften the blow of future regulation, and build goodwill with customers, communities, and workers. All kinds of obvious, at one level, but nonetheless useful to see in print with examples and commitments.
One item that caught my eye is the potential for “green” infrastructure to replace “gray”:
Strong, reliable manmade (“gray”) infrastructure undergirds a healthy marketplace, and most companies depend heavily on it to operate effectively and efficiently. Yet increasingly, companies are seeing the enormous potential for “natural infrastructure” in the form of wetlands and forests, watersheds and coastal habitats to perform many of the same tasks as gray infrastructure — sometimes better and more cheaply.
For instance, investing in protection of coral reefs and mangroves can provide a stronger barrier to protect coastal operations against flooding and storm surge during extreme weather, while inland flooding can be reduced by strategic investments in catchment forests, vegetation and marshes. Forests are also crucial for maintaining usable freshwater sources, as well as for naturally regulating water flow.
Putting funds into maintaining a wetland near a processing or manufacturing plant can be a more cost-effective way of meeting regulatory requirements than building a wastewater treatment facility, as evidenced by the Dow Chemical Seadrift, Texas facility, where a 110-acre constructed wetland provides tertiary wastewater treatment of five million gallons a day. While the cost of a traditional “gray” treatment installation averages >$40 million, Dow’s up-front costs were just $1.4 million.
For companies reliant on agricultural systems, improved land management of forests and ecosystems along field edges and streams, along with the introduction of more diversified and resilient sustainable agriculture systems, can minimize dependency on external inputs like artificial fertilizers, pesticides and blue irrigation water.
To encourage such investments, where they make sense, lawmakers and regulators need to focus on performance–is the wastewater getting clean?–rather than the use of specific technologies or construction.