Donald Marron
Carmine Sodora (L) of Tavern Tax prepares Peter Marafioti's income tax documents at Duffy's tavern in Hoboken, New Jersey in this April 2008 file photo. Workers bear an estimated 20 percent of the corporate income tax, Marron writes. (Joshua Lott/Reuters/File)
Workers bear the corporate income tax burden
Corporations pay income taxes in an administrative sense: they write checks (or send electrons) to the IRS. But corporations can’t actually bear the burden – they are just legal entities, not living and breathing human beings.
So who ultimately bears the burden of corporate income taxes? Shareholders? Employees? Customers?
Economists have struggled with this question for decades. When Mick Jagger dropped out of the London School of Economics in the 1960s, for example, he allegedly complained that “economists can’t even tell if corporations pay taxes or pass them on.”
We’ve made some progress since then. Over at the Tax Policy Center, my colleague Jim Nunns summarizes what economists have learned over the past five decades and describes TPC’s new approach to distributing the corporate income tax.
As Jim reports, our best estimate is that workers bear 20 percent of the corporate income tax, shareholders bear 60 percent, and investors as a whole bear 20 percent. [Editor's note: This paragraph and a sentence two paragraphs down have been corrected. In the original, two figures were transposed.]
Workers bear some of the corporate income tax because capital can move around the world. All else equal, the corporate income tax encourages some capital to locate abroad rather than in the United States. That reduces worker productivity (since they have less capital with which to work) and thus reduces worker wages and benefits. As a result, some of the corporate tax burden falls on workers.
Investors in general bear a portion of the corporate income tax for a similar reason. When you tax corporations, you encourage capital to flow out of corporate equities and into other investments, including corporate debt and non-corporate businesses. That flow reduces the rates of return that investors earn in those other asset classes as well. Much of the corporate income tax thus gets passed on to investors in general, not just corporate shareholders.
Shareholders alone, finally, bear the portion of the corporate income tax that falls on “super-normal returns” — i.e., the returns they get in excess of a normal rate of return.
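To make those shares concrete, here is a minimal sketch (in Python) that allocates a hypothetical $100 of corporate income tax according to TPC's estimates; the dollar amount is purely illustrative and not from the study.

```python
# Illustrative only: split a hypothetical $100 of corporate income tax
# using TPC's estimated incidence shares from the post
# (workers 20%, shareholders 60%, investors in general 20%).
TAX_PAID = 100.0  # hypothetical corporate tax bill, in dollars

incidence_shares = {
    "workers": 0.20,              # lower wages as capital locates abroad
    "shareholders": 0.60,         # tax falling on super-normal returns
    "investors in general": 0.20, # lower returns across asset classes
}

for group, share in incidence_shares.items():
    print(f"{group}: ${TAX_PAID * share:.2f}")
```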
If any readers know Mick Jagger, please send him a link to the study. Maybe it will finally give him some satisfaction.
P.S. For another overview, see this TaxVox post by Howard Gleckman.
A woman counts her US dollar bills at a money changer in Jakarta in this June 2012 file photo. According to a recent report, most US investors lack basic financial literacy, including an understanding of compound interest, basic risks, and fees. (Beawiharta/Reuters/File)
Investors lack basic financial literacy, study finds
“American investors lack basic financial literacy,” according to a new report from the Securities and Exchange Commission (much of which is based on an earlier report by the Congressional Research Service at the Library of Congress). Many fail to grasp compound interest, don’t understand fees and other investment costs, and aren’t aware of the risks of investment fraud.
From the report summary:
According to the Library of Congress Report, studies show consistently that American investors lack basic financial literacy. For example, studies have found that investors do not understand the most elementary financial concepts, such as compound interest and inflation. Studies have also found that many investors do not understand other key financial concepts, such as diversification or the differences between stocks and bonds, and are not fully aware of investment costs and their impact on investment returns. Moreover, based on studies cited in the Library of Congress Report, investors lack critical knowledge about investment fraud. In addition, surveys demonstrate that certain subgroups, including women, African-Americans, Hispanics, the oldest segment of the elderly population, and those who are poorly educated, have an even greater lack of investment knowledge than the average general population. The Library of Congress Report concludes that “low levels of investor literacy have serious implications for the ability of broad segments of the population to retire comfortably, particularly in an age dominated by defined-contribution retirement plans.”
The report goes on to discuss ideas for increasing financial literacy and increasing the transparency of fees and other investment costs.
People sometimes talk about financial literacy as though the goal is helping people choose their own investments. That can be helpful, but the report rightly discusses another goal: improving consumers’ ability to work with financial advisers.
According to Marron, economic data should flag potential issues or anomalies with numbers, as seen here. Clicking on a flagged number would pop up an 'explainer' for that data point. (Donald Marron, Bureau of Labor Statistics (BLS))
Economic data goofs make the case for metadata
Harvard historian Niall Ferguson goofed on Bloomberg TV yesterday. Arguing that the 2009 stimulus had little effect, he said:
The point I made in the piece [his controversial cover story in Newsweek] was that the stimulus had a very short-term effect, which is very clear if you look, for example, at the federal employment numbers. There’s a huge spike in early 2010, and then it falls back down. (This is slightly edited from the transcription by Invictus at The Big Picture.)
That spike did happen. But as every economic data jockey knows, it doesn’t reflect the stimulus; it’s temporary hiring of Census workers.
Ferguson ought to know that. He’s trying to position himself as an important economic commentator, and that role should require basic familiarity with key data.
But Ferguson is just the tip of the iceberg. For every prominent pundit, there are thousands of other people—students, business analysts, congressional staffers, and interested citizens—who use these data and sometimes make the same mistakes. I’m sure I do as well—it’s hard to know every relevant anomaly in the data. As I said in one of my first blog posts back in 2009:
Data rarely speak for themselves. There’s almost always some folklore, known to initiates, about how data should and should not be used. As the web transforms the availability and use of data, it’s essential that the folklore be democratized as much as the raw data themselves.
How would that democratization work? One approach would be to create metadata for key economic data series. Just as your camera attaches time, date, GPS coordinates, and who knows what else to each digital photograph you take, so could each economic data point be accompanied by a field identifying any special issues and providing a link for users who want more information.
When Niall Ferguson calls up a chart of federal employment statistics at his favorite data provider, such metadata would allow the provider to display something like this:
Clicking on or hovering over the “2” would then reveal text: “Federal employment boosted by temporary Census hiring; for more information see link.” And the stimulus mistake would be avoided.
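As a rough sketch of what such per-observation metadata might look like, here is a hypothetical structure in Python; the field names, the employment figure, and the link are my own illustrative choices, not an agency standard.

```python
# Hypothetical per-observation metadata for an economic time series.
# Field names and values are illustrative, not an official format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    date: str                        # e.g. "2010-05-01"
    value: float                     # e.g. federal employment, thousands
    flag: Optional[str] = None       # short, human-readable caveat
    more_info: Optional[str] = None  # link for users who want details

obs = Observation(
    date="2010-05-01",
    value=3_410.0,  # made-up number, not an official figure
    flag="Federal employment boosted by temporary Census hiring",
    more_info="https://www.bls.gov/ces/",  # placeholder link
)

# A data provider could surface the flag on hover or as a numbered note.
if obs.flag:
    print(f"{obs.date}: {obs.value} [note: {obs.flag}]")
```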
I am, of course, skimming over a host of practical challenges. How do you decide which anomalies should be included in the metadata? When should a chart show a single flag for a metadata issue, even when the underlying data carry a flag on each affected data point?
And, perhaps most important, who should do this? It would be great if the statistical agencies could do it, so the information could filter out through the entire data-using community. But their budgets are already tight. Failing that, perhaps the fine folks at FRED could do it; they’ve certainly revolutionized access to the raw data. Or even Google, which already does something similar to highlight news stories on its stock price charts, but would need to create the underlying database of metadata.
Here’s hoping that someone will do it. Democratizing data folklore would reduce needless confusion about economic facts so we can focus on real economic challenges. And it just might remind me what happened to federal employment in early 2010.
A trader works on the floor of the New York Stock Exchange, in this June 2012 file photo in New York City. High-frequency quoting has skyrocketed recently, leading some in the US to think it might be time for a financial non-transaction tax like the one France has implemented. (Mary Altaffer/AP)
The rise of trading quote spam
On Monday, I posted a lovely animated gif from Nanex showing the rise of high-frequency trading. What I failed to mention is that the graph doesn’t show completed trades. It shows quotes.
And according to another nice chart from Nanex, it’s high-frequency quoting that has skyrocketed, not trading.
The number of unexecuted quotes, many of them allegedly never intended to be executed, has thus soared.
France recently took steps to try to deter the rise in quotes. In addition to a financial transactions tax, France will also impose a tax on traders who submit too many unfilled quotes.
In short, France will levy a financial non-transaction tax.
Republican presidential candidate Mitt Romney campaigns at Central Campus High School in Des Moines, Iowa, Wednesday, Aug. 8, 2012. (Charles Dharapak/AP)
Does Mitt Romney really want to raise taxes on the middle class?
The Tax Policy Center’s latest research report went viral last week, drawing attention in the presidential campaign and sparking a constructive discussion of the practical challenges of tax reform. Unfortunately, the response has also included some unwarranted inferences from one side and unwarranted vitriol from the other, distracting from the fundamental message of the study: tax reform is hard.
The paper, authored by Sam Brown, Bill Gale, and Adam Looney, examines the challenges policymakers face in designing a revenue-neutral income tax reform. The paper illustrates the importance of the tradeoffs among revenue, tax rates, and progressivity for the tax policies put forward by presidential candidate Mitt Romney. It found, subject to certain assumptions I discuss below, that any revenue-neutral plan along the lines Governor Romney has outlined would reduce taxes for high-income households, requiring higher taxes on middle- or low-income households. I doubt that’s his intent, but it is an implication of what we can tell about his plan so far. (We look forward to updating our analysis, of course, if and when Governor Romney provides more details.)
The paper is the latest in a series of TPC studies that have documented both the promise and the difficulty of base-broadening, rate-lowering tax reform. Last month, for example, Hang Nguyen, Jim Nunns, Eric Toder, and Roberton Williams documented just how hard it can be to cut tax preferences to pay for lower tax rates. An earlier paper by Dan Baneman and Eric Toder documented the distributional impacts of individual income tax preferences.
The new study applies those insights to Governor Romney’s tax proposal. To do so, the authors had to confront a fundamental challenge: Governor Romney has not offered a fully specified plan. He has been explicit about the tax cuts he has in mind, including a one-fifth reduction in marginal tax rates from today’s levels, which would drop the top rate from 35 percent to 28 percent, and a cut in capital gains and dividend taxes for families with incomes below $200,000. He and his team have also said that reform should be revenue-neutral and not increase taxes on capital gains and dividends. But they have not provided any detail about what tax preferences they would cut to make up lost revenue.
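As a quick arithmetic check on the rate cut, here is a small sketch that applies a one-fifth reduction to the 2012 statutory bracket rates; the post itself cites only the 35-to-28 top-rate change, so the full bracket list is my addition.

```python
# Apply a one-fifth (20%) reduction to the 2012 statutory bracket rates.
# Only the top-rate result (35 -> 28) is cited in the post; the rest of
# the bracket list is included for illustration.
rates_2012 = [10, 15, 25, 28, 33, 35]             # percent
reduced = [round(r * 0.8, 1) for r in rates_2012]
print(list(zip(rates_2012, reduced)))
# [(10, 8.0), (15, 12.0), (25, 20.0), (28, 22.4), (33, 26.4), (35, 28.0)]
```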
As a political matter, such reticence is understandable. To sell yourself and your policy, it’s natural to emphasize the things that people like, such as tax cuts, while downplaying the specifics of who will bear the accompanying costs. Last February, President Obama did the same thing when he rolled out his business tax proposal. The president was very clear about lowering the corporate rate from 35 percent to 28 percent, but he provided few examples of the tax breaks he would cut to pay for it. Such is politics.
For those of us in the business of policy analysis, however, this poses a challenge. TPC’s goal is to inform the tax policy debate as best we can. While we strongly prefer to analyze complete plans, that sometimes isn’t possible. So we provide what information we can with the resources available. Earlier this year, for example, we analyzed the specified parts of Governor Romney’s proposal and documented how much revenue he would have to make up by unspecified base broadening (or, possibly, macroeconomic growth) and how the rate cuts would affect households at different income levels.
The latest study asked a different question: Could Romney’s plan maintain current progressivity given revenue neutrality and reasonable assumptions about what types of base broadening he’d propose? There are roughly $1.3 trillion in tax expenditures out there, but not all will be on Governor Romney’s list. He has said, for example, that raising capital gains and dividend taxes isn’t an option and has generally spoken about lowering taxes on saving and investment. Based on those statements, the authors considered what would happen if Romney kept all the tax breaks associated with saving and investment, including not only the lower rates on capital gains and dividends, but also the special treatment for municipal bonds, IRAs and 401(k)s, and certain life-insurance plans, as well as the ability to avoid capital gains taxes at death (known as step-up in basis). The authors also recognized that touching some tax breaks is beyond the realm of political possibility, such as taxing the implicit rent people get from owning their own homes.
Given those factors, the study then examined the most progressive way of reducing the other tax breaks that remain on the table—i.e., rolling them back first for high-income people. But there aren’t enough of those preferences to offset the benefits that high-income households get from the rate reductions. As a result, a revenue-neutral reform within these constraints would cut taxes at the high end while raising them in the middle and perhaps at the bottom.
What should we infer from this result? Like Howard Gleckman, I don’t interpret this as evidence that Governor Romney wants to increase taxes on the middle class in order to cut taxes for the rich, as an Obama campaign ad claimed. Instead, I view it as showing that his plan can’t accomplish all his stated objectives. One can charitably view his plan as a combination of political signaling and the opening offer in what would, if he gets elected, become a negotiation.
To get a sense of where such negotiation might lead, keep in mind that Romney’s plan is not the first to propose a 28 percent top rate. The Tax Reform Act of 1986 did, as did the Bowles-Simpson proposal and the similar Domenici-Rivlin effort (on which I served). Unlike Governor Romney’s proposal, all three of those tax reforms reflected political compromise. And in all three cases, part of that compromise was eliminating some tax preferences for saving and investment, which tend to be especially important for high-income taxpayers. In particular, all three reforms resulted in capital gains and dividends being taxed at ordinary income tax rates.
TPC’s latest study highlights the realities that lead to such compromises.
This chart shows the percentage of jobs on government payrolls since 1940. The figure has always hovered between 10 and 20 percent, but government jobs now account for a somewhat smaller share of US employment than they did 40 years ago. (Donald Marron/FRED)
Government employment: growing or shrinking?
My recent post on government size prompted several readers to ask a natural follow-up question: how has the government’s role as employer changed over time?
To answer, the following chart shows federal, state, and local employment as a share of overall U.S. payrolls:
In July, governments accounted for 16.5 percent of U.S. employment. That’s down from the 17.7 percent peak in early 2010, when the weak economy, stimulus efforts, and the decennial census all boosted government’s share of employment. And it’s down from the levels of much of the past forty years.
On the other hand, it’s also up from the sub-16 percent level reached back in the go-go days of the late 1990s and early 2000s.
Employment thus tells a similar story to government spending on goods and services: if we set the late 1990s to one side, federal, state, and local governments aren’t large by historical standards; indeed, they are somewhat smaller than over most of the past few decades. And they’ve clearly shrunk, in relative terms, over the past couple of years. (But, as noted in my earlier post, overall government spending has grown because of the increase in transfer programs.)
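For readers who want to recreate the ratio, here is a minimal sketch using pandas-datareader and FRED; the series IDs (USGOVT for total government employment, PAYEMS for total nonfarm payrolls) are my assumption about the underlying data, and the chart above may be constructed somewhat differently.

```python
# Sketch: government share of nonfarm payroll employment, from FRED.
# Assumes the series USGOVT (all government employees) and PAYEMS
# (total nonfarm employees); both are monthly, in thousands.
import pandas_datareader.data as web

start = "1970-01-01"
govt = web.DataReader("USGOVT", "fred", start)["USGOVT"]
total = web.DataReader("PAYEMS", "fred", start)["PAYEMS"]

share = 100 * govt / total  # percent of all payroll jobs
print(share.tail())         # roughly 16-17 percent in mid-2012
```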
P.S. Like my previous chart on government spending, this one focuses on the size of government relative to the rest of the economy (here measured by nonfarm payroll employment). Over at the Brookings Institution’s Hamilton Project, Michael Greenstone and Adam Looney find a more severe drop in government employment than does my chart. The reason is that they focus on government employment as a share of the population, while my chart compares it to overall employment. That’s an important distinction given the dramatic decline in employment, relative to the population, in recent years.
P.P.S. As Ernie Tedeschi notes, this measure doesn’t capture government contractors. So any change in the mix of private contractors vs. direct employees will affect the ratio. That is another reason why spending measures may be more informative than employment figures.
This chart shows how government spending has contributed to economic activity over the past six decades. (Donald Marron, BEA)
Has government gotten bigger or smaller? Both.
Politicians and pundits constantly debate the size of government. Is it big or small? Growing or shrinking?
You might hope these simple questions have simple answers. But they don’t. Measuring government size is not as easy as it sounds. For example, official statistics track two different measures of government spending. And those measures tell different stories:
The blue line in the chart above shows how much federal, state, and local governments directly contribute to economic activity, measured as a share of overall gross domestic product (GDP). If you’ve ever taken an intro economics class, you know that contribution as G, shorthand for government spending. G represents all the goods and services that governments provide, valued at the cost of producing them. G thus includes everything from buying aircraft carriers to paying teachers to housing our ambassador in Zambia.
At 19.5 percent of GDP, G is down from the 21.5 percent it hit in the worst days of the Great Recession. As Catherine Rampell of the New York Times pointed out last week, it’s also below the 20.3 percent average of the available data back to 1947. For most of the past 65 years, federal, state, and local governments had a larger direct economic role producing goods and services than they do today.
There’s one important exception: today’s government consumption and investment spending is notably larger than it was during the economic boom and fiscal restraint of the late 1990s and early 2000s. From mid-1996 to mid-2001, government accounted for less than 18 percent of GDP. Relative to that benchmark, government is now noticeably larger.
The orange line shows a broader measure that captures all the spending in government budgets—all of G plus much more. Governments pay interest on their debts. More important, they make transfer payments through programs like Social Security, Medicare, Medicaid, food stamps, unemployment insurance, and housing vouchers. Transfer spending does not directly contribute to GDP and thus is not part of G. Instead, it provides economic resources to people (and some businesses) that then show up in other GDP components such as consumer spending and private investment.
This broader measure of government spending is much larger than G alone. In 2011, for example, government spending totaled $5.6 trillion, about 37 percent of GDP. But only $3.1 trillion (20 percent of GDP) went for goods and services. The other $2.5 trillion (17 percent) covered transfers and interest.
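As a quick check on those figures, here is a small sketch that backs out the implied GDP and the two spending shares from the rounded numbers in the paragraph above; everything in it is derived from the text, not from the underlying BEA data.

```python
# Back-of-the-envelope check on the 2011 figures cited above.
# All inputs come from the rounded numbers in the text.
total_spending = 5.6   # trillions of dollars, all levels of government
g = 3.1                # goods and services ("G"), trillions
transfers_and_interest = total_spending - g   # 2.5 trillion

implied_gdp = total_spending / 0.37           # roughly 15.1 trillion
print(f"Implied GDP: ~${implied_gdp:.1f} trillion")
print(f"G: {100 * g / implied_gdp:.1f}% of GDP")  # about 20 percent
print(f"Transfers and interest: "
      f"{100 * transfers_and_interest / implied_gdp:.1f}% of GDP")  # about 17 percent
```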
Like G, this broader measure of government has declined since the (official) end of the Great Recession. Since peaking at 39 percent in the second quarter of 2009, it has fallen to 36 percent in the second quarter of 2012.
Also like G, this measure has grown since the boom of the late 1990s and early 2000s. In the middle of 2000, government spending totaled just 30 percent of GDP, a full 6 percentage points less than today.
The two measures thus agree on recent history: government has shrunk over the past three years as the economy has slowly recovered from the Great Recession and government policy responses have faded. But government spending is still notably larger than at the turn of the century.
The story changes, however, if we look further back in time. Although governments spent more on goods and services in the past, total spending was almost always lower. Since 1960, when data on the broader measure begin, total government spending has averaged about 32 percent of GDP. It never reached today’s 36 percent until 2008, when the financial crisis began in earnest.
Much of the recent increase in overall spending is due to the severity of the downturn. But that’s not the only factor. Government’s economic role has changed. As recently as the early 1960s, federal, state, and local governments devoted most of their efforts to providing public goods and services. Now they devote large portions of their budgets to helping people through cash and in-kind transfers—programs like Medicare and Medicaid that were created in 1965 and account for much of the growth in the gap between the orange and blue lines.
Government thus has gotten bigger. But it’s also gotten smaller. It all depends on the time period you consider and the measure you use.
P.S. Keep in mind that this discussion focuses on a relative measure of government size—the ratio of government spending to the overall economy—not an absolute one. Government thus expands if government spending grows faster than the economy and contracts if the reverse is true.
P.P.S. Measuring government size poses other challenges. Eric Toder and I discuss several in our paper “How Big is the Federal Government?” Perhaps most important is that governments now do a great deal of spending through the tax code. Traditional spending numbers thus don’t fully reflect the size or trend in government spending. For more, see this earlier post.
This chart shows how key sectors of the economy fared in the second quarter of 2012. Housing was a surprising bright spot, but overall growth was disappointing. (Donald Marron/Bureau of Economic Analysis)
Economic growth lurches to 1.5 percent
The economy grew at a tepid 1.5% annual rate in the second quarter, according to the latest BEA estimates. That’s far below the pace we need to reduce unemployment.
Weak growth was driven by a slowdown in consumer spending and continued cuts in government spending (mostly at the state and local level), which overshadowed rapid growth in investment spending on housing–yes, housing–and equipment and software:
Housing investment expanded at almost a 10% rate in the second quarter, its fifth straight quarter of growth. Government spending declined at a 1.4% rate, its eighth straight quarter of decline.
A woman places a milk carton on the shelf in a Fivimart supermarket in Hanoi in this June 2012 file photo. A famous study of jam varieties, published in 2000, identified what became known as the Paradox of Choice: the more choices a consumer faces, the lower the chance that he or she will actually buy something. (Kham/Reuters)
The Paradox of Choice: A theory loses favor
Does your brain freeze when offered too many options? Do you put off repainting your bathroom because you can’t bear to select among fifty shades of white (or, for the more adventurous, grey)?
If so, take heart. A famous experiment by psychologists Mark Lepper and Sheena Iyengar, published in 2000, suggests that you are not alone. In supermarket tests, they documented what’s known as the Paradox of Choice. Customers offered an array of six new jam varieties were much more likely to buy one than those offered a choice of 24.
That result defies the narrow notion of rationality used in simple economic models. More choice should always lead to more sales, since the odds are greater that a shopper will find something they want. But it didn’t. On those days, in those supermarkets, with those jams, more choice meant less buying.
This result resonates with many people. I certainly behave that way occasionally. With limited time and cognitive energy, I sometimes avoid or defer choices that I don’t absolutely need to make … like buying a new jam. Making decisions is hard. Just as consumers have financial budget constraints, so too do we have decision-making budget constraints.
Today’s TED Blog provides links and, naturally, videos for a series of studies documenting similar challenges of choice, from retirement planning to health care to spaghetti sauce. All well worth a view.
But how general are these results? Perhaps not as general as the TED talks would suggest. A few years ago, Tim Harford, the Financial Times’ Undercover Economist, noted that some subsequent studies in the jam tradition failed to find this effect:
It is hard to find much evidence that retailers are ferociously simplifying their offerings in an effort to boost sales. Starbucks boasts about its “87,000 drink combinations”; supermarkets are packed with options. This suggests that “choice demotivates” is not a universal human truth, but an effect that emerges under special circumstances.
Benjamin Scheibehenne, a psychologist at the University of Basel, was thinking along these lines when he decided (with Peter Todd and, later, Rainer Greifeneder) to design a range of experiments to figure out when choice demotivates, and when it does not.
But a curious thing happened almost immediately. They began by trying to replicate some classic experiments – such as the jam study, and a similar one with luxury chocolates. They couldn’t find any sign of the “choice is bad” effect. Neither the original Lepper-Iyengar experiments nor the new study appears to be at fault: the results are just different and we don’t know why.
After designing 10 different experiments in which participants were asked to make a choice, and finding very little evidence that variety caused any problems, Scheibehenne and his colleagues tried to assemble all the studies, published and unpublished, of the effect.
The average of all these studies suggests that offering lots of extra choices seems to make no important difference either way. There seem to be circumstances where choice is counterproductive but, despite looking hard for them, we don’t yet know much about what they are. Overall, says Scheibehenne: “If you did one of these studies tomorrow, the most probable result would be no effect.”
In short, the Paradox of Choice is experiencing the infamous Decline Effect. As Jonah Lehrer noted in the New Yorker in late 2010, sometimes what seems to be scientific truth “wears off” over time. And not just in “soft” sciences like the intersection of psychology and economics, but in biology and medicine as well.
Some of that decline reflects selection pressures in research and publishing … and in invitations to give TED talks. It’s easy to get a paper published if it documents a new paradox or anomaly. Only after that claim has gained some mindshare does the marketplace open to research showing null results.
Former Florida Governor Jeb Bush testifies before a House Budget Committee hearing on "Removing the Barriers to Free Enterprise and Economic Growth" on Capitol Hill in Washington in this June 2012 file photo. Flanking Bush are Chris Edwards, director of tax policy studies at the Cato Institute, and Rep. Henry Waxman (D-CA). Tax policy is a complicated problem in America, and one that has yet to be effectively addressed. (Kevin Lamarque/Reuters)
How behavioral science can improve tax policy
In Sunday’s New York Times, Richard Thaler laments that “as a general rule, the United States government is run by lawyers who occasionally take advice from economists.”
That makes for better policy than a tyranny of lawyers alone. But it certainly isn’t enough. Policy is ultimately about changing the way people behave. And to do that, you need to understand more than just economics (as an increasing number of economists, Thaler foremost among them, already recognize).
Thaler thus makes two important suggestions: First, he argues that behavioral scientists deserve a greater formal role in the policy process, perhaps even a Council of Behavioral Science Advisers that would advise the White House in parallel with the Council of Economic Advisers. Second, he urges government to engage in more experimentation so it can learn just what policy choices best drive behavior, and how.
As an example, he cites the efforts of Britain’s Behavioral Insights Team, which was created when David Cameron’s coalition government came to office in 2010.
As its name implies, the team (which he advises) works with government agencies to explore how behavioral insights can make policy more effective. Tax compliance is one example.
Each year, Britain sends letters to certain taxpayers—primarily small businesses and individuals with non-wage income—directing them to make appropriate tax payments within six weeks. If they fail to do so, the government follows up with more costly measures. Enter the Behavioral Insights Team:
The tax collection authority wondered whether this letter might be improved. Indeed, it could.
The winning recipe comes from Robert B. Cialdini, an emeritus professor of psychology and marketing at Arizona State University, and author of the book “Influence: The Psychology of Persuasion.”
People are more likely to comply with a social norm if they know that most other people comply, Mr. Cialdini has found. (Seeing other dog owners carrying plastic bags encourages others to do so as well.) This insight suggests that adding a statement to the letter that a vast majority of taxpayers pay their taxes on time could encourage others to comply. Studies showed that it would be even better to cite local data, too.
Letters using various messages were sent to 140,000 taxpayers in a randomized trial. As the theory predicted, referring to the social norm of a particular area (perhaps, “9 out of 10 people in Exeter pay their taxes on time”) gave the best results: a 15-percentage-point increase in the number of people who paid before the six-week deadline, compared with results from the old-style letter, which was used as a control condition.
Rewriting the letter thus materially improved tax compliance. That’s an important insight, and I hope it scales if and when Britain’s tax authority applies it more broadly.
But there’s a second lesson as well: the benefit of running policy experiments. Policymakers have no shortage of theories about how people will respond to various policy changes. What they often lack, however, is evidence about which theory is correct or how big the potential effects are. Governments on both sides of the Atlantic should look for opportunities to run such controlled experiments so that, to paraphrase Thaler, evidence-based policies can be based on actual evidence.
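As a toy illustration of the kind of analysis such an experiment enables, here is a minimal sketch that estimates the percentage-point lift of a treatment letter over a control letter; the group sizes and payment counts are invented, and only the 15-point lift echoes the figure quoted above.

```python
# Toy evaluation of a randomized letter trial. The counts are invented;
# only the 15-percentage-point lift mirrors the result quoted above.
control = {"n": 70_000, "paid": 35_000}     # old-style letter (hypothetical)
treatment = {"n": 70_000, "paid": 45_500}   # local-norm letter (hypothetical)

rate_control = control["paid"] / control["n"]
rate_treatment = treatment["paid"] / treatment["n"]
lift_pp = 100 * (rate_treatment - rate_control)

print(f"Control payment rate:   {rate_control:.1%}")
print(f"Treatment payment rate: {rate_treatment:.1%}")
print(f"Lift: {lift_pp:.1f} percentage points")   # 15.0
```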