Beat the Press

Beat the Press by Dean Baker

Beat the Press is Dean Baker's commentary on economic reporting. He is a Senior Economist at the Center for Economic and Policy Research (CEPR).

The June jobs report was damn good news. The 850,000 new jobs created were at the high end of what I imagined to be possible. There is a limit to how rapidly businesses can hire. It is easiest when it’s just a matter of recalling workers who have been laid off. But the vast pool of people on temporary layoffs has dwindled. As I pointed out, the share of unemployment due to temporary layoffs had fallen to a level that was normal for a recession. It was 19.0 percent in June, down from a peak of 77.9 percent last April.

We also are looking at a situation in which an extraordinarily large share of the unemployed are long-term unemployed (more than twenty-six weeks). Historically, it has been harder for this group of workers to find new jobs.

For these reasons, it didn’t seem likely that we could have the sort of million plus monthly job growth that we saw last summer. In that context, adding 850,000 jobs in a month is probably about as good as we could hope for.

The strong job growth was associated with strong wage growth, especially for workers in lower-paying sectors. This is consistent with the hard-to-get-good-help story that we are constantly hearing about in the business press. Of course, in most cases it is not really impossible to get more workers (restaurants added 195,000 jobs in June); employers just have to pay more money.

The story in the June data is that workers are getting pay increases, and this is especially the case for workers at the bottom of the wage ladder. The data from the Bureau of Labor Statistics’ establishment survey are not ideal for measuring wage growth for different groups (the Current Population Survey is much better, but its monthly and even quarterly data are very noisy), but we can get a general picture.

The data for the last year are somewhat skewed by composition effects (the lowest paid workers lost their jobs, thereby raising average pay), but if we take the averages for the last two years, with most workers now rehired, the impact of composition changes is more limited. What we see is that average wage growth has been strong over this period, but it has been strongest for the lowest paid workers.

Source: Bureau of Labor Statistics and author’s calculations.

As the chart shows, the average hourly wage for all workers increased at an average annual rate of 4.3 percent. If we look at the average for all production and non-supervisory workers, a category which excludes most higher-paid workers, the average annual increase has been 4.6 percent. It has been even higher in the industries with the lowest pay. The average annual increase in retail has been 5.8 percent, while in the category that includes hotel and restaurant workers it was 6.0 percent.

This is a big deal for these workers. In the case of hotel and restaurant workers, the increases over the last two years come to $1.77 an hour. For someone working a full-time full-year job (many of these workers only work 20-30 hours a week), this would mean a pay increase of more than $3,500 a year.
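
To spell out that arithmetic, here is a minimal sketch; the 2,000-hour figure is a standard approximation for full-time, full-year work (roughly 40 hours a week for 50 weeks), not a number from the report.

```python
# Back-of-the-envelope check of the annual gain for a full-time, full-year worker.
hourly_increase = 1.77     # dollars per hour, the two-year increase cited above
full_time_hours = 2_000    # assumed: ~40 hours/week * 50 weeks

annual_increase = hourly_increase * full_time_hours
print(f"Annual pay increase: ${annual_increase:,.0f}")  # -> $3,540, i.e. "more than $3,500"
```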

Of course, how much this translates into higher living standards will depend on inflation. Inflation over the last two years has averaged 2.6 percent annually. This means that the lowest paid workers still got large pay increases, even after adjusting for the rise in prices. After adjusting for inflation, the average hourly wage for retail workers still rose by more than 6.4 percent over the last two years. For restaurant workers the increase was a bit less than 7.0 percent.
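
For those who want to check the math, here is a rough reconstruction using the rounded growth rates above; the exact figures come from the underlying (unrounded) wage and price data, so treat these as approximations.

```python
# Approximate two-year real wage gains from the rounded average annual rates.
nominal_growth = {"retail": 0.058, "hotel and restaurant": 0.060}  # avg annual nominal wage growth
inflation = 0.026                                                  # avg annual inflation

for sector, g in nominal_growth.items():
    real_gain = (1 + g) ** 2 / (1 + inflation) ** 2 - 1            # compounded over two years
    print(f"{sector}: ~{real_gain:.1%} real gain over two years")
# -> retail roughly 6.3 percent, hotel and restaurant roughly 6.7 percent,
#    broadly in line with the figures cited above.
```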

We will likely see a somewhat slower pace of wage growth once the surge of reopening hiring is over. As pandemic restrictions have ended over the last few months, many businesses rushed to staff up to accommodate more customers. This led to a record number of job openings reported for April. The story is likely to be similar in the May data released this week, but we will probably be through this stretch by the end of the summer.

Also, the normal rise in seasonal demand has added to the difficulty employers face in finding workers. Many hotels and restaurants always add to their staff in the summer months. This is a seasonal effect that is accounted for in our seasonal adjustments. But when this additional hiring coincides with the ending of the pandemic, it makes hiring considerably more difficult. However, when we get to September, the situation is reversed, with seasonal workers being laid off.

On the negative side, this means that the bargaining power that many lower paid workers enjoy at the moment is likely to be eroded quickly. On the plus side, there is less reason to fear that we are seeing the beginning of an inflationary spiral, with higher wages forcing price increases, which then lead to higher wage demands.

Wage Growth and Inflation

It is also important to remember that lower paid workers account for a relatively small portion of the total wage bill. If all workers were seeing 6.0 percent annual pay increases, it almost certainly would lead to higher inflation. But these sorts of pay hikes in the restaurant and retail sector have little impact on overall inflation.

The 4.3 percent average rate of wage growth overall is roughly a percentage point higher than the rates we were seeing before the pandemic, but this can be largely absorbed in a lower profit share and a more rapid pace of productivity growth. A decline in profit shares would just be reversing the rise we saw following the Great Recession. It may be bad news for the stock market, but good news for just about everyone who doesn’t own large amounts of stock.

Productivity growth has been extraordinary since the recession. In the year from the first quarter of 2020 to the first quarter of 2021, productivity rose 4.1 percent. This compares to an average annual rate of just over 1.0 percent in the prior decade. With GDP likely to show an increase of close to 8.0 percent in the second quarter, the rate of productivity growth will again be close to 4.0 percent in the current quarter.
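
The arithmetic behind that guess is simple: productivity growth is roughly output growth minus growth in hours worked. The sketch below assumes hours grow at around a 4.0 percent annualized rate in the quarter; that hours figure is my illustrative assumption, not a number from the data.

```python
# Productivity growth ~= output growth minus hours growth (a rough identity).
gdp_growth = 0.08      # annualized Q2 GDP growth, per the expectation above
hours_growth = 0.04    # assumed annualized growth in hours worked (illustration only)

productivity_growth = (1 + gdp_growth) / (1 + hours_growth) - 1
print(f"Implied productivity growth: ~{productivity_growth:.1%}")  # close to 4.0 percent
```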

Productivity growth is always erratic, especially around recessions. No one expects the economy to sustain anything like a 4.0 percent rate of productivity growth, but businesses were forced to find new ways of operating in the pandemic. Many of these changes led to more efficient ways of doing business. As innovations diffuse more widely, it is very plausible that we will see substantially more rapid productivity growth for at least the next few years. This will allow for more rapid wage growth without inflation.

Trends in Oil

Oil prices are one area that provides some basis for concern in the overall inflation picture. While oil is far less important to the economy than it was in the 1970s, it is still important, and rising oil prices show up as higher inflation in non-obvious places like rent (which often includes utilities) and airfares, which are highly responsive to the price of jet fuel. Of course, higher gas prices are highly visible and likely to be an issue raised in elections.

Oil prices had plummeted during the pandemic, with futures prices actually turning negative, meaning that it was necessary to pay people to commit to taking delivery of oil. Crude prices have been rising consistently since last fall, with the current price hovering near $75 a barrel, roughly the same as the peak levels in the years just before the pandemic.

It’s not clear if this sort of price can be sustained for long. There are many places in the world where oil can be profitably produced at $75 a barrel, but not at $50 or even $60 a barrel. Production was shut down in these areas in the pandemic, but we can expect much of it to come back on line in the next few months.

There is also another factor that could put serious downward pressure on oil prices. If oil producers take seriously the commitments to electric cars and clean energy by the United States and other major consuming nations, then they will realize that they have an asset whose value is likely to plummet in coming years. In that context, it makes sense to try to produce as much as possible while the price is still reasonably high.

Clearly this is not happening now, and most projections show oil demand continuing to rise modestly throughout the decade. But it is possible to imagine that aggressive moves towards clean energy could change this picture and create a climate of fear among oil producers.

Progress, But Not Home Yet

Given the depths to which the economy sank last spring, and the huge surge in coronavirus cases and deaths this winter, you have to be pretty happy with the way things stand now with the economy and the pandemic. In the case of the latter, the daily rate of both cases and deaths is down by far more than 90 percent from the winter peaks. In the states with the highest vaccination rates, the number of cases reported daily is down by more than 99 percent.

We may not see more months of 850,000 job growth, but it certainly is reasonable to believe that we can stay in a range between 500,000 and 700,000 at least through the rest of the year. Since we are still down 6.5 million jobs from before the pandemic started, this means we won’t make up the jobs lost until the winter. It will be even longer until we can get back to the pre-pandemic trend and get back jobs that should have been created over the last year and a half.[1]

Still, the picture looks hugely better than it did six months ago. If Congress can use the summer to pass legislation dealing with longer term problems, like addressing global warming, improving child care and home health care, fixing Medicare, and making health care more affordable generally, the picture will be even better.

[1] We may end up on a somewhat lower trend growth path if, for example, some older workers choose to retire earlier than they had planned before the pandemic. More early retirements are not a bad thing, if they are voluntary, but they do reduce the size of the workforce.

Yes, it is that time of month again. As I always say, this sort of comparison is silly, since there are so many factors determining job growth that have nothing to do with the person in the White House. But, we all know that Trump and the Republicans would be touting this to the sky if the shoe were on the other foot.

So, here’s the latest: the economy has created more than 3 million jobs in the first five months of the Biden administration. It lost almost 2.9 million jobs in the four years of the Trump administration. Biden has now created more jobs than Trump lost.

Source: Bureau of Labor Statistics.

There are lots of silly comments that pass for great wisdom in elite circles. Steve Rattner gave us one of my favorites in his NYT column warning President Biden against putting too much money into reviving our system of train travel.

Rattner tells us:

“America is not Europe, with its dense population centers clustered reasonably close together.”

This is of course true, but in a totally trivial sense. The density of our population per square mile of land is much lower than in Europe, especially if we include Alaska. But this is completely beside the point when it comes to trains. The issue is not building passenger lines from New York to Fairbanks, it’s about connecting cities that actually are reasonably close together.

For example, Chicago is 790 miles from New York. By contrast, Berlin is 670 miles from Paris. If we stretch the trip to Warsaw the distance is over 1000 miles. And, we have many major cities in the Midwest that are closer to New York than Chicago, such as Cleveland, Detroit, and Cincinnati.

In short, if we think about the issue seriously, the difference in population density between the U.S. and Europe should not affect the feasibility of train service in the United States. As a practical matter, we have found it very difficult to build high speed rail for a variety of reasons that Rattner notes. We must address these problems if we are going to have viable passenger train service, but density is simply not the issue.

I know I harp a lot on all the ways we structure the market to redistribute income upward, but that’s because we keep digging in deeper on these policies, and almost no one else talks about it. I get that it’s cool to talk about all sorts of tax and transfer schemes to redistribute some of the money we give to the rich and super-rich. But, I’m one of those old-fashioned sorts who think it’s simpler just not to give them all the money in the first place. So, now that you have been warned, here again is my short list of ways to not give so much money to rich people.

Patent and Copyright Monopolies

The immediate issue that prompts this tirade was a request by President Biden for another $6.5 billion (0.15 percent of the budget) in 2022 to support research into diseases like cancer, diabetes, and Alzheimer’s. I’m not upset at all that the federal government is spending more money on research in these areas.

In fact, I think more federal funding of research into these and other areas of biomedical research is great. The problem is that we can be all but certain that all the breakthroughs that may be realized as a result of this spending will result in patent monopolies that will be very profitable for the companies that are awarded them.

If this is too abstract for people, then think of Moderna, a company that saw its stock price increase more than 1000 percent since the pandemic began, creating more than $80 billion in stock wealth. Obviously, the main reason for this run-up was its Covid vaccine, which was developed almost entirely on the taxpayer’s dime. We can get angry that so many people became millionaires or billionaires on taxpayer funded research, but when we pay for the research and then give the company a patent monopoly, what else did we think would happen?

The alternative is to pay for the research and have it placed in the public domain. This means both that all the findings are fully public, so that other researchers can learn from them and build on them, and that all patents are placed in the public domain. That means that anything developed can be produced as a cheap generic from the day it is approved by the FDA.

With respect to the vaccines, it is also worth mentioning that if we had gone the open-source route, we could have required that all the technology involved in the production process would also be freely shared. One of the problems with increasing production of the vaccines is that, even if we removed patent protection, most manufacturers would not have the necessary technical expertise to begin producing the vaccines immediately. However, if a condition of getting public funding was that this technology would be freely shared, then potential producers anywhere in the world would be able to get technical assistance in setting up their facilities.[1]

Another huge advantage of going the open-source route came up with the FDA’s decision to approve the Alzheimer’s drug, Aduhelm. In approving this drug, the FDA overruled the recommendation of its advisory panel, a step which it rarely takes. The panel argued that the evidence for the drug’s effectiveness was very weak, and there are serious side effects, which means that many patients may be made worse off by taking the drug.

Biogen, the maker of Aduhelm, announced that it would price the drug at $56,000 for a year’s dosage. With over 6 million people suffering from Alzheimer’s, this could mean tens of billions a year in revenue for Biogen, with most of it paid by the federal government through Medicare and Medicaid.

But even beyond the issue of the money, there is also the concern that the FDA’s decision may have been influenced by the lobbying efforts of Biogen. Many researchers get support from Biogen, and it’s hard to believe that their assessment of the drug is not affected by the money they receive. If we took the money out of the equation and were looking at a situation where Aduhelm was going to be produced as a cheap generic, there would be little reason for researchers not to give their honest assessment of the evidence of the drug’s safety and effectiveness. This is a reason that open-source research is likely to lead to better outcomes.

Having cheap drugs and vaccines would not only mean that some of the rich are less rich, it would also raise incomes for everyone who is not benefitting from patent monopolies. If we pay less for drugs, then the real value of everyone’s paycheck goes up. If we had less of a role for patent monopolies, not only for prescription drugs, but also for medical equipment, computers, software, and other technologies, the price of a large set of goods and services would fall sharply, hugely increasing real wages.

 

Downsizing Finance

The United States has a hugely bloated financial sector, which is responsible for many of the country’s great fortunes. This should not be a source of pride.

To restate the Econ 101 definition for the purpose of the financial sector, it is about allocating capital to its best uses. Finance is an intermediate good, like trucking. Unlike final goods, such as housing, medical care, or food, it provides no direct benefit to society. This means that we want the financial sector to be as small as possible, while still being able to serve its purpose.

In fact, the financial sector has exploded relative to the size of the economy over the last half century. The narrow financial sector (commodities and securities trading and investment banking) has quintupled as a share of GDP since the 1970s. If the trucking sector had similarly expanded, all our economists would be complaining about our incredibly inefficient trucking sector. Yet, there seems to be little appreciation of the fact that finance is a huge source of both waste and inequality.

The cost of running this bloated financial sector comes out of the pockets of the rest of us. It takes the form of fees and commissions on trading stock and other assets, fees and penalties assessed by banks and other financial institutions, and fees assessed by private equity partners for managing the assets of pension funds and university endowments.

My favorite quick fix here is a financial transactions tax to downsize Wall Street. We could easily raise more than 0.5 percent of GDP ($60 billion a year) from such a tax, with the revenue coming almost entirely out of the pocket of Wall Street. (To be clear, they will pass on the cost of the tax. They will lose because higher transactions costs will mean less trading, and therefore less revenue for Wall Street.)

We can hugely cut down on the fees earned by banks and bank-like companies both with better regulation and more competition. The best route for the latter would be for the Federal Reserve Board to offer digital accounts to every person and corporation in the country. This route is already being considered at the Fed. It would mean that we would no longer need accounts at traditional banks. We could have our paychecks and bills processed through the Fed at essentially zero cost.

Hedge fund and private equity partners justify their huge paychecks, which often run into the tens, or even hundreds, of millions by the claim that they are getting outsized returns for investors. It turns out that this is not true. In recent years, both hedge funds and private equity funds have typically underperformed the S&P 500. This means that their investors would have been better off just buying an S&P index fund than putting their money in private equity or hedge funds.

It is hard to pass laws that prevent investors from being stupid with their money, but there are things that can be done. In the case of public pension funds, we could have legislation requiring full disclosure of the terms of their contracts with private equity funds (and other investment managers), including the returns received. That way any reporter or interested person could look on the website and see how much money the state’s pension funds were paying some rich private equity types to lose the pension fund’s money.[2]

Universities have been losing large amounts of money paying hedge fund partners (overwhelmingly white males) to manage their endowments. Again, it would be hard to pass laws prohibiting Harvard, Yale, and the rest from throwing away their money, but if there were any progressive students or faculty on these campuses, they might be able to change the practice. After all, there is a reasonable case to be made that it is better to give money for financial aid to low- and moderate-income students than to Wall Street types earning tens of millions a year.

 

Corporate Governance and Super-Rich CEOs

There has been a lot of discussion of the high pay that many CEOs have managed to pocket in the pandemic year. What is largely missing in the debate on CEO pay is that top executives are essentially ripping off the companies they work for.

Specifically, they do not contribute an amount to the corporate bottom line that is commensurate with their pay. The implication is that companies can pay a CEO considerably less money without concern that their profits would suffer. And, lower CEO pay would also mean pay cuts for the whole top tier of corporate executives. Lower pay for top-tier corporate executives would also lead to lower pay for top management in the non-profit and university sector. In short, excessive pay for CEOs should be a big deal.

There is considerable evidence for the claim that CEOs don’t earn their pay. Some of it is cited in chapter six of Rigged. My own contribution to this literature was a paper with Jessica Schieder that looked at what happened to CEO pay in the health insurance industry after the ACA was passed. A provision of the law eliminated the deductibility of executive compensation (all compensation) in excess of $1 million. Since the nominal tax rate at the time was 35 percent, this change effectively raised the cost of CEO pay by more than 50 percent. If corporations were balancing CEO pay against CEOs’ contribution to the company’s bottom line, this change should have unambiguously lowered pay. We tried a wide variety of specifications, controlling for revenue growth, profit growth, stock price appreciation, and other factors. In none of them was there any evidence that CEO pay in the health insurance industry fell relative to pay in other sectors.
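
The "more than 50 percent" figure is just the arithmetic of losing the deduction: with a 35 percent tax rate, a dollar of deductible pay costs the company 65 cents after tax, while a non-deductible dollar costs the full dollar.

```python
# After-tax cost of a dollar of CEO pay, with and without deductibility.
tax_rate = 0.35
cost_deductible = 1 - tax_rate   # $0.65 per dollar of pay when fully deductible
cost_nondeductible = 1.00        # full dollar once the deduction is eliminated

increase = cost_nondeductible / cost_deductible - 1
print(f"Effective increase in the cost of pay: ~{increase:.0%}")  # ~54 percent
```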

If CEOs are ripping off the companies they work for, then shareholders should be allies in the effort to contain CEO pay. This seems an obvious conclusion, but there seems to be very little interest in policies that will increase the ability of shareholders to contain CEO pay. The labor market would look very different if CEOs earned 20 to 30 times the pay of the typical worker ($2 million to $3 million a year) rather than the current 200 to 300 times.  

My favorite mechanism for bringing CEO pay down to earth is to put some teeth into the “Say on Pay” shareholder votes on CEO pay. These votes were required to take place every three years by a provision in the Dodd-Frank financial reform package. As it stands now, there is no consequence when a package is voted down. (Less than 3.0 percent are turned down.)

Suppose that directors lost their stipend (typically around $200k a year) if a CEO pay package was voted down. This would give directors some real incentive to ask questions about whether they could pay their CEO less. Anyhow, there are many other mechanisms that would increase shareholders’ ability to reduce CEO pay, but this is the direction in which we should be thinking.

 

Section 230 and Special Immunity for Mark Zuckerberg

If the New York Times runs a defamatory ad, it can face a large lawsuit for libel. If Mark Zuckerberg runs the same ad on Facebook, he faces no legal liability. Only the person who paid for the ad can be sued. It’s not obvious why we should think that an Internet intermediary bears no liability for spreading defamatory claims but a print or broadcast outlet does.

Clearly far more material is carried over Internet outlets, but that is the choice of the outlets. That doesn’t seem like a good reason to allow them to profit from defaming someone.

The reason Mark Zuckerberg doesn’t face liability and the New York Times does is that Congress gives him and other Internet intermediaries special protection. Section 230 of the 1996 Communications Decency Act protects Internet intermediaries from liability for third-party content. This protects Facebook from being sued both for the ads it carries and for the posts from individual account holders.

We don’t have to give Facebook this special protection. We could make Zuckerberg liable for ads in the same way that the New York Times and CNN are liable for ads that they carry. This means that he would have to scrutinize ads for defamatory material in the same way that traditional media outlets have this responsibility.

We can also make Facebook liable for defamatory posts, just like the New York Times is liable for defamatory statements that appear in letters to the editor. It would of course be impossible for Facebook to screen every post in advance. We can structure the liability to take the form of a takedown requirement after notification, just as is done now with material that is alleged to infringe on copyrights.[3] This will undoubtedly add considerable costs for a company operating on Facebook’s business model, but so what?

I had a series of Twitter exchanges in the last couple of weeks in which several people argued that this sort of change in the law would just benefit Facebook at the expense of smaller competitors since its size would make it better able to absorb the added costs. I had argued for continuing to exempt common carriers, who don’t control content, but we can draw the line somewhat differently.

We can exempt any intermediary that does not either sell advertising or personal information. This would mean that any intermediary that either made its money on a subscription basis or was operated as a public service, would not face liability for third party content. That should provide a substantial advantage to Facebook’s competitors who choose to structure themselves in a way that they could benefit from this protection.     

 

If We Care About Inequality, Maybe We Should Stop Giving So Much Money to the Rich

At this point, I would usually give my tirade about how doctors make so much more money in the U.S. than in other wealthy countries because we protect them from competition, but this is enough for today. The point is that we have structured the market to redistribute an enormous amount of income upward.

I’m a big fan of progressive taxation, but the reality is that it is much easier to not give rich people so much money in the first place than to try to tax it back after the fact. It would be nice if more progressives paid some attention to the ways in which we give the rich money.   

[1] I outline an alternative funding mechanism in chapter 5 of Rigged (it’s free).

[2] We can also pass legislation that cracks down on some of the abusive tactics, like surprise medical billing, that private equity pursues to try to boost returns.

[3] There is a concern that Facebook would be over-zealous in removing items that have been challenged, as has been the case with intermediaries responding to notifications of copyright violations. While this is possible, the penalties for copyright violations are far more severe than those for defamation. Copyright violations carry statutory penalties, so that even trivial infringements that cost the copyright holder just a few dollars can result in thousands of dollars of damages and legal expenses. There is nothing comparable with libel law.

You might think so from reading this NYT article. The piece tells readers that Indonesia had 20,000 positive cases on Thursday. Furthermore:

“. . . and the national percentage of positive Covid tests reached 14.6 percent this past week. By comparison, the weekly positivity rate in the United States is now 1.8 percent.”

The article also reports that a number of doctors and other health care workers have been infected, and several have died, even after getting two shots of Sinovac, one of the vaccines developed by China.

Unfortunately, the piece does not put any of this in a context that is likely to make it meaningful to most NYT readers. First, it would be helpful to point out that Indonesia’s population is 276 million, more than 83 percent of the size of the U.S. population. That means the 20,000 cases reported on Thursday would be equivalent to roughly 24,000 cases in the United States. The infection rate in the United States peaked in late January at more than 250,000 a day, a figure more than ten times as high, adjusted for population.
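
Here is the population adjustment spelled out; the U.S. population figure of roughly 332 million is my assumption (the article gives only the 83 percent ratio).

```python
# Scaling Indonesia's daily case count to a U.S.-equivalent figure.
indonesia_pop = 276_000_000
us_pop = 332_000_000              # assumed 2021 U.S. population
indonesia_cases = 20_000          # Thursday's reported cases

ratio = indonesia_pop / us_pop
us_equivalent = indonesia_cases / ratio
print(f"Indonesia is ~{ratio:.0%} the size of the U.S.")      # ~83 percent
print(f"U.S.-equivalent case count: ~{us_equivalent:,.0f}")   # roughly 24,000
```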

While the high positive rate on tests indicates a large number of infections are going undetected, the U.S. also had a much higher positive test rate when its infection rates were peaking. In mid-January the positivity rate was averaging over 13.0 percent, only slightly lower than the rate in Indonesia.

It also is worth noting that the 20,000 cases reported on Thursday may have been an anomaly. The country reported 18,900 cases today, and its seven-day average is under 15,000.

Of course, Indonesia is much poorer than the United States and is less able to deal with seriously ill Covid patients, but its lack of medical facilities and equipment is clearly the big problem, not the large number of cases.

Also, the report that some number of doctors and health care workers are getting sick, in spite of being vaccinated, is not inconsistent with the vaccine being effective, albeit considerably less effective than the mRNA vaccines developed in the United States and Europe. As the piece reports:

“While 90 percent of the vaccinated doctors who tested positive in Kudus were either asymptomatic or had very mild illness, according to Dr. Ahmad, an already stretched health care system has been pulled taut.”

The piece notes that only 5 percent of Indonesia’s population has been vaccinated. It would certainly be better if the country could get access to the mRNA vaccines, but that is not an option at present. In this context, a Chinese vaccine that protects most people against serious illness is the best available option.

Like most economists, I have always been a Bitcoin skeptic. The question has always been what purpose does it serve?

The idea that it would be a useful alternative currency is laughable on its face. How can you have a currency that fluctuates wildly year to year and even hour to hour? Imagine if you had a wage or rent contract written in Bitcoin. Both your pay and your rent would have more than tripled over the last year, likely leaving you unemployed and unable to pay your unaffordable rent. Economists often exaggerate the problem of inflation, but having a currency that has large and unpredictable increases and decreases in value is a real problem.

So, Bitcoin may not be very useful as a currency, but maybe we can just treat it as an outlet for harmless speculation, like baseball cards or non-fungible tokens. Well, it turns out that Bitcoin is not entirely useless. It is the currency of choice for those engaging in illegal activities like dealing drugs and gun-running, and of course extorting companies with ransomware. (Its value for this purpose took a major hit when the FBI was able to retrieve much of the money paid by Colonial Pipeline to the hackers who infiltrated its system. Apparently, Bitcoin transactions are not as untraceable as advertised.)

But Bitcoin cannot be dismissed as just fun and illegal games; it turns out it is also a major contributor to global warming. Bitcoin mining, the process by which new bitcoins are brought into existence, uses up an enormous amount of electricity. According to an analysis by researchers at Cambridge, Bitcoin mining uses more energy in a year than the country of Argentina.

This means that a lot of greenhouse gases are being emitted for essentially nothing. Most greenhouse gas (GHG) emissions are due to things like heating and cooling our homes, transporting our food and ourselves, and producing our food. These are all real needs. We can find ways to emit less GHG, for example by traveling less or switching to an electric car that hopefully will be fueled by clean energy, but these involve some sacrifice and/or some expense.

Doing with less Bitcoin should be easy by comparison. That is the logic of taxing Bitcoin transactions; we tax the items for which we want to see less.

The Benefits of the Tax

First and foremost, a tax on Bitcoin transactions would raise revenue. I would propose a substantial tax of 1.0 percent on transactions. This compares to the 0.1 percent tax on stock trades that has been put forward by Representative Peter DeFazio in the House and Senator Brian Schatz in the Senate.[1]

The reason for suggesting a higher tax on Bitcoin is that there would be little consequence for the economy if the Bitcoin market were seriously disrupted. People engaged in ransomware attacks might see somewhat more volatility in the value of their payments, and may find it slightly more difficult to change them back into traditional currencies, but otherwise there would be little economic impact.

By contrast, even with all the speculative trading on financial markets, they do still serve a productive purpose, so we would want to be cautious about imposing a tax that could be destabilizing. As it is, a tax of 1.0 percent is hardly without precedent. The United Kingdom currently has a tax of 0.5 percent on stock trades. It had been 1.0 percent until 1986. Nonetheless, the UK had one of the largest stock exchanges in the world.

Clearly a 1.0 percent transactions tax on Bitcoin will not shut down the market. However, it will substantially reduce the volume of transactions. It also is likely to make the currency less attractive to anyone who doesn’t need it for illicit purposes, which will reduce its value. This should mean that people will devote fewer resources to mining Bitcoin, which is a real win for the world.

There is also the issue of how much revenue a Bitcoin tax would raise. Currently, trading volume is around $1 billion a day or $350 billion a year. A tax of 1.0 percent would get us $3.5 billion a year, if there were no decline in trading volume. But, of course, the whole point of the tax is to reduce trading volume and interest in Bitcoin. If we see volume cut in half, due to both less trading and a lower Bitcoin price, then we would raise $1.75 billion a year or $17.5 billion over the course of a ten-year budget horizon.
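The revenue arithmetic is simple enough to lay out explicitly. The sketch below just reproduces the round numbers used in this post; it is not a projection.

```python
# Sketch of the revenue arithmetic for a 1.0 percent Bitcoin transactions tax,
# using the round figures from the post.

daily_volume = 1e9                  # roughly $1 billion a day in trading
annual_volume = 350 * daily_volume  # the post rounds the year to about $350 billion
tax_rate = 0.01                     # 1.0 percent tax on each transaction

static_revenue = tax_rate * annual_volume   # assumes no change in trading
print(f"Static annual revenue: ${static_revenue / 1e9:.2f} billion")       # $3.50 billion

# The whole point of the tax is to shrink the market, so assume the dollar
# volume of trading is cut in half by some mix of less trading and a lower price.
adjusted_revenue = 0.5 * static_revenue
print(f"Adjusted annual revenue: ${adjusted_revenue / 1e9:.2f} billion")   # $1.75 billion
print(f"Ten-year total: ${10 * adjusted_revenue / 1e9:.1f} billion")       # $17.5 billion
```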

This is not huge money in terms of the whole budget. CEPR’s “It’s the Budget, Stupid” budget calculator tells us that it would be equal to 0.03 percent of the total budget. That’s not a huge deal, but it is not altogether trivial. The annual take is equal to roughly 110,000 food stamp person years.

But there is another benefit of going the Bitcoin transaction tax route. We can experiment with enforcement mechanisms with little downside risk.

It is often claimed that financial transactions taxes are unenforceable. The evidence suggests otherwise. The UK raised an amount equal to 0.2 percent of GDP annually (roughly $44 billion in the U.S. economy) from its tax on stock trades. (Other financial assets are not subject to the tax.) There are many other countries in the world that raise substantial revenue from financial transactions taxes.

We also have a modest financial transactions tax in the United States already. Stock trades are subject to a tax of 0.0042 percent. The tax raises roughly $500 million annually, which is supposed to finance the operation of the Securities and Exchange Commission.

Clearly financial transactions taxes are enforceable, but there are certainly many trades that escape taxation. Evasion is likely to be an even bigger problem with Bitcoin, where many of the transactions involve illegal activity.

This is why we have a great opportunity to innovate. In addition to the other mechanisms available for enforcement, we can also offer a reward to people turning in tax evaders. We can, for example, give them 20 percent of the tax collected from their lead.

To take an example, suppose someone trades $200 million in Bitcoin. With a 1.0 percent tax rate, they would owe $2 million. If they chose not to pay their taxes, and an employee reported this person to the I.R.S., the employee would stand to collect $400,000, which would be a pretty nice payday. This sort of reward system would give workers a strong incentive to report the tax evasion of their bosses.
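The arithmetic in this example is worth making explicit; the short sketch below simply restates the numbers in the paragraph above.

```python
# The whistleblower example in code: a 20 percent reward on the tax
# collected from a $200 million Bitcoin trader who evaded a 1.0 percent tax.

trades = 200e6        # $200 million in Bitcoin transactions
tax_rate = 0.01       # 1.0 percent transactions tax
reward_share = 0.20   # tipster's share of the tax collected

tax_owed = tax_rate * trades          # $2 million
reward = reward_share * tax_owed      # $400,000
print(f"Tax owed: ${tax_owed:,.0f}; reward to the tipster: ${reward:,.0f}")
```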

A tax on Bitcoin transactions would be a great place to test run this sort of incentive. Since there is little reason to care if the Bitcoin market is disrupted, there is not really a downside. If the reward system proves effective in cracking down on evasion, we have a new tool that can be applied elsewhere if we choose to tax financial transactions. We also can see any problems that might appear in this system and make the necessary adjustments, so that we are better prepared to apply a financial transactions tax to larger financial markets.

In short, the Bitcoin market gives us a great laboratory for experimenting with financial transactions taxes. There is enough experience, both here and elsewhere, with financial transactions taxes that we can be reasonably confident an FTT can be implemented without great difficulty. But until there is the political will to put in place a broadly based FTT, we can use the Bitcoin market as a place to have a practice tax.

[1] I have proposed a somewhat higher tax on stock trades of 0.2 percent.

It’s short and relatively painless.

An obvious point on the pace of job growth, one that I had not paid attention to in recent months, is that it is easier to recall someone from a layoff than to hire a new worker. (That’s one reason that some of us thought the paycheck protection program was a good idea and that work sharing should be promoted as widely as possible.) Anyhow, as the economy has mostly reopened from the pandemic shutdowns, the number of unemployed workers who report being on temporary layoffs has fallen sharply. Here’s the picture.

Source: Bureau of Labor Statistics.

As can be seen, the number of people reporting that they were on layoffs soared during the shutdown last spring. It peaked at over 18 million in April of last year and then fell sharply through the summer. It was down to 4.6 million by September. It continued to fall through the rest of the year, but it was still at 2.7 million in January. Since then it has dropped by roughly one-third to just over 1.8 million in May.

This is still considerably higher than what we would see in a healthy economy; the figure had been around 1.0 million in 2014 and 2015, and it was under 800,000 just before the pandemic hit. But the figure is not extraordinary for a recession. The number of people reporting they were on temporary layoffs in the Great Recession peaked at just over 1.9 million in September of 2009.

This matters in terms of the pace of job growth that we can reasonably expect going forward. When hiring just means calling back a worker on layoff, we can expect companies to do much more of it than when it means seeking out new workers. This is why it was possible to get job gains well in excess of 1 million a month last summer. But now that the number of people on temporary layoffs has fallen sharply, it is highly unlikely that we will see job gains over, or even near, 1 million a month.

That doesn’t mean we still can’t see very strong job growth. But, in this context, we are probably talking more like 600,000 or 700,000 a month, a bit more than what we saw in May but well below the pace of last summer. That means it will take somewhat longer to get to something resembling full employment, but we can be getting pretty close by next spring.

After Donald Trump’s clown shows, it was nice to have a U.S. president who at least takes world issues seriously while representing the country at the various summits over the last week. But that is a low bar. While we want adults in positions of responsibility, we have to ask where those adults want to take us. It is not clear that we should all eagerly follow the path that President Biden seems to be outlining with regard to China.

Unfortunately, people in the United States (including reporter-type people) tend to have little knowledge of history. Many have no first-hand knowledge of the Cold War with the Soviet Union and have not done much reading to make up for this deficiency. In fact, they also don’t seem to have much knowledge of the Iraq War, which is probably the better place to start here.

 

Target Iraq: Bad Guy Saddam Hussein

When President George W. Bush fixed his eyes on overthrowing Saddam Hussein in the summer of 2002, he decided the rationale was going to be that Hussein possessed or was developing nuclear weapons. This complaint came in spite of the fact that UN weapons inspectors had been in place in the country ever since its defeat in the first Iraq war in 1991.   

The weapons inspectors insisted that they saw no evidence that Iraq was developing nuclear weapons. The inspectors’ assessment was dismissed by the Bush administration and to a large extent by major news outlets. The administration claimed that the restrictions Iraq imposed on inspections, usually ones of timing, made it impossible for the inspectors to get an accurate assessment of the country’s nuclear capabilities.

The Bush administration then went about whipping up its own “intelligence,” supporting the administration’s claim that Iraq was far along in developing a nuclear bomb. Much of its case was complete fabrication, while other parts were very selective presentations of evidence. But they did manage to sell most of the media and the public on the threat of Iraq’s nuclear weapons.

However, the Bush administration also had a fallback to assuage many liberal types who had qualms about overthrowing a foreign government. The fallback was that Saddam Hussein was a really bad guy.

They had a very good case on this one. Hussein routinely imprisoned or executed political opponents or critics. He had invaded two of his neighbors (Iran in 1980 and Kuwait in 1990) and he persecuted domestic minorities within Iraq, most notably the Kurds and the Shiite population.

No one could seriously want to defend Hussein’s practices as the ruler of Iraq, but that didn’t mean that overthrowing him was a good policy. It may still be too early to pass a final judgement, and we can never know a counterfactual. At this point, it would be difficult to claim that things have changed for the better for the people of Iraq and the region as a result of the U.S. invasion.

Anyhow, the Hussein-as-bad-guy story is important for our current policy toward China. We can point to the country’s repressive measures against internal dissidents. We can also point to the repression directed towards its Uighur population in Western China, as well as its belligerence towards its neighbors in making claims on territorial waters. These and other actions can be used to show that China is far from a model democracy that respects the rights of its own citizens, as well as international law.

But this issue is really beside the point. The question from the standpoint of U.S. policy is how any of our actions can be expected to improve the situation. Specifically, if we adopt a confrontational stance towards China, involving economic measures and a beefed-up military presence in the region, is there reason to believe that the country will improve its behavior in the areas that we care about?

My guess is that the answer is no. Perhaps those with more expertise on China can make a strong case that China’s government would change its behavior in response to a more confrontational approach by the United States, but that really shouldn’t be the issue. It doesn’t make sense to have confrontation as a feel-good approach.     

Unfortunately, that seems to be the current path. It is also worth noting in this respect that China was hardly a model of human rights and democracy when the Clinton administration pushed to have it admitted to the WTO at the end of the 1990s. At the time, anyone who raised human rights and labor issues as a reason not to further open trade with China was denounced as a Neanderthal protectionist. We were told that somehow, by buying clothes, shoes, and other items produced with low-cost Chinese labor, we would turn the country into a liberal democracy. Guess that claim is no longer operative.[1]

 

Using the Cold War to Justify Otherwise Unjustifiable Policies

In the days of the first Cold War, the U.S. government pursued many policies, both foreign and domestic, that would be hard to justify without the threat of the Soviet Union. On the domestic side, we had a range of policies, by both the government and private companies, to crack down on alleged communists and Soviet sympathizers.

These included loyalty oaths where people had to swear that they were not members of the Communist Party to get government jobs. This often kept not only people who were actual party members from getting jobs, but also people who sympathized with many of the party’s stated goals, like promoting civil rights and avoiding nuclear war.

The 1947 Taft-Hartley Act required unions to force all officers to sign affidavits saying that they were not communists in order to be eligible for recognition through a National Labor Relations Board certified election. Many of the most committed labor organizers were in fact members of the Communist Party, so they were thrown out of the labor movement if they refused to sign this pledge. In other cases, committed organizers refused to go along with this demand even if they were not themselves actually party members. On the private side, we had the Hollywood blacklist, where a large number of screenwriters and actors were prevented from working for much of their careers.

Internationally, the United States had numerous interventions around the world that had little or nothing to do with combatting the Soviet Union. Just to take two prominent ones: we overthrew the democratically elected government in Iran in 1953 and installed a brutal dictatorship. The issue was not communism or the Soviet Union. The issue was that our oil companies wanted access to Iranian oil.

In another case, the U.S. overthrew an elected government in Guatemala in 1954. Again, this had nothing really to do with the Soviet Union; the United Fruit Company was unhappy about its banana plantations being taken in a land reform program.

The list of interventions could be extended at great length, but the point is that the U.S. government used the Soviet threat to justify policies designed to serve powerful corporate interests that would be very difficult to rationalize without this threat. In addition, we spent enormous sums on the military, which meant large profits for military contractors.

A New Cold War against China could be used in the same way. Needless to say, we can justify pretty much endless military spending based on the need to meet the China threat. Many people don’t seem to realize the absurdity of trying to spend China into the ground, as some would claim we did with the Soviet Union. While the Soviet Union’s economy peaked at around half of the size of the U.S. economy, China’s economy is already almost 20 percent larger than the U.S. economy, and will be around 80 percent larger by the end of the decade.     

If the goal of an arms race is to spend China into the ground, it is more likely we would spend ourselves into the ground. The burden of a major arms buildup would be much greater on the U.S. than on China, although, just as in the first Cold War, it would make lots of military contractors rich.

 

The Implications of the New Cold War for Domestic Policies

There are other aspects to the prospect of Cold War-type competition that are equally pernicious. Last week the Senate passed a bill that would provide $250 billion over the next decade in research spending, ostensibly to help us compete with China. (The $25 billion in annual spending comes to 0.4 percent of the total budget, which you can find out quickly with CEPR’s “It’s the Budget, Stupid” budget calculator.)

The idea of boosting public spending on R&D is a good one, but we need to ask some serious questions about who gets the benefits. Operation Warp Speed gave us a great model for the benefits of public spending, while at the same time showing us the potential for skewing of the gains.

Moderna probably shows the issue most clearly. The federal government fully funded the development and testing of its vaccine. Yet, it gave the company a patent monopoly, which allows it to restrict the distribution of the vaccine and charge prices far above the free market price. As a result, Moderna’s stockholders and its top executives have made billions of dollars, effectively profiting off of the government’s investment.

We could structure public contracts differently. For example, we could require that all innovations derived from government research be placed in the public domain, so that anyone with the necessary expertise could manufacture them. In some cases, this could involve going deeper downstream in the development process than is intended in the bill approved by the Senate, but there is no reason that the funding could not be used to cover the full costs of developing a product, as was the case with Moderna.[2]

Unfortunately, it looks like the funding in this bill will follow the Moderna model. The government puts up the money and takes the risk, while private corporations will be able to gain patent and copyright monopolies, which will allow them to garner a disproportionate share of the gains. In a context where we are supposed to be concerned about the distribution of income, this looks like a huge step in the wrong direction.

Some people have supported this sort of investment with the idea that it will bring manufacturing jobs back to the United States and therefore reduce inequality. Unfortunately, this is a view that has not kept pace with the data. Historically, manufacturing has been a source of good paying jobs for workers without college degrees. However, the wage premium in manufacturing has largely disappeared over the last three decades.

To take a very simple measure, the average hourly wage for production and non-supervisory workers in manufacturing was 5.7 percent above the average for the private sector as a whole in 1990. In the most recent data (May 2021), the wage for production workers in manufacturing was 8.1 percent lower than in the private sector as a whole. This comparison is incomplete, since it doesn’t capture the value of benefits, which tend to be higher in manufacturing, nor does it control for education, experience, and other factors, but it is clear that the premium is substantially smaller than it had been in prior decades.[3]
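To be clear about what this premium measures, the sketch below shows the calculation: the percentage gap between the manufacturing wage and the private-sector average for the same class of workers. The wage levels in the example are hypothetical placeholders, not BLS figures; only the method is the point.

```python
# How the manufacturing wage premium is computed: the percentage by which the
# average hourly wage for production and non-supervisory workers in manufacturing
# exceeds (positive) or trails (negative) the same measure for the private
# sector as a whole. The wages below are hypothetical, for illustration only.

def wage_premium(manufacturing_wage: float, private_sector_wage: float) -> float:
    """Return the manufacturing premium in percent (negative means a penalty)."""
    return (manufacturing_wage / private_sector_wage - 1.0) * 100.0

# Illustrative example: a manufacturing wage a bit below the overall average
# produces a premium of roughly -8 percent, the kind of gap described above.
print(f"{wage_premium(23.50, 25.57):.1f} percent")
```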

The reason for the deterioration in the quality of manufacturing jobs is not a secret. The unionization rate in manufacturing has plummeted, largely due to trade, as well as aggressive anti-union measures by employers. In 1990, more than 20 percent of workers in manufacturing were unionized. In 2020, just 8.5 percent of workers in manufacturing were unionized. That is only slightly higher than the 6.3 percent average for the private sector as a whole.

Furthermore, more manufacturing jobs have not meant more union jobs. Until the pandemic hit in March, we had added back more than 1.6 million manufacturing jobs from the Great Recession trough in 2010. Nonetheless, the number of union members in manufacturing had fallen by almost 900,000. As we added back jobs in the sector, they were overwhelmingly lower-paying non-union jobs.

The Biden administration hopes to change this story by pressing government contractors to be neutral on workers’ decision to unionize. Hopefully this effort will prove successful, but it would have the same benefit for workers if employers in other areas, like health care, transportation, and warehousing could be pressured to be neutral in organizing drives.

The historic link between manufacturing jobs and unions has largely disappeared, and there is not an obvious reason to put any special effort into bringing it back. We want jobs to be union jobs, in every sector of the economy. When manufacturing disproportionately had union jobs, increasing manufacturing jobs might have meant increasing union jobs. This is no longer true.

In this respect it is also worth noting that manufacturing jobs continue to be overwhelmingly male. There is no obvious reason that we should focus on improving the quality of jobs held by men, while neglecting jobs held disproportionately by women. The idea that a Cold War stance toward China will be a big positive for the working class as a whole is simply wrong.

The Cooperative Alternative

As I have argued in the past, we should look to cooperate with China in the areas where it will provide both countries with clear benefits. The most obvious areas for such cooperation are health care and climate change. Both countries, and in fact the whole world, would benefit from the sharing of technology in these areas. We would all benefit from having new technology in health care and clean energy distributed as quickly and widely as possible.

This cooperation should mean open-source research where all findings are fully open. This would both allow for the most rapid possible progress and also have an equalizing effect on income distribution. Top researchers should be well-paid, but there is no reason to believe that they need to be motivated by payoffs in the tens or hundreds of millions, or even billions, of dollars.

Instead of furthering the upward redistribution of the last four decades, open-source research in major areas of the economy would likely redistribute downward. If the price of patent-copyright protected items fell to the free market price, it would effectively raise the real wages of workers.

To take the most important example, we are currently spending over $500 billion a year on prescription drugs. If these drugs sold in a free market without patents or related protections, they would likely cost us less than $100 billion. The savings of $400 billion comes to roughly $3,000 a year for every household in the country. (The actual savings would be somewhat less since we would likely have to increase public funding of research by $50 to $100 billion a year.) There would also be huge savings on medical equipment and a wide variety of other areas where public funding was substituted for patent monopoly funding.  
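The per-household figure follows from simple division. In the sketch below, the spending figures come from the paragraph above; the 130 million figure for the number of U.S. households is my rough approximation.

```python
# Rough arithmetic behind the per-household drug savings figure.
current_spending = 500e9   # over $500 billion a year on prescription drugs (from the post)
free_market_cost = 100e9   # the post's rough estimate of spending without patent protection
households = 130e6         # my rough approximation of the number of U.S. households

savings = current_spending - free_market_cost   # about $400 billion a year
per_household = savings / households
print(f"Gross savings per household: ${per_household:,.0f} a year")   # roughly $3,000

# The net figure would be somewhat smaller, since public research funding would
# have to rise by $50 to $100 billion a year to replace patent-financed research.
```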

A policy that focuses on cooperating with China, where we can, is likely to produce the best results from both a foreign policy and domestic economic perspective. Our resources will be far better used on fighting climate change and disease than on trying to intimidate China militarily. And, if we adopt policies that almost seem as though they are designed to redistribute income upwards, we should not be surprised if we end up with more inequality.

Unlike Trump, President Biden is a serious person, but he also can be seriously wrong. Putting us on a path towards a new Cold War with China would be a disastrous mistake. We should do everything possible to keep Biden from going this route.  

[1] For the young ones out there, “no longer operative” was the line that Richard Nixon’s press secretary, Ronald Ziegler, used to refer to all the claims he had made proclaiming Nixon’s innocence in the Watergate coverup, after the release of White House tape recordings showing that Nixon was in the middle of the coverup from the beginning.

[2] I outline a mechanism for doing this in chapter 5 of Rigged [it’s free].

[3] Larry Mishel found a 7.8 percent wage premium for non-college-educated workers for the years 2010-2016 in an analysis that controlled for age, gender, education, and other factors. This compares to a premium of 16.7 percent for college-educated workers. The premium would be somewhat higher if non-wage compensation were included. However, since the average hourly wage for production and non-supervisory workers in manufacturing has been falling relative to the average hourly wage in the private sector as a whole, the premium is almost certainly considerably smaller in 2021 than the average for 2010-2016. It is also likely that the gap in benefits has fallen, as non-unionized workers in manufacturing are less likely to have health insurance and pensions than unionized workers.

In a New York Times column today, William Cohan, a writer and former investment banker, warned of impending disaster if the Federal Reserve Board does not quickly move away from its low interest rate policy. Cohan tells readers:

“But many people wonder if Jerome Powell, the chairman of the Fed, has reckoned with the power of the easy-money monster the central bank spawned all those years ago. They worry that Mr. Powell has helped inflate bubbles in housing, lumber, copper, Bitcoin and stocks, bonds and other assets. The evidence is mounting: The Consumer Price Index, a gauge of inflation, rose 5 percent in May from a severely depressed number a year earlier — the fastest rate in nearly 13 years. And that’s just one worrisome indicator.”

The piece goes on to warn of all the horrible things to come if Powell does not soon start to raise interest rates. The basic story is that we have bubbles in many markets that will collapse, costing investors in these bubbles hundreds of billions, or even trillions, of dollars. [It is worth noting that lumber prices have been plummeting in the last few weeks.]

If this sounds familiar, that might be because Mr. Cohan had a very similar column in the NYT a couple of years ago. In August of 2019, Cohan warned that “only the Fed can save us now,” and urged Fed Chair Jerome Powell to stand up to Trump and raise interest rates. The piece begins:

“Here’s the moment I realized the next financial crisis is inevitable.”

We find out that this moment came as he listened to a speech in which Powell indicated that he had no plans to raise interest rates.

Later Cohan explains:

“But although a sense of euphoria spread through the room, as well as through debt and equity markets, I was overcome by a sense of dread. A decade of historically low interest rates has begun to warp our economy. As we learned to our collective horror during the 2008 financial crisis, a period of sustained low interest rates forces investors on a desperate search for higher yields, inflating asset prices and the risks of owning loans and debt of all kinds.”

To be clear, there are undoubtedly many bubbles in the U.S. economy right now. Bitcoin is the most obvious example, but we also have the proliferation of non-fungible tokens, as well as many stock prices that bear no relationship to plausible estimates of future earnings.

But none of this sets the stage for a 2008-09 disaster. In the years leading up to the Great Recession, the housing bubble was driving the economy. This was easy to see for anyone who bothered to look at the GDP data the Commerce Department publishes every quarter. Residential construction had increased from its normal rate, which is around 3.5 percent of GDP, to a peak of 6.8 percent of GDP. Similarly, soaring house prices led to a consumption boom, as people spent based on the bubble-created equity in their homes.

When the bubble collapsed, housing construction fell to less than 2.0 percent of GDP due to overbuilding during the bubble years. There was nothing to replace the loss of 4.8 percentage points of GDP in annual demand (more than $1 trillion in today’s economy). Similarly, the bubble-driven consumption boom collapsed, costing us another 3-4 percentage points of GDP in demand.
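
As a rough check on these magnitudes (the $22 trillion figure for current GDP is my approximation, not a number from the GDP release):

\[
6.8 \text{ percent of GDP} - 2.0 \text{ percent of GDP} = 4.8 \text{ percentage points of GDP}
\]
\[
0.048 \times \$22 \text{ trillion} \approx \$1.06 \text{ trillion in lost annual demand}
\]

Adding the 3-4 percentage points of lost bubble-driven consumption brings the total hole in demand to roughly 8-9 percent of GDP, which is why the downturn was so severe and so hard to reverse.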

This was the story of the Great Recession. The financial crisis was just the market working its magic on the banks and other financial institutions that had been reckless in issuing and marketing loans.

If we were to see a similar collapse in asset prices today, it would have no comparable impact on the real economy. If the roughly $1 trillion market capitalization in the digital currency market went to zero tomorrow, how would that affect the real economy? Some Bitcoin millionaires and billionaires would be very unhappy, but so what? There would be no economy-wide plunge in consumption. The same is true with asset prices in other markets.
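
To put a rough number on this: wealth effects on consumption are generally estimated at just a few cents of spending per dollar of wealth (the 3-to-5-cent range here is my assumption, not a figure from Cohan’s column or the national accounts):

\[
0.03 \text{ to } 0.05 \times \$1 \text{ trillion} = \$30 \text{ to } \$50 \text{ billion in lost consumption}
\]

That is roughly 0.1 to 0.2 percent of GDP, a headwind the economy could absorb, not a rerun of the 2008-09 collapse in demand.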

Could this tank some banks that made bad loans? Sure, that’s what markets are for. Companies that are not competent are supposed to go out of business. The idea that this will create some financial crisis where we can’t undertake normal business transactions has zero foundation in reality.

In short, Mr. Cohan’s piece is just irresponsible fearmongering. There apparently is a big market for this stuff, but that is little reason to take it seriously.

 

Addendum

I should have mentioned that William Cohan’s columns forecasting disaster are a regular feature in the NYT; see here, here, and here.

