That’s what he told us in his column today, because he sure didn’t make much of an argument. Lane cites several recent papers showing that the minimum wage has no negative effect on employment (including a paper by my colleague John Schmitt). He then notes that these studies could be right, but he also refers to research by David Neumark of the University of California, Irvine and William Wascher of the Federal Reserve that shows the last minimum wage hike (from $5.15 an hour in 2007 to $7.25 in 2009) lowered employment of young people by 300,000.
He then warns that if their research is right, and we push the minimum wage too high, then we could be hurting the people we are trying to help. He also points out that even if the research showing no employment effect is right, we would still be hurting other workers by pushing up prices or compressing wages. He then proposes spending more money on the earned income tax credit (EITC) as an alternative to a higher minimum wage.
Okay, let’s have some fun here. Lane’s bad story is that 300,000 fewer workers would be employed. That sounds really awful; after all, these are the people we are trying to help. But let’s think about this one for a moment. The jobs we are talking about tend to be high-turnover jobs that workers hold for relatively short periods of time. The research that Lane is depending on shows that at any point in time 300,000 fewer workers will be employed as a result of a minimum wage hike of more than 40 percent. In effect, this means that workers will on average have to spend more time between jobs looking for work.
More than 3 million workers were in the affected wage band between the old and new minimum wage. If we assume that on average they worked 10 percent less (the 300,000 job loss) and that their average hourly wage gain was 20 percent (half of the wage increase) then on average these workers will net roughly 8 percent more in pay each year (120 percent * 90 percent), while working 10 percent fewer hours. Pretty awful story, huh? And that’s based on the research that finds a negative effect on employment.
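The back-of-the-envelope arithmetic is easy to check. Here is a minimal sketch, using the assumptions stated above (a 20 percent average hourly wage gain and 10 percent fewer hours):

```python
# Rough check of the net-pay arithmetic above; both inputs are the
# assumptions stated in the text, not independent estimates.
hourly_wage_factor = 1.20  # assumed average hourly wage gain of 20 percent
hours_factor = 0.90        # 10 percent fewer hours (300,000 jobs out of 3 million)

net_pay_factor = hourly_wage_factor * hours_factor
print(round((net_pay_factor - 1) * 100))  # roughly 8 percent more pay per year
```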
As for the rest of Lane’s story, yes, the higher pay for minimum wage workers comes mostly out of other workers’ pockets. (Some comes from profits and some comes from increased productivity.) This is true of everyone’s pay. If protectionists did not dominate national policy, we could import more doctors, bring our doctors’ pay in line with pay in other wealthy countries, and save other workers close to $100 billion a year in health care costs. But the money that goes out of workers’ pockets to support the excess pay of doctors, Wall Street bankers, or CEOs doesn’t concern Lane, only the money that goes to pay custodians, retail clerks, and dishwashers.
But the best part is the idea that the EITC is somehow free. In fact we need government revenue to pay the EITC, which requires taxes. Taxes will also come out of workers’ pockets and also have a distorting effect on the economy. In addition, there are also costs associated with administering the EITC. While it does not have nearly the level of fraud claimed by its critics, clearly some portion of the money is paid out improperly. And, low-wage workers often have difficulty dealing with tax returns. Many throw hundreds of dollars in the garbage paying tax preparation services in order to claim their EITC.
In short, Lane doesn’t really have much of a case against a higher minimum wage even if we accept his bad story about job loss. And, he seems to have imagined that there is an alternative costless way to get more money to low-paid workers.
There is one final point worth noting in the context of proposals to increase the minimum wage. From its inception in 1938 to 1969, the minimum wage rose in step with economy-wide productivity growth. If we had continued this policy over the last four decades, the minimum wage would be $16.50 an hour. Even if the minimum wage is raised to $9.00 an hour, minimum wage workers would get none of the benefits of economic growth over the last four decades.
Addendum:
Charles Lane wrote to tell me that I had misrepresented the Neumark estimate of the employment impact of the last minimum wage hike. Neumark was only referring to the impact of the last phase of the increase (which was phased in over three years), from $6.55 to $7.25, a rise of 10.7 percent, not the increase from $5.15 that I had referred to in my initial note.
I should have looked at his reference in the column. I’ll admit that I have not taken Neumark’s work on the minimum wage seriously since he uncritically took data from the fast food industry lobby to try to argue the case that the minimum wage caused unemployment. It turned out that the industry had cooked the data. When Neumark used data that was independently collected he found the same result as everyone else, the minimum wage did not increase unemployment.
But, even if we take Neumark’s numbers at face value, we still don’t get much of a horror story. A rise of 10.7 percent means that the average gain would be around 8.9 percent. (To see the logic, imagine that hourly earnings were originally distributed evenly between the prior minimum wage of $5.15 an hour and the new minimum wage of $7.25 an hour. After two rounds of minimum wage hikes, two-thirds of the workers are now sitting at $6.55 an hour or close to it. The remaining third are evenly distributed across the remaining band. This would mean that two-thirds of the affected workers would receive the full 10.7 percent increase, while the remaining third would see an average hike of 5.4 percent. This gives an average increase of 8.9 percent.)
Neumark’s estimate would then imply workers are on average putting in 10 percent fewer hours and taking home 2 percent less money. This is based on the assessment of an economist who has devoted a career to trashing the minimum wage. Can’t say that sounds like a horror story.
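For the record, the addendum’s arithmetic also checks out. A quick sketch, using the stylized two-thirds/one-third distribution described above:

```python
# Average wage gain under the stylized distribution in the addendum:
# two-thirds of affected workers get the full 10.7 percent hike, while
# the remaining third averages half of it (5.4 percent).
avg_gain_pct = (2 / 3) * 10.7 + (1 / 3) * 5.4
print(round(avg_gain_pct, 1))  # about 8.9 percent

# Combine that average wage gain with 10 percent fewer hours:
net_pay_factor = (1 + avg_gain_pct / 100) * 0.90
print(round((1 - net_pay_factor) * 100))  # about 2 percent less take-home pay
```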
This one is well-deserved. The Post got the George W. Polk award for Medical Reporting for the series “Biased Research, Big Profits” by Peter Whoriskey. It was a well-researched and reported series. I take back 17 of the bad things I’ve said about the WAPO. I’m not commenting on how many that leaves.
It is really easy and apparently fun for some people to use scary numbers about health care costs. The trick is to take numbers over a long period of time that are not adjusted for inflation or income growth. Of course no normal person has any idea what their income will look like in nominal dollars 50-60 years out, so you can scare people to death with this sort of stupid trick.
That is what David Goldhill, the chief executive of GSN, did in an op-ed in the NYT. He told readers about a newly hired 23 year-old at his company who is earning $35,000 a year:
“I have estimated that our 23-year-old employee will bear at least $1.8 million in health care costs over her lifetime.”
Do any NYT readers have any idea what this $1.8 million figure means either in today’s dollars or as a share of this worker’s lifetime income? The answer is almost certainly no. It is unlikely that even 1 percent of NYT readers (I know they are highly educated) has any clue what $1.8 million means over this worker’s lifetime.
The question then is why did the NYT let Goldhill use the number? He surely could have used a standard discount rate and converted it into 2013 dollars. Alternatively he could have expressed the number as a share of the worker’s lifetime income. The NYT was incredibly irresponsible to let Goldhill just include this $1.8 million number with no context.
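For readers who want to see what converting the figure into today’s dollars would involve, discounting is simple arithmetic. This is only an illustration; the 3 percent discount rate is my own assumption, not a figure from the column:

```python
# Minimal sketch of present-value discounting; the 3 percent annual
# discount rate is an illustrative assumption, not a figure from the column.
def present_value(future_amount, annual_rate, years):
    """Value today of a payment received `years` from now."""
    return future_amount / (1 + annual_rate) ** years

# A dollar of health spending 40 years out is worth much less today:
print(round(present_value(1.0, 0.03, 40), 2))  # about 0.31
```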
It is probably also worth noting that this recipe for curing health care costs would be quickly dismissed by anyone familiar with current expenses. He wants to restrict insurance to catastrophic care (will he arrest people for providing normal insurance?), but he seems to have missed the fact that the overwhelming majority of health care costs fall into this category. His plan may deter people from getting necessary check-ups and preventive care, but would have little impact on the costs that are driving up the country’s health care bill.
He apparently is also unfamiliar with the experience with health care costs in other countries, which pay an average of less than half as much per person as the United States, while getting comparable health outcomes. The U.S. would be looking at large budget surpluses rather than deficits if per person health care costs were comparable to those in other countries.
The Post had a very nice lead front page story; too bad its editors probably won’t see it.
Dylan Matthews has an interesting column discussing former M.I.T. professor Stanley Fischer’s career in the context of the possibility of him replacing Ben Bernanke as Fed chair in the fall. There are a couple of important items that are not mentioned in this discussion.
First, Matthews notes the central role that Fischer played in the I.M.F.’s resolution of the East Asian financial crisis. While this discussion might lead readers to believe the resolution was a success, this crisis actually marked a turning point that led to the major imbalances of the next decade.
Prior to the crisis there were substantial capital flows from rich countries to poor countries, as textbook economics would predict. However, as an outcome of the crisis, developing countries began to accumulate massive amounts of foreign exchange reserves, presumably to avoid ever being in the situation the East Asian countries faced when they had to deal with the I.M.F. in the crisis.
This led to a huge rise in the value of the dollar and large trade deficits. The gap in demand created by the trade deficit with developing countries was filled in the United States by the housing bubble. The predictable outcome of this situation was the collapse in 2007-09, which is likely to cost the country close to $10 trillion in lost output before the economy fully recovers.
This raises the more general point that Fischer is one of the pillars of the school of thought that central banks should target 2.0 percent inflation and otherwise do nothing. If it is in principle possible for an economic theory to be refuted by evidence, this view of the optimal monetary policy has been decisively discredited.
These items may affect how people would view Stanley Fischer’s qualifications as a candidate for Fed chair.
The piece also gets one other important item wrong. It contrasts the ability of Israel (where Fischer now runs the central bank), as a small country, to devalue its currency with that of the United States, the holder of the world’s reserve currency.
“If Bernanke halved the value of the dollar relative to, say, the Chinese yuan, that would dramatically increase U.S. exports and probably economic growth, too, but it would also wreak havoc with the global financial system. Every dollar-denominated asset in the world, including all manner of bonds, would plummet in value.”
Actually this is very far from being the case. Most holders of dollar-denominated assets are not hugely interested in the value of their assets measured in yuan. (Quick, how many yuan is your 401(k) worth?) While the repercussions of a large fall in the value of the dollar against one or more major currencies are certainly greater than those of a fall in the Israeli shekel, it is not at all obvious that a major reduction in the dollar’s value would have disastrous consequences. In fact, over time such a decline is virtually inevitable.
He may well be right. His story is that the yield on junk bonds is currently lower than the earnings yield on stock. Irwin tells readers:
“The stock market’s earnings yield is 6.6 percent, which is actually higher than the 6.1 percent that junk bonds are yielding. Buyers of junk bonds are tolerating lots of risk and not even being compensated. That suggests a market that is somehow out of whack. And there’s a quite plausible case that the Federal Reserve’s quantitative easing policies are part of the story. With the Fed buying billions of Treasury bonds and mortgage backed securities, those who would normally buy those assets have to buy something else. But it’s easy to imagine that this doesn’t affect all assets equally. Investors normally inclined to buy bonds may not be willing to move that money into stocks, but will buy junk bonds, even if the prices seem unfavorable.”
The big story here is that last sentence:
“Investors normally inclined to buy bonds may not be willing to move that money into stocks, but will buy junk bonds, even if the prices seem unfavorable.”
Okay, so we have people controlling funds with billions or even tens of billions of dollars who can’t figure out that they should move from junk bonds to stocks even when current prices suggest that the stocks provide a much better risk/return trade-off. Given that almost all of these people were buying into the stock market in the late nineties, when price to earnings ratios crossed 30, and that almost none of them saw the housing bubble in the last decade, Irwin’s observation is entirely plausible.
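As a point of reference, the earnings yield is just the inverse of the price-to-earnings ratio, so the late-nineties prices implied a far worse deal than the 6.6 percent figure Irwin cites for today:

```python
# Earnings yield is the inverse of the price-to-earnings (P/E) ratio.
def earnings_yield(pe_ratio):
    return 1 / pe_ratio

# At a P/E of 30 (late-1990s levels noted above), the earnings yield is
# about 3.3 percent, half the 6.6 percent Irwin cites for today's market.
print(round(earnings_yield(30) * 100, 1))  # about 3.3
```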
This does raise the question of why the people who manage these funds earn many hundreds of thousands of dollars a year, and often many millions. If a fund manager just holds bonds rather than stocks out of habit, then this person clearly has few skills. Rather than paying someone millions of dollars to lock a fund into virtually guaranteed losses, isn’t it possible to find some high school kid who could do the job for the minimum wage? After all, if we don’t expect people who manage funds to have any investment skills, why are the jobs so highly paid?
Okay, I made up that number, but suppose that I did calculate the amount of money that the average holder of government bonds gets in interest each year and compared it to what we spend on children. According to the logic they use at the Urban Institute (as recounted by Ezra Klein), I would have demonstrated a tendency for our government to favor bondholders at the expense of our nation’s children.
The sophisticates out there would surely point out that bondholders paid for their bonds and therefore are entitled to the interest they get on these bonds. Bravo!
Now if anyone with the same level of sophistication entered the halls of the Urban Institute, they could point out that we run an old age, survivors, and disability insurance program through the government (Social Security) as well as a senior health insurance program (Medicare). The fact that people collect benefits from these programs reflects the fact that they paid premiums during their working lifetimes — just like bondholders get interest because they paid for their bonds.
In fact, as the Urban Institute has shown, on average Social Security beneficiaries will get slightly less back in benefits than what they paid into the program in premiums. Medicare beneficiaries will get more back, but this is because we pay way more money to our doctors, drug companies and other health care providers than any other people on the planet. In other words, the big gainers here are the providers, not our seniors.
Anyhow, the comparison of payments to seniors with payments to children makes as much sense as comparing payments to bondholders with payments to children. It is understandable that people who want to cut Social Security and Medicare would make such comparisons (or cut interest payments to bondholders), but it is hard to see why anyone engaged in honest policy debate would take such comparisons seriously.
Robert Rubin is best known as the man who pocketed more than $100 million as a top Citigroup honcho as it played a central role in pumping up the housing bubble that sank the economy. However, because of the incompetence (corruption?) of the Washington media, he is much better known as a great hero of economic policy.
Ezra Klein helps to feed this myth when he tells us of the great virtue of deficit reduction in the Clinton years.
“Back in the 1990s, we knew why we feared deficits. They raised interest rates and “crowded out” private borrowing. This wasn’t an abstract concern. In 1991, the interest rate on 10-year Treasurys was 7.86 percent. That meant the interest rate for private borrowing was, for the most part, much higher, choking off investment and economic growth.
“Enter Clintonomics. The theory was simple: Bring down deficits, and you’d bring down interest rates. Bring down interest rates, and you’d make it easier for the private sector to invest and grow. Make it easier for the private sector to invest and grow, and the economy would boom.
“The theory was correct. By the end of Clinton’s term, the interest rate on 10-year Treasurys had fallen to 5.26 percent — lower than it had been in 30 years. And the economy was, indeed, booming. ‘The deficit reduction increased confidence, helped bring interest rates down, and that, in turn, helped generate and sustain the economic recovery, which, in turn, reduced the deficit further,’ Treasury Secretary Robert Rubin said in 1998.”
Okay, fans of intro economics know that it is the real interest rate — the difference between the nominal interest rate and the inflation rate — that matters for investment, not the nominal interest rate. The inflation rate in the first half of 1991 was over 5.0 percent. This means that the real interest rate in 1991 — the rate that all economists understand is relevant for growth — was around 2.5 percent.
Is that bad? If we take the last half year of the Clinton administration (and not some cherry-picked low point), the interest rate on 10-year Treasury bonds averaged around 5.7 percent. The inflation rate for the second half of 2000 averaged around 3.5 percent. This gives us a real interest rate of 2.2 percent (5.7 percent minus 3.5 percent equals 2.2 percent).
So we are supposed to believe that the difference between the 2.5 percent real interest rate in the high-deficit pre-Clinton years and the 2.2 percent real interest rate at the end of the Clinton years is the difference between the road to hell and the path to prosperity? This is the sort of nonsense that you tell to children. It might pass muster with DC pundits, but serious people need not waste their time.
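For readers who want to check the arithmetic, here is a minimal sketch of the real-rate comparison above. The 1991 inflation figure is described in the text only as "over 5.0 percent," so the 5.3 percent used below is an illustrative placeholder, not an official number:

```python
def real_rate(nominal_pct, inflation_pct):
    """Approximate real interest rate: nominal rate minus inflation."""
    return nominal_pct - inflation_pct

# 1991: 10-year Treasury at 7.86 percent, inflation "over 5.0 percent"
# (5.3 percent is an assumed illustrative value)
pre_clinton = real_rate(7.86, 5.3)   # roughly 2.5 percent

# Second half of 2000: 5.7 percent nominal, 3.5 percent inflation
end_clinton = real_rate(5.7, 3.5)    # 2.2 percent

print(round(pre_clinton, 1), round(end_clinton, 1))
```

The gap between the two periods is a few tenths of a percentage point, which is the point of the comparison.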
The real story of the boom of the Clinton years was an unsustainable stock bubble. This led to a surge in junk investment like Pets.com. It led to an even larger surge in consumption. People spent based on their stock wealth, pushing the saving rate to a then-record low of 2.0 percent (compared to an average of 8.0 percent in the pre-bubble decades).
Robert Rubin acolytes may not like it, but the deficit reduction was a minor actor in the growth of the 1990s. The bubble was the real story. That may not be a smart thing to say if you’re looking for a job in the Obama administration, but it happens to be the truth. You have to really torture the data to get a different conclusion.