Earlier this week, John Schmitt and I released a CEPR report on the rise of "bad jobs" over the past three decades. The new report is a follow-up to "Where Have All the Good Jobs Gone?", which CEPR released in July. Together, the two papers are like looking at two sides of the same depressing coin.
We define a bad job as one that pays less than $37,000 per year, lacks employer-provided health insurance, and has no employer-sponsored retirement plan. In 2010, the most recent year for which data are available, about 24 percent of U.S. workers were in a bad job, up from 18 percent of workers in 1979. The share of women in bad jobs only increased about 1 percentage point between 1979 and 2010; for men, there was a 10 percentage-point increase over the same period. But, at every point in the last 30 years, women were still more likely than men to be in a bad job.
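The definition above can be sketched as a simple classifier. This is a minimal illustration of the three criteria, not code from the paper; the function and parameter names are hypothetical.

```python
def is_bad_job(annual_pay, has_health_insurance, has_retirement_plan):
    """A job counts as 'bad' under this definition only if it fails
    all three tests: pay under $37,000/year, no employer-provided
    health insurance, and no employer-sponsored retirement plan."""
    return (annual_pay < 37_000
            and not has_health_insurance
            and not has_retirement_plan)

# A low-paying job with no benefits counts as bad...
print(is_bad_job(30_000, False, False))  # True
# ...but the same pay with health coverage does not.
print(is_bad_job(30_000, True, False))   # False
```

Note that a job must fail on all three dimensions to count as bad; a low-wage job with benefits, or a well-paid job without them, does not.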
The increase in bad jobs occurred even as the workforce, on average, became more educated and experienced -- just the opposite of what we would not just hope for, but expect, from the economy.
The economy added 96,000 jobs in August, roughly the pace needed to keep even with the growth of the labor force, according to the Bureau of Labor Statistics' latest employment report. The unemployment rate also dropped to 8.1 percent, but this was entirely due to a drop in the labor force, as reported employment in the household survey edged downward. There was more negative news: the jobs numbers for June and July were also both revised down by roughly 20,000, bringing the three-month average to 94,000, and the employment-to-population ratio (EPOP) dropped 0.2 percentage points to 58.3 percent.
But there were a couple of bright spots. The percentage of unemployment due to people voluntarily quitting their jobs, a measure of workers’ confidence in the labor market, rose to 7.5 percent, putting it near its levels from last winter. Also, the number of workers involuntarily working part-time, as well as the number of discouraged workers, both fell, pushing the broad U-6 measure of labor market slack to its lowest point since January of 2009. Undoubtedly, some seasonal factors depressed August’s jobs number. While September will likely tell a better story, job growth is barely fast enough to keep pace with the growth of the labor force.
For a more in-depth analysis, check out the latest Jobs Byte.
My colleague John Schmitt provided an excellent overview of my recent paper about organized labor in the United States and Canada, “Protecting Fundamental Labor Rights: Lessons from Canada for the United States.” In that post, John laid out the basic argument – that there are two key differences between the United States and Canada that have allowed the unionization rate in Canada to remain stable over the past half century while it has plummeted in the United States. The first of these differences is the process by which workers form unions in the two countries, which is what I want to take a closer look at in this blog post. In a second post in the days to come, I'll look at the other key difference – how bargaining impasses are handled after the initial formation of a union.
Both the U.S. and Canada have two different processes for forming unions – mandatory elections and card check. Under mandatory elections, unless an employer voluntarily recognizes a union, employees who want to form a union at their workplace must file a petition for an election with the labor board – a government agency that enforces the collective bargaining laws – showing support from a minimum share of the workforce (35 percent in the United States and 35-45 percent in Canada, depending upon the province). This is usually done with signed authorization cards, and employees, or unions acting on their behalf, will typically gather much more support than this before filing the petition – usually around 65 percent. The labor board, after verifying the cards against the employer's payroll records, will schedule and hold an election in which employees can vote on whether or not to form a union.
Under card check, the process is more streamlined. If employees are able to show majority support (ranging from 50-65 percent), the labor board will simply verify the signed authorization cards, and, if there is the required majority of the proposed bargaining unit, the board will certify the union. If there is less than the required level of support, the labor board will schedule and hold an election. In both mandatory elections and card check, once a union has been certified by the labor board, the employer is required to recognize the union and both the employer and the union are required to “bargain in good faith.”
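The decision logic described above can be summarized in a short sketch. This is an illustration only: the thresholds vary by jurisdiction (35-45 percent to file in Canada, 50-65 percent majorities under card check), and the function name and defaults are hypothetical.

```python
def certification_path(support_share, regime,
                       filing_threshold=0.35, majority_threshold=0.50):
    """Return the next step for a proposed bargaining unit.

    support_share: fraction of the unit with verified signed cards.
    regime: 'card_check' or 'mandatory_election'.
    Thresholds are illustrative defaults; actual levels depend on
    the country and, in Canada, the province.
    """
    if support_share < filing_threshold:
        return "petition rejected"
    if regime == "card_check" and support_share >= majority_threshold:
        # Board verifies the cards against payroll records and
        # certifies the union with no vote needed.
        return "union certified"
    # Otherwise the board schedules and holds an election.
    return "election scheduled"

print(certification_path(0.65, "card_check"))          # union certified
print(certification_path(0.40, "card_check"))          # election scheduled
print(certification_path(0.65, "mandatory_election"))  # election scheduled
```

The key difference between the regimes falls out directly: the same 65 percent showing of support yields immediate certification under card check but only an election under the mandatory-election regime.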
I largely agree with Leonhardt's conclusion as stated: “If you’re trying to understand why every income group except for the affluent has taken an income cut over the last decade, you probably shouldn’t put the minimum wage at the top of your list of causes.”
But, I worry that much of the piece will give readers the wrong impression about the minimum wage as a policy to fight inequality.
(1) That the minimum wage is not *at the top* of the list for explaining inequality does not mean that it is not an *important* determinant of inequality.
The minimum wage is just one of a constellation of policies that have pushed inequality up over the last three decades -- high unemployment (not just the Great Recession, but most of the last 30 years, excluding 1996 to 2000), declining unionization rates, pro-corporate trade deals, deregulation of many previously well-paying industries, privatization of many state and local government jobs, a dysfunctional immigration system, poor enforcement of existing labor laws, and others. The common thread running through all of these policies is that they have all served to undermine the bargaining power of workers relative to their employers. Different policies have had effects on different parts of the workforce (by wage level or race or gender) at different times. But they have uniformly acted to pull the bottom out of the labor market.
Given all these forces pushing in the same direction, it would be odd if the minimum wage were at the top of the list.
My CEPR colleague, Kris Warner, has a new paper on what we can learn about labor law here in the United States from the experience of our neighbors in Canada. The whole paper is worth a read, but I particularly like two of the graphs. The first shows that Canada and the United States were on a very similar unionization path from about 1920 through the 1960s. At that point, unionization rates in the two countries diverged sharply.
The second graph presents the overall union coverage rates in each of the U.S. states and Canadian provinces. All but one of the Canadian provinces lie entirely above all of the U.S. states. Only New York and Alaska edge out the least unionized Canadian province of Alberta.
Kris emphasizes two important differences between Canada and the United States. The first is that, in the private sector, it is generally much easier for workers to form a union in Canada, primarily because most workers there can form a legally recognized union based on collecting verified signatures. In the United States, of course, even after workers sign cards asking for recognition, they then face a second hurdle in the form of a National Labor Relations Board election (unless the employer decides to recognize the union based on the collected cards). As John Logan and others have documented (pdf), an entire “union avoidance” industry of consultants and lawyers has arisen in the United States over the last several decades to intimidate workers during the NLRB election process, and this industry has generally been highly effective.
The second key structural difference between Canada and the United States, says Kris, is that Canadian workers can rely on first-contract arbitration to ensure that they will secure a contract after legal recognition. In the United States, even after workers win an election, they only reach a contract in a bit over half of cases (see John-Paul Ferguson's excellent paper (pdf)).
Along with former Senator Alan Simpson, Erskine Bowles has become known to much of the public as the co-chair of President Obama’s deficit commission. The two of them produced a report that is viewed by many in the media and leading Democrats in Congress as providing the basis for a “Grand Bargain” on a long-term deficit reduction package.
However, in addition to his duties on President Obama’s deficit commission, Erskine Bowles also has a day job. In fact, he has many of them. Over the last decade, Mr. Bowles has sat on a large number of corporate boards. This is in addition to serving as president of the University of North Carolina from 2005 to 2010 and to his unsuccessful runs for a U.S. Senate seat in North Carolina in 2002 and 2004.
Some of the companies for which Mr. Bowles served as a director have gained considerable notoriety in recent years. He served on the board of Krispy Kreme, the upstart doughnut company that was briefly a Wall Street darling. He also sat on the board of General Motors from June of 2005 until it went into bankruptcy in the spring of 2009. He joined the board of Morgan Stanley, the Wall Street investment bank, near the peak of the housing bubble in December of 2005. He remains on its board today. He also joined the board of Facebook in September of last year.
The paper’s authors, CEPR Senior Economist John Schmitt and Research Assistant Janelle Jones, also wrote a series of posts for the CEPR Blog. John wrote this post on good jobs by education level. John also penned this post, which uses the data to debunk the technological change story. Janelle wrote this post on good jobs and gender. She also wrote this one looking at employer-sponsored retirement plans as well as this one examining employer-provided health insurance. University of Iowa History Professor Colin Gordon posted this interactive graph on the CEPR blog summarizing the study’s findings.
CEPR on Honduras

CEPR’s Senior Associate for International Policy Alex Main teamed up with Rights Action’s Annie Bird and Karen Spring to release “Collateral Damage of a Drug War,” based on the authors’ investigation in Honduras into the May 11, 2012 deaths of four people in a DEA-related counternarcotics operation in the Moskitia region. The authors conducted extensive interviews with survivors, eyewitnesses, and U.S. and Honduran government officials, finding inconsistencies between survivor accounts and the statements of government officials.
In his RNC speech last night, Rick Santorum claimed that the U.S. poverty rate would be close to zero if all of us here in the land of the free just did three simple things: (1) worked full-time, year-round (for every year of our entire working life); (2) graduated from high school (regardless of the quality of that education), and (3) got married (regardless of the quality of that marriage).
Absent massive government investments in publicly subsidized jobs combined with a federal mandate that Americans never get sick or disabled, it is hard to imagine how Santorum's simple thing number 1 is achievable. (And, if we let Americans in Charles Murray's "lower tribe" have children, then we need big new investments in child care—Santorum, I know, is a big fan of the former, to the point of making childbearing mandatory after conception, but not so much the latter.)
So let's just focus on the other two: finishing high school and universal marriage. A quick glance at the educational and marital demographics of poverty is all it takes to dismiss Santorum's arguments here.
First, marriage. As the table below shows, nearly two-thirds of working-age adults (25-64, the same age range used by Santorum) with incomes below the meager federal poverty line either are currently married or have been married. Over one in three are currently married and not separated. So marriage is clearly no panacea.
Working-Age Adults (25-64) with Below-Poverty Income by Marital Status, 2010
- Currently or Previously Married
  - Currently Married but Separated
  - Divorced or Widowed
Second, finishing high school. As the table below shows, the vast majority of adults with below-poverty incomes finished high school, and more than one in three have education beyond that. Some 2.3 million working-age adults with college degrees had below-poverty incomes in 2010.
Working-Age Adults (25-64) with Below-Poverty Income by Educational Attainment, 2010
- High School Grad or Higher
  - High School Grad Only
  - HS plus Some College
  - Bachelor's or Higher
- No High School Diploma
Millions of married Americans with high school diplomas and beyond live below the poverty line today. Scolds like Santorum deny this reality and blame individual Americans for their economic struggles because they don't want to acknowledge that the real responsibility lies with failed conservative economic policies, and the incredible economic mismanagement of, among others, Robert Rubin, Alan Greenspan, and Ben Bernanke.
I remember being struck several years ago by David Brooks' odd use of the adjective "disorganized" to describe single-parent families. As he put it in the New York Times in 2007: "A human capital agenda ... means preserving low income-tax rates .... [and] creating high-quality preschools for children from disorganized single-parent homes." I'm all for high-quality preschools, although for reasons that have little to do with either home organization or single-parenthood. Personally, I think pre-K should be universal, and shouldn't be limited to children living in single-parent households whose parents aren't able to keep their closets and counters clutter-free. But I guess if, like Brooks, your main goal is preserving low taxes for the 1 percent, you might think this kind of rationing of pre-K is necessary.
I suspected at the time that "disorganized single-parent" was simply a more genteel and NYT-reader-friendly way of saying what Charles Murray once said at a Capitol Hill symposium about single mothers: "There is a dirty little secret about the problem of out-of-wedlock births to poor women. The dirty little secret is that very large numbers of them are rotten mothers." That, of course, is something that one can say in the editorial pages of the Wall Street Journal or other Murdoch papers, but not in the Grey Lady.
The recent work of CEPR’s John Schmitt and Janelle Jones shines a harsh spotlight on the dramatic decline in “good jobs” over the last generation. In Where Have All the Good Jobs Gone?, Schmitt and Jones show that the share of good jobs (defined by an earnings threshold of $18.50/hr and the job-based provision of health coverage and a retirement plan) has fallen—even as the age and educational attainment of the workforce has risen.
The interactive graph below summarizes their findings: Select any combination of demographics (all workers, women, men) in the upper right; and any combination of “good job” elements (earnings, health coverage, retirement plan) in the bottom pane. Start with the earnings threshold for all groups. Here we see decent gains for women, although no more than we might expect given gains in labor productivity and educational attainment over the same span. The share of men at this threshold, by contrast, falls—from about 57.5 percent in 1979 to about 54.5 percent in 2010. Selecting a single demographic underscores the contributing factors. For men, declining pension and health coverage (combined with flat earnings) led to a steep decline in the good job share (from 37.5 percent in 1979 to 27.7 percent in 2010). For women, health coverage has fallen less dramatically (from a lower starting point), and pension coverage is pretty flat—and together they have dampened but not erased the gains in earnings (yielding a modest increase in the share of good jobs, from 12.4 percent to 21.1 percent).
Sixteen years ago on Wednesday, President Bill Clinton surrendered to House Speaker Newt Gingrich and signed NewtAid, legislation that replaced the Social Security Act's Aid to Families with Dependent Children (AFDC) with a right-wing block grant scheme called Temporary Assistance for Needy Families (TANF). NewtAid/TANF was lauded as a success before it had even been fully implemented.
It is now clear that TANF is a failed program that needs to be overhauled. NewtAid's failure can be seen most simply by comparing the number of children living below the federal poverty line in 1992 and 2010. (These years are compared because 2010 is the most recent year we have child poverty numbers for, and 1992 is, like 2010, the first calendar year after the end of a recession.)
Number of Children Living in Families with Incomes Below the Federal Poverty Line
1992: 15.3 million
2010: 16.4 million
In sum, just over 1 million more children lived below the poverty line in 2010, more than a decade after TANF's implementation, than in 1992, before TANF was implemented. If AFDC had remained in place and been reformed along progressive lines, the number of children living in poverty would be much lower today than it was before NewtAid.
AFDC was far from a perfect program—especially after Reagan-era budget cuts limited the support it provided for working parents—but it was one of the dependable pillars of our system of social insurance (although nowhere near as strong a pillar as Social Security). Instead of strengthening AFDC, NewtAid tore it down and replaced it with a radical conservative scheme that has:
Given states incentives to spend billions of dollars in public funds—funds that had previously been used to promote economic security and opportunity for low-income parents—in an unaccountable and often irresponsible manner. In fact, the bulk of public funds available to states under TANF today are not spent on child care, employment services, or helping families meet basic needs. Instead, states have diverted the bulk of funds to "other services." In the most notorious cases, states have diverted TANF funds to finance unaffordable tax cuts.
Failed to provide adequate information on how states use TANF funds. As GAO has found, we know very little about how states are using 70 percent of the funds provided by the $16.5 billion block grant, including not just "results" but things as basic as how many families are being helped with these funds.
Deeply cut the actual amount of resources available through AFDC/TANF to help struggling, working-class families. Because block grant funding has remained frozen at its 1997 level, its actual value (adjusted for inflation) has fallen by nearly 30 percent.
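The roughly 30 percent figure follows from simple arithmetic: a grant frozen in nominal terms loses purchasing power at the rate of cumulative inflation. A minimal sketch, using an approximate cumulative inflation figure of 40 percent for 1997-2012 (an illustrative assumption, not a precise CPI calculation):

```python
# Illustrative arithmetic for the block grant's real-value decline.
# The grant has been frozen at $16.5 billion since 1997; the cumulative
# price-level increase used here (~40%) is an approximation.
nominal_grant = 16.5          # billions of dollars, unchanged since 1997
cumulative_inflation = 0.40   # approximate price growth, 1997-2012

real_value = nominal_grant / (1 + cumulative_inflation)
decline = 1 - real_value / nominal_grant
print(f"Real value: ${real_value:.1f}B, a {decline:.0%} decline")
```

Under this assumption, the $16.5 billion grant is worth only about $11.8 billion in 1997 dollars, a decline of nearly 30 percent.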
Of course the HAMP and various derivative programs were complicated. They were made even more so by the fact that mortgage servicers really weren't set up to do modifications. The servicer process was set up to minimize costs by establishing standardized procedures. These outfits knew how to collect monthly payments, send out late notices, start default proceedings, and carry through foreclosures. They had specified payments for each step in this process. Working out loan modifications was not on the list, so it's not surprising that they didn't do a very good job, especially when we factor in complications like second mortgages.
There was an easier route. Early on in the crisis (August of 2007), I proposed the first version of my Right to Rent plan. The basic point was very simple. It gave homeowners the option to stay in their home as renters for a designated period of time following foreclosure (at least 5 years in my ideal world), paying the market rent.
This would accomplish several important goals. First and foremost it would instantly give homeowners facing foreclosure a substantial degree of housing security. If they had kids in school, the rental period would likely be long enough to let them finish. It would also give them some time to get on their feet financially and make arrangements for appropriate housing.
Second, it would prevent a blight of foreclosures from ruining whole neighborhoods. Renters who effectively have long-term leases have almost as much incentive to maintain the property as owners. At the least they would keep homes occupied so that they would not be eyesores and possible havens for crime.
Third, this right would give banks more incentive to find ways to modify mortgages to keep people in their homes as owners. Banks would generally prefer having a foreclosed house free and clear rather than being a landlord for 5 years. By making the foreclosure option less attractive, Right to Rent would make banks more willing to consider alternatives.
Right to Rent would not have required any massive bureaucracy. It could have been handled within the same legal framework in which foreclosures were handled. The main cost would be the payment for appraisers who would determine the market rent on a foreclosed home. This is the same process that appraisers follow when determining the market price of a home for someone seeking a mortgage.
Right to Rent also raised few of the moral hazard or political issues associated with various plans to have the government pay down debt. The former homeowner hardly gets a bonanza in this story. They get a right to stay in their home that they would not have otherwise had, in recognition of the extraordinary circumstances homeowners faced in the peak years of the housing bubble.
I was impressed that many moderate conservatives, such as former Bush administration economist Andrew Samwick and American Enterprise Institute economist Desmond Lachman, considered Right to Rent a reasonable solution to the housing crisis. Even Fox business anchor Neil Cavuto sent me a note saying that he agreed with the idea.
Of course, President Obama would have had to get Congress to pass such a measure. Would they have done it in early 2009? Who knows? There was a huge amount of goodwill directed toward the new president, who had sky-high approval ratings at the time. Certainly it would have been difficult to mount a Santelli-type rant-fest against Right to Rent. After all, there were no taxpayer dollars involved.
In this area, as in so many others, the Obama administration was unwilling to really push anything new. Perhaps Right to Rent wouldn't have worked (we can still try it), but it certainly could not have done worse than the alternatives.
CEPR released a new report on Wednesday in conjunction with Rights Action on the circumstances and aftermath related to the May 11 shooting incident in Ahuas, Honduras, involving the Honduran police and DEA agents. Four local members of communities in this part of the Moskitia region were killed in the episode, and four others shot and injured. As was reported by the Associated Press in May, residents of the nearby village of Paptalaya were subsequently besieged by armed men whom residents described as wearing U.S. Army-style uniforms and speaking to each other in English.
CEPR’s Senior Associate for Policy Analysis Alexander Main traveled to Ahuas and Paptalaya, along with Rights Action’s Annie Bird and Karen Spring, and others, to investigate. During their July trip, they interviewed numerous survivors and eyewitnesses to the traumatic events, as well as U.S. Embassy officials and Honduran authorities. They also examined evidence, and talked to legal experts regarding the current progress, challenges and faults with the Honduran government’s delayed and flawed investigation into the incident. Their findings are the basis of the new 54-page report, “Collateral Damage of a Drug War: The May 11 Killings in Ahuas and the Impact of the U.S. War on Drugs in La Moskitia, Honduras.” It provides what is probably the most detailed account of the events so far. Among its key findings:
• U.S. Embassy officials contradict what State Department officials had previously stated about the DEA's role in the operation. Whereas State had said the DEA played a "supportive role only," both the former head of the DEA for Honduras, Jim Kenney, and U.S. Ambassador to Honduras Lisa Kubiske told the report's authors in separate conversations that Honduran police in these operations respond in practice directly to DEA officials. In addition, many eyewitnesses say it was North Americans, in uniforms with US flags on them, who were in the middle of everything, and that it was North Americans who besieged the Paptalaya village, holding residents at gunpoint and assaulting some of them. This would also contradict the “supportive role only” description.
This week, the Department of Labor (DOL) posted an invitation to states to apply for federal funds to promote work-sharing (officially called "short-time compensation") programs. The total amount available is almost $100 million, with the largest amount -- over $11.5 million -- available for California (see this chart for how much each state could get).
Only states that have work-sharing programs that fit the new federal definition in the Middle Class Relief and Job Creation Act of 2012 (Act) can apply for these grants. The Act places states into 3 categories:
states with existing work-sharing programs that fit the new definition
states with existing programs that don't conform to the new definition
states that don't have work-sharing programs
The federal grants to promote work sharing are divvied up for 2 purposes:
1/3 for implementation or improved administration of their programs -- such as upgrading processing systems
2/3 for promotion and enrollment activities -- such as outreach to and education of employers about work-sharing
Since work sharing is voluntary on the part of employers, publicity and outreach by states is key to improving participation rates. These grants should help jumpstart such efforts. DOL provides a handy application checklist and sample quarterly progress report, along with other useful information about applying. If states fail to take advantage of these grants and the federal reimbursements, they'll be leaving significant funding on the table at a time of tight state budgets.
The Labor and Employment Relations Association is accepting proposals for "stimulating, creative, and controversial panels, symposia, workshops and papers" related to the theme of the organization's 65th annual meeting, "The Future of Work." Submissions can be from different disciplines – including but not limited to economics, sociology, political science, labor and employment law, industrial relations, and human resource studies – and different stakeholder perspectives, including investors, managers, employees, policymakers and unions.
Complete details about topics and how to submit a session proposal, paper or poster abstract can be found on the LERA website or contact
with questions. The deadline is October 5. The annual meeting will be held in St. Louis, MO, June 6-9, 2013.
The graph shows that over the last three decades the share of college-educated workers with what we define as a good job -- one that pays at least $37,000 a year and comes with both employer-provided health insurance and an employer-sponsored retirement plan -- has actually declined slightly.
The graph also shows that over the same period, the drop in good jobs was even steeper for those with less education. But, what we and most of those commenting have focused on is the decline in good jobs experienced even by the best-educated part of the workforce. This group has “done the right thing” by the labor market, but is, nevertheless, now less likely to have a good job than was the case back in 1979.
Given the interest in “good job” trends by education, we thought we'd produce similar graphs showing the trends in the share of workers that separately meet each of the three underlying criteria. These graphs give a fuller picture of the forces driving the fall in good jobs within the education categories.