Review. Retrospective analysis and review in the US: why most Agencies do not apply Cost-Benefit Analysis properly

 


ABSTRACT. Given that President Obama’s Executive Orders on regulation have emphasized the importance of retrospective analysis and review of existing federal rules, I survey the state of retrospective analysis and review of federal regulations. I first ask how much is known about the economic merit of past federal regulatory decisions, based on retrospective economic analyses of their effects. I review reports of the Office of Management and Budget and related literature, but unlike those reports I find only five rules, issued by the National Highway Traffic Safety Administration (NHTSA), for which retrospective analyses provide estimates of both costs and reasonably good proxies for benefits (e.g., adverse health effects avoided). Other retrospective studies of federal rules are incomplete, estimating only the compliance costs, or only the benefits, or only costs and measures of effectiveness, like emissions reductions, which do not closely relate to people’s welfare.

I also seek to explain differences in the practice of retrospective analysis and review between NHTSA, which appears to have the best record of retrospective analyses among federal agencies, and the Environmental Protection Agency (EPA), an important regulatory agency. I find that NHTSA regularly conducts such analyses and reviews, while EPA rarely does retrospective analysis and presents rulemakings that look like business as usual as being the result of a retrospective review. I analyze the role of data limitations, statutory authority, and institutional incentives in influencing the different behaviors of these two agencies. I conclude that differences in data availability, and in particular the lack of relevant control groups, are an important barrier to retrospective analysis at EPA. This data deficiency, combined with important restrictions on EPA’s ability to consider information on net benefits or cost-effectiveness in its rulemaking, is the biggest hindrance to generating better information about the effects of its rules. I conclude with recommendations intended to generate more measurement of the actual effects of regulations.

Retrospective Analysis and review of existing regulation (from now on, RAER) is a key instrument for US ex post evaluation of regulation in force, based on estimates of net costs and benefits. Agencies have been required to conduct retrospective evaluations since President Reagan’s E.O. 12291, in 1981. More recently, the Obama Administration issued three important executive orders (Nos. 13563 and 13579 in 2011, and No. 13610 in 2012) placing new and enhanced emphasis on this need. A substantial stream of studies and scholarly literature (symposia, papers and other contributions from the United States and abroad[1]), from both the economics and legal academies, has therefore flourished over time, assessing, on the one hand, RAERs’ merit and functioning and, on the other, the agencies’ degree of compliance. Regulatory studies, in particular, are paying increasing attention to these issues.

In his article, Randall Lutter focuses on the role of RAER in American regulation and the way it is carried out by federal agencies. In particular, he examines the use of Cost-Benefit Analysis (CBA), asking whether and to what extent such analyses are accurate and rigorous. Indeed, despite the article’s title, Lutter focuses specifically on CBA, less on RAER broadly understood. In this respect, it is worth noting that, throughout the article, the author often appears to refer either to CBAs properly understood or to RAERs in a broader sense, as if the two concepts were interchangeable and somehow overlapping. This can be somewhat misleading for readers more familiar with legal regulatory studies than with economic ones, for whom ex post evaluation of existing rules may mean something more than quantitative assessment.

Lutter concludes that most agencies do not apply CBA properly, or at all, in their regulatory reviews, sometimes underestimating costs, or benefits, often both, and that in many cases they present traditional rulemaking as if it were the result of a rigorous retrospective review. To demonstrate this, he compares RAERs carried out by two agencies: the National Highway Traffic Safety Administration (NHTSA) and the Environmental Protection Agency (EPA). Starting from the Office of Management and Budget’s (OMB) 2011 report on CBAs conducted by the agencies, and from other contributions in the literature that apparently legitimated agencies’ RAERs and validated their CBAs, he goes on to analyze individual RAERs, finding that only NHTSA seems to have conducted correct and complete analyses, in five cases, whereas EPA has rarely based its decisions on thorough retrospective estimates. This is a key point in Lutter’s argument, since he disputes what the OMB itself states about agencies’ retrospective activities. To substantiate this, the author also cross-checks OMB’s information about agencies’ past CBAs against other sources.

The case selection, however, is quite debatable, since it appears to suffer from a twofold selection bias. As for EPA, on the one hand, alongside more rigorous criteria (e.g. EPA allegedly accounts for the largest amount of federal resources spent on regulation, given the nature of the topics it manages), the author openly acknowledges subjective reasons behind the choice (as he says, he has «professional experience with EPA’s rulemaking», p. 20). On the other hand, we learn that he chose NHTSA as the second case because «it has a longstanding and apparently unique practice of conducting retrospective analyses of its regulations» (p. 20), which amounts to selecting on the dependent variable, generally deemed a serious methodological mistake[2].

Such methodological dilemmas aside, Lutter’s empirical examination of NHTSA’s and EPA’s retrospective analyses is accurate and objective. We may therefore accept that, with the exception of NHTSA, EPA, like most other federal agencies, often forgoes a clear and complete analysis of actual costs and benefits (the latter also understood as avoided costs). As a result, knowledge of the economic merit of past federal regulatory decisions, based on retrospective economic analyses of their effects, is severely limited.

Lutter identifies three possible explanations for RAERs’ generally limited scope and for the diverging outcomes between EPA and NHTSA: data limitations, statutory authority and institutional incentives. He argues that the combination of these conditions accounts for EPA’s modest performance with retrospective CBAs. In particular, NHTSA can rely on far more available and measurable data for its ex post evaluations, given the nature and narrower scope of its regulations. It is indeed much easier to find control groups for counterfactual analyses of narrowly targeted transport rules, mainly devoted to regulating specific behaviors or vehicles’ technical features, than in a field as broad and long-term oriented as environmental and pollution-prevention policy. The latter is, by its nature, hard to quantify, especially when it comes to actual benefits or costs, whether avoided or incurred. Another obstacle to proper ex post CBAs at EPA stems from its different statutory authority. As Lutter underlines, whereas NHTSA’s founding statutes often call for evaluating the effects of regulation, including assessments of its costs and advantages (the author provides exhaustive normative references on this point), EPA faces substantial statutory constraints on the consideration of net benefits and cost-effectiveness, given its general mission of safeguarding citizens and protecting their health. It is clearly stated, for example, that drinking water regulation cannot in any case lower the level of protection afforded to final users, so that it becomes harder for the agency to assess benefits and costs retrospectively if the latter cannot be questioned.
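To make the counterfactual logic concrete, here is a minimal sketch, not drawn from Lutter’s article: all figures, including the $9 million value per statistical life, are illustrative assumptions. It shows how a difference-in-differences comparison against a control group yields an estimate of a rule’s effect, which can then be monetized into the kind of net-benefit figure a retrospective CBA seeks.

```python
# Hypothetical illustration of a retrospective, control-group-based
# evaluation of a safety rule. All numbers are made up.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Rule's estimated effect: the change in the treated group
    minus the change in the control group (the counterfactual)."""
    return (treated_after - treated_before) - (control_after - control_before)

# Fatality rates per 100,000 vehicles, before and after the rule:
effect = diff_in_diff(12.0, 8.0, 11.0, 10.0)  # -3.0 fatalities per 100k

# Monetize: benefits of avoided harm minus compliance costs.
value_per_avoided_fatality = 9_000_000   # assumed value of a statistical life ($)
fatalities_avoided = -effect * 10        # scaled to a fleet of 1 million vehicles
compliance_cost = 150_000_000            # assumed total compliance cost ($)
net_benefit = fatalities_avoided * value_per_avoided_fatality - compliance_cost
```

Without the control group, the treated group’s raw change (-4.0) would overstate the rule’s effect, since fatalities were already declining; this is precisely why Lutter treats the lack of relevant control groups as a barrier for EPA.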
Finally, a third condition, the lack of political incentives, present in both NHTSA and EPA (and, more broadly, as Lutter explains, in all federal agencies), discourages accurate RAERs: retrospective analyses often compete for funds with prospective ones and, in general, can cast doubt on whether regulation has achieved its policy goals. Agencies therefore have little incentive to carry out accurate retrospective analyses, since what really matters are the policy goals agency managers are rewarded for, and, as the author reports, these are seldom explicitly linked to sound analysis.

Given all these explanations, Lutter concludes that the reason why NHTSA is the best performer in terms of retrospective analysis and, more specifically, in its rigorous use of CBA, is «an unusual confluence of happy occurrences» (p. 34) with respect to the first two conditions, allowing it to proceed with regulatory maintenance on a consistent empirical basis.

Among the author’s recommendations in section 4 for developing such instruments at EPA, it is worth mentioning his suggestion to offer monetary incentives to researchers who develop pilot studies or field trials capable of increasing data availability in fields where data are traditionally scarce (such as the environment). One may then ask whether such improvements could also help improve other agencies’ performance, provided his initial claims about a general failure to make use of RAERs are accurate.

(Federica Cacciatore)

Photo credits: HomeSpot HQ



[1] Our Observatory regularly reports on the growing flow of knowledge coming from various American blogs and research centres (see, for example, RegBlog, which hosts a section focusing on the analysis of ex post regulatory review, http://www.regblog.org/).

[2] See, for all, Barbara Geddes, How the Cases You Choose Affect the Answers You Get: Selection Bias in Comparative Politics, in «Political Analysis», 2(1), 1990, pp. 131-150.