Green Grades: Can Information Save the Earth?

Graham Bullock

Print publication date: 2017

Print ISBN-13: 9780262036429

Published to MIT Press Scholarship Online: May 2018

DOI: 10.7551/mitpress/9780262036429.001.0001



(p.181) 6 Being Green: The Effects of the Information
Green Grades

Graham Bullock

The MIT Press

Abstract and Keywords

Chapter 6’s discussion of the outcomes of information-based governance strategies begins with a comparison of three initiatives that evaluate electronics products – ENERGY STAR, EPEAT, and TCO. It introduces different conceptions of effectiveness, and emphasizes that different actors may have different definitions and perceptions of effectiveness. The chapter discusses a range of hypotheses and evidence related to the effects of information on consumers, businesses, government agencies, advocacy organizations, and researchers. While some evidence shows that a few existing programs have indeed created tangible social and environmental benefits, the database of 245 cases reveals that the vast majority of information-based governance strategies have failed to provide the public with information about their effectiveness. The chapter ends with a discussion of promising and problematic practices for tracking the environmental outcomes and benefits of information-based governance strategies.

Keywords:   Audiences, Responsiveness, Stakeholders, Effects, Effectiveness, Outcomes, Transparency, Interviews, Government, Electronics

Evaluating Electronics

Vernon is a senior official at a state-level government agency, and has been tasked with identifying which electronics certifications and ratings his agency should use to identify preferable products in its procurement process. The agency has to make a large order of new computer monitors, and its staff members have a mandate for purchasing environmentally friendly products. The administrator of Vernon’s agency has expressed a strong preference for certifications that show they actually “make a difference.” The administrator agrees with Vernon that issue salience, organizational trustworthiness, methodological validity, and interface usability (the topics of chapters 2–5) are also very important. But at the end of the day he wants to be able to demonstrate specific and tangible benefits from these purchases to the people of their state, and wants his agency to use a certification or rating that ensures those benefits are being delivered.

The materials Vernon has reviewed suggest that there are three front-runners to focus on in the electronics space—ENERGY STAR, EPEAT, and TCO. Following the advice of the earlier chapters of this book, Vernon has already analyzed the usability, validity, trustworthiness, and salience of these three programs. But he is stuck on this issue of “making a difference.” Holding all of these other variables constant, how can he evaluate and differentiate between the programs’ ability to demonstrate their environmental benefits? He recognizes that while they may be easy to use, developed by trustworthy organizations, focused on important issues, and based on robust methods, they may still not deliver the tangible and exemplary results that his boss is looking for. But what should these results look like, and how extensive and specific should they be?

(p.182) He finds that certifications in the electronics sector are indeed making claims about their impacts. ENERGY STAR, for example, states that its displays are 25 percent more energy efficient than standard options. It also highlights that its certified office equipment, which includes computers, monitors, and printers, has saved more than 500 terawatt-hours (TWh) of energy and more than $50 billion in energy-bill costs in the United States.1 EPEAT, which includes criteria related not only to energy efficiency but also to toxic substances, recycled materials, and other environmental concerns, states that the 757 million registered electronics products purchased between 2006 and 2013 have reduced hazardous waste by 528,000 metric tons, eliminated enough mercury to fill 4.6 million fever thermometers, and cut solid waste by an amount equivalent to that produced by 248,000 U.S. households annually.2 It also claims that its certified products in 2013 alone will reduce water pollutant emissions by 20 million kilograms and greenhouse gas emissions by 2.2 million metric tons.3 TCO’s criteria are more comprehensive than those of ENERGY STAR and EPEAT, and include a broad range of metrics related to environmental performance, socially responsible manufacturing, labor rights, conflict minerals, and product ergonomics and usability.4 However, it makes no similar claims on its website about the specific impacts of its certification.

So which program is the most impressive? Which should Vernon recommend to his agency’s administrator? Vernon likes the breadth of TCO, but is concerned by its lack of information about its effectiveness. And while he appreciates ENERGY STAR’s and EPEAT’s efforts to be explicit about their impacts, he finds it difficult to compare their claims. He realizes he needs a framework for thinking about and evaluating these claims. This chapter provides such a framework, and is designed to be helpful for policymakers such as Vernon—or activists such as Cathy, corporate executives such as Anu, academics such as Lynn, or consumers such as Mark—who are trying to evaluate the effectiveness of competing sources of sustainability information. For at the end of the day, Vernon’s boss is right—it is indeed the impact of these programs that really matters. All of the aspects of the information value chain that I have discussed in previous chapters (the issues covered, the methods used, the organizations involved, and the interfaces provided) ultimately must be oriented toward creating tangible environmental benefits. Otherwise these certifications—and the entire (p.183) enterprise of information-based governance—are at best a waste of everyone’s time and at worst actively harmful.

This chapter begins with a discussion of several important concepts and distinctions that will be useful to us as we think about the impacts of these programs. They include audience responsiveness, effects, effectiveness, and the differences between inputs, processes, outputs, and outcomes. The chapter then summarizes the insights from sixty-eight interviews I conducted with a broad range of individuals about their perceptions of information-based environmental governance strategies. These interviews reveal a wide variety of ways in which the effects and effectiveness of these programs are framed and perceived. To explore what types of claims environmental certifications and ratings are making about their own effectiveness, I then introduce data on the outcome transparency of the cases in my Environmental Evaluations of Products and Companies (EEPAC) Dataset. The results demonstrate that these programs have not focused on publicizing their effectiveness to the public, and several types of outcomes are barely mentioned at all. They also show that EPEAT and ENERGY STAR are actually leaders in providing such transparency, and I discuss the most promising practices that they and other programs have employed.

The chapter continues with a further discussion of the tension between information pessimists and optimists, and of how greater clarity about the goals of these initiatives and greater intentionality about measuring their progress toward those goals may help reduce this tension. Such clarity and intentionality are hallmarks of information realism, which I elucidate further at the end of the chapter. The chapter ends by returning to Vernon’s conundrum and analyzing his options in light of the data and conceptual framework introduced in the following sections.

An Information Effectiveness Framework: Audience Responsiveness

This book’s previous chapters illustrate the complexity of the development process behind information-based governance strategies, from identifying issues to cover and data to use to building institutional relationships and creating interfaces to connect with users. But what are the effects and consequences of these strategies once they are implemented and begin releasing information to the public? If they are environmental governance (p.184) strategies, through what mechanisms do they actually have an impact on the environment, if at all? This chapter discusses the nature of information from this more consequentialist perspective, building on the philosophical position that the normative qualities of an action “depend only on its consequences.”5

The Importance of Audience Responsiveness

Because of the basic voluntary nature of information-based governance,6 audience responsiveness to these programs is the primary mechanism through which they act and have consequences. If audiences respond positively to the information, they may then pursue complementary forms of governance based on regulations, market dynamics, technological development, moral arguments, or additional information-based strategies. They then become, if they are not already, stakeholders in the initiative, which can be understood as “any group or individual who can affect or is affected by the achievement of the organization's objectives.”7 In order to be effective, information-based governance strategies need to target specific audiences and convert them into stakeholders who will positively contribute to their success. Consumers and institutions can change their purchasing behavior, manufacturers can introduce new technologies, government agencies can enact new regulations, and advocacy organizations can begin new campaigns, all in response to the information provided by these information-based strategies. The environmental performance related to the original focus of each particular information-based governance strategy can then be improved and a public or common good created.

The effects of these programs are therefore strongly mediated by the responsiveness of different audiences. Figure 6.1 illustrates this dynamic, showing the wide range of potential actions that audiences can take in response to the information provided. While certain actors are more associated with particular responses (the solid lines), they may support and pursue other strategies as well (the dotted lines). The important point is that information-based strategies are not necessarily dependent on one audience (e.g., consumers) to be effective, but can stimulate a range of collective actions by several types of audiences to create public and common goods. Indeed, David Vogel, a professor of political science and business ethics at UC Berkeley, emphasizes the limitations of consumer-focused voluntary programs, given many consumers’ unwillingness to “internalize (p.185)


Figure 6.1 Information-based governance effect pathways.

Notes: Solid lines indicate the responses most likely pursued by each actor, while dashed lines indicate other important actions each actor might also undertake.

(p.186) the environmental externalities of what they consume,” and concludes that the real value of these programs lies in their ability to leverage more stringent and effective government regulation.8 This is a theme that we will return to later in this chapter.

Regardless of the type of action stimulated, the responsiveness of these audiences is likely to be strongly influenced by the salience, trustworthiness, credibility, and usability of the information provided, as described in earlier chapters. In addition to these factors, audiences are also likely to be affected by their perceptions of the effectiveness of these strategies, which in turn are likely to be defined in terms of the audience’s own interests.9 Different audiences are furthermore likely to have quite divergent interests. Government policymakers, for example, most likely view information-based initiatives from the perspective of whether they enhance their own authority and create a race-to-the-top “California effect,” in which companies go beyond regulatory requirements, replicating the modeling effects of California’s regulatory leadership on other states.10 They are less likely to support programs that diminish their own power and create an alternative race-to-the-bottom “Delaware effect,” in which states compete to create the least-regulated business environment (often deemed to be Delaware).11 Research on consumer motivations to buy green products, on the other hand, indicates that it is the product’s performance, symbolism and status, cost effectiveness, and credible environmental claims that drive consumer willingness to pay for environmentally labeled products.12

Companies may conclude that an information-based initiative is effective if they view it as improving their corporate profits and stock prices through anticipated marketing benefits, attraction of new, more affluent customers, or increased satisfaction of existing customers.13 Reduced production costs, improved employee morale, preemption of regulations, increased costs for rivals, and improved opportunities for industry cooperation may also motivate companies to support these initiatives.14 Corporate responsiveness to information may also depend on factors that scholars Neil Gunningham (Australian National University), Robert Kagan (UC Berkeley), and Dorothy Thornton (UC Berkeley) identify as influencing corporate environmental compliance with regulations, such as community and activist pressures and companies’ environmental management styles.15 Alternatively, an effective initiative for a civil society activist ultimately needs to drive consumer pressure, government regulation, or direct (p.187) corporate action that results in improved corporate environmental performance and improved environmental quality.16 While overlap in their interests often exists, each of these actors likely has different underlying perceptions of the effectiveness of information-based governance strategies, which are summarized in table 6.1.

It should be highlighted that if an audience perceives a strategy as antithetical to its interests, it may decide to pursue strategies that aim to undermine it. Benjamin Cashore, Graeme Auld, and Deanna Newsom’s work at Yale University shows how the Forest Stewardship Council (FSC) label drove many U.S. foresters to support the alternative Sustainable Forestry Initiative label, which ultimately forced FSC to revise its standards.17 Thus it is important to pay attention to both the positive and negative feedback effects among different initiatives and different types of governance. The success of a voluntary certification, for example, may encourage the adoption of stronger industry standards and regulations, but it may also defer such standards and regulations. Different governance types should therefore be analyzed not only individually but also in terms of how they interact and either complement or undermine each other.

The Effects and Effectiveness of Information Regimes

An important question about the effects of these programs relates to the level of intentionality behind them. A results-oriented perspective would argue that intention is irrelevant, as long as positive results are achieved (however “positive” may be defined). A goal-oriented perspective would instead focus on the objectives of these programs and whether they are

Table 6.1 Potential factors driving perceptions of effectiveness

Government policymakers: Perceived policy complementarity, legal mandate support, and environmental protection

Consumers: Perceived improvements in product quality, cost effectiveness, health and safety, and environmental protection

Companies: Perceived improvements in efficiency, employee morale, customer satisfaction, competitor costs, policy preemption, industry cooperation, and environmental protection

Advocacy organizations: Perceived environmental benefits through consumer pressure, government regulation, or direct corporate action
(p.188) achieved. From this perspective, other benefits are fine, but irrelevant if the explicit goals of the initiative are not achieved. These different perspectives echo the distinction that Harvard scholars Archon Fung, Mary Graham, and David Weil make between effects and effectiveness: “A policy has effects when the information it produces enters the calculus of users and they consequently change their actions. Further effects may follow when information disclosers notice and respond to user actions. A system is effective, however, only when discloser responses significantly advance policy aims.”18

An effective information-based program therefore has its intended effect, but it may have other effects (both positive and negative) as well. This distinction, however, raises a question: whose policy aims and intended effects are we referring to? As the comments from the interviewees that follow demonstrate, stakeholders have quite diverse opinions on what the goals of these initiatives should be. Both results- and goal-oriented perspectives are therefore needed to evaluate information-based strategies. It is important to evaluate both the effectiveness of these strategies in achieving their stated objectives and their broader effects on society, individuals, and the environment. In some cases, they may fail to achieve their objectives, but still have important effects. In other cases, they may achieve their stated objectives, but have limited effects. This disconnect between effects and effectiveness may in part be due to poor, limited, or nonexistent goal setting by these programs.

One of the challenges of setting strong goals is the variety of ways to measure a program’s effects. As political scientists William Gormley at Georgetown University and David Weimer at the University of Rochester point out, these effects can be related to the resources they use (a program’s “inputs”), the means by which those resources are used (a program’s “processes”), the direct products of those processes (a program’s “outputs”), and the valued consequences of those outputs (a program’s “outcomes”).19 Ideally these effects are mutually and positively reinforcing, such that the use of high-salience and high-quality inputs (e.g., program criteria and data, the subjects of chapters 2 and 4, respectively) is associated with the use of valid participatory and analytical processes (e.g., governance incorporating relevant stakeholder groups and methods using life cycle analysis, the subjects of chapters 3 and 4, respectively). These processes are then in turn associated with meaningful outputs (e.g., certified (p.189) products sold) and outcomes (e.g., acres of forest preserved). However, such associations cannot be assumed, as these dimensions of performance may in fact lack linkages or even conflict with one another. Economical but shortsighted use of inputs, for example, may result in significantly weaker outcomes.

As discussed in chapter 3 and reinforced by work by Sébastien Mena and Guido Palazzo, stakeholders may evaluate the legitimacy of organizations on the basis of any of these dimensions.20 While Gormley and Weimer define an organization’s performance as “its impact on outcomes,” they also acknowledge that this conception of performance is difficult to measure, often because the organization is far removed from its ultimate outcomes or it is impossible to measure those outcomes comprehensively or weight them precisely.21 This explains why many metrics of effectiveness are based instead on input, output, and process variables—or on less than ideal outcome variables. Each of these options has its strengths and limitations, and stakeholders may have different preferences for them. Funders may focus more on efficiency and inputs, while communities that believe their voices have not been heard may emphasize participation and processes. Meanwhile, managers who prefer specific production targets to achieve may favor output-based metrics, while advocacy groups may emphasize the need for specific environmental and health outcomes.

These different aspects of performance highlight the fact that effectiveness is ultimately in the eye of the beholder. How effective a program is perceived to be may depend a great deal on who you ask. As emphasized earlier, the fact that information-based governance strategies are essentially voluntary in nature makes audience perceptions of these strategies even more important. If audiences perceive the strategy as being effective, they may be more likely to respond to the information it provides, which in turn can further increase its effectiveness. The logic of these feedback effects is supported empirically by experiments by Boston University marketing professor Sankar Sen and his colleagues Zeynep Gürhan-Canli (University of Michigan) and Vicki Morwitz (New York University). Their work shows that consumers are more likely to participate in a boycott if they view it as effective.22 Thus it is valuable to understand how different audiences perceive these programs and what their more general effects are. It is also important to identify how these programs themselves define their effectiveness and what factors they believe may be driving that effectiveness. Through this (p.190) process, the roles of different mechanisms by which certifications and ratings may be contributing to the creation of public goods and governance efforts can be better understood. It can also allow us to explore the possibility of identifying a unifying concept and measure of effectiveness across multiple contexts and sectors for these programs.

Perceptions of “Green” Effectiveness: Interviews with Stakeholders

This section presents insights from sixty-eight interviews with consumers and representatives from companies, nonprofit organizations, government agencies, academic institutions, and organizations behind several different ratings and eco-labels. I consider all of these individuals to be important stakeholders of these programs because they have the capacity to both affect and be affected by their outcomes. In the sections that follow, I first describe the methods used to select the interview participants and to conduct the interviews, and then discuss the interviewees’ views on the effects and effectiveness of product eco-labels and corporate green ratings. This research identifies a wide range of both effects and measures of effectiveness articulated by these participants. While clear environmental outcomes were the most commonly cited metric of eco-label effectiveness, respondents did not agree on any single overarching definition of effectiveness for these types of programs. These interviews provide a relatively comprehensive view of how different audiences—consumers, activists, regulators, executives, academics, and raters themselves—perceive the dynamics and consequences of eco-labels and sustainability ratings.

Interview Methods

For these interviews, I selected a stratified sample of consumers and representatives from nonprofit organizations, companies, government agencies, academic institutions, and evaluation organizations. In total, I interviewed sixty-eight individuals for approximately one hour each. The interviews with organizational representatives focused on understanding their perspectives on the effects and effectiveness of product eco-labels and corporate environmental ratings that they were already knowledgeable about, while the consumer interviews presented information about several labels and ratings to them and then explored their impressions of them. I chose to interview representatives from each of these groups in order to hear (p.191) from a wide range of individuals with different backgrounds and to better understand the similarities and differences in their views of eco-labels and green ratings. Quotes included in the sections that follow are only identified by the interviewee’s type of organization (company, nonprofit, etc.) and thus do not identify specific individuals. Before discussing their specific comments, I will first summarize the backgrounds of the participants in my interviews—for a full description of my sampling methods, see appendix II.

In selecting the company representatives, I limited the sample to staff working at companies in the consumer electronics sector. The consumer electronics industry has a long history and a wide range of eco-labels and green ratings, and provides a rich case study from which we can learn. I interviewed representatives from nine companies, including the #1 seller of music products (Apple), the #1 seller of personal computers (Dell), and the #1 seller of audio-visual equipment (Sony).23 I also interviewed individuals working at nine organizations that are implementing eco-label or green rating initiatives related to the electronics sector. They included individuals involved with the implementation of 80Plus, ENERGY STAR, EPEAT, Greener Electronics Guide (by Greenpeace), and TCO Certified.

For the other stakeholder groups, I did not limit my sampling to the electronics sector, primarily because not as many individuals in these groups are exclusively focused on electronics. For government stakeholders, I was able to conduct interviews with sixteen individuals representing one congressional agency (the Government Accountability Office) and three executive agencies (the Environmental Protection Agency [EPA], Federal Trade Commission [FTC], and Department of Energy [DOE]). I also interviewed representatives from ten nonprofit organizations, including Rainforest Alliance, World Resources Institute, Consumer Federation of America, Union of Concerned Scientists, and EarthJustice. And I conducted interviews with twelve academic researchers with expertise in either electronics or eco-labels and with backgrounds in economics, political science, public policy, marketing, or engineering. They come from a range of institutions, such as Arizona State University, Ohio State University, Harvard University, and the Georgia Institute of Technology. I also interviewed a diverse sample of twelve consumers: six men and six women; seven age 40 or older and five under 40; three with a high school education, two in college, four with college degrees, and three with graduate degrees. (p.192) The consumer participants also have a range of annual household incomes, from less than $25,000 to more than $100,000. The final sample of interviewees is presented in table 6.2. The sections that follow summarize their perspectives, beginning with the perceived effects of information-based environmental governance strategies.

Perceived Effects of Eco-Labels and Green Ratings

After going over their knowledge and impressions of existing product eco-labels and corporate environmental ratings, I asked all of the interviewees an open-ended question about what they thought the effects of these kinds of programs have been. I then followed up with more specific questions about their effects on the policies and behavior of companies, government agencies, nonprofit organizations, and consumers. I also asked whether they believed these programs have undermined or complemented other environmental policy initiatives, such as regulations. While the sections that follow highlight the range of perspectives among representatives from different stakeholder groups, I did not detect any systematic differences across those groups.

Company Effects

The main effect on companies that corporate representatives cited was the role of eco-labels and ratings as a “motivational tool.” One manufacturer representative stated that these programs are “definitely driving design decisions,” and are highly influential in manufacturing processes. Another noted that there is an “absolute need for [such] aspirational

Table 6.2 Interview sample summary

Interviewee background     Number   % of total
Nonprofit Organization     10       14.7
Academic Expert            12       17.6
Government Agency          16       23.5
Evaluation Organization    9        13.2
Company                    9        13.2
Consumer                   12       17.6
Total                      68       100
(p.193) standards.” One retailer representative stated that he believed these initiatives have motivated manufacturers to perform better, and have allowed retailers to effectively promote the environmental and energy efficiency benefits of certain products. Another noted that they are “geared to many different audiences”—some, such as ECMA 370, are oriented toward businesses and procurement officers who know what they are looking for, and are not designed for the general consumer.

Other stakeholder representatives expressed similar sentiments. One government representative stated he believed that these programs have encouraged companies to make greener products, while a second said he thought one of their biggest effects was “innovation stimulation.” A third asserted the specific effects were that they “taught companies not to be afraid [of sustainability efforts]” and “how to make money from [greener products].” A fourth government official, however, expressed a more skeptical view—that the actual results of these programs are mixed and that while many make companies feel good and give them a “green badge of courage,” in reality they do “squat.” A fifth stated that while they have done some good, their contribution has been very limited in the broader context of environmental policy.

Representatives from nonprofit organizations expressed similar caveats about these programs, but in general were positive about their effects on companies. They stated that companies “take them seriously,” “pay attention and are motivated by them,” and are incentivized by them to improve their performance. One of the consumers interviewed thought that corporate leadership is an important mediator of these effects—“I think that in general companies are being pressured in trends in political consciousness to create a rating system … and based on who runs the company [and] who is associated with it, that's going to [determine] how effective it is.”

Consumer Effects

Nonprofit organization representatives were also relatively positive about the effects of these initiatives on consumers. For some, the best eco-labels and ratings are “quick tools” that “empower consumers” and provide “information resources” to consumers. Others asserted that eco-labels have made the issues they cover, from climate change to deforestation, more familiar to consumers. Several of the academic experts on consumer behavior interviewed expressed similar attitudes—one stated that these initiatives have done a “decent job matching consumers and (p.194) producers,” and another cited a specific example from his own research that showed product sales increasing after an eco-label was introduced.

Nevertheless, some participants expressed reservations about the effects of these programs on consumers. One government official said that even one of the most successful eco-labels, ENERGY STAR, still did not cover much of the market (in actuality, ENERGY STAR’s market penetration varies widely, from 0 percent for small-scale servers to 100 percent for cable boxes, and depends on a variety of factors).24 Another asked rhetorically, are these programs “a drop in the bucket or a huge success?” and answered his own question, “Hard to say.” Several representatives from the companies, evaluation organizations, and other groups expressed concerns about the effects of “eco-label proliferation” on consumers. Such proliferation, in their eyes, might be causing confusion, disillusionment, and “green fatigue” among shoppers. However, others did not see a problem with this expansion, and believed that this phenomenon is still in its infancy and only covers a fraction of what it should be covering.

As evidence against an enduring overload effect, one respondent cited the example of nutrition labels—when they are first introduced or when people first encounter them, they may seem overwhelming, but once people become familiar with them they are able to “filter out” the extraneous information and focus on what is important to them (vitamin A vs. calories vs. sugar content). Another respondent, however, used nutrition labels as an example of how providing lots of detailed information has been overwhelming and has not had the intended effect—despite the introduction of these labels, obesity levels have increased over the last twenty years.

What do consumers themselves say? Those that I interviewed expressed a range of views, but in general were positive about these programs. When asked whether they would make use of the eco-labels they learned about in the interview, one said, “I think I would take them into account, but I wouldn't go to the ratings as my first stop … I would probably narrow it down to a few washing machines, and then I might see if they are on a list of labeled or ratings products.” Another said she thought “there should be more of them—they should be standards for what we buy,” and another concluded, “I would want to use a combination of them, as none of them covered what I wanted. I felt they were incomplete, but now that I know I would definitely want to look at them.” But a fourth participant (p.195) remarked, “I might compare one or two but not all of them, TMI [too much information]!”

Government Effects

The most commonly cited effect on government was the use of eco-labels as procurement standards. In 1999, for example, President Clinton issued an executive order mandating all federal agencies to select ENERGY STAR-labeled products.25 In 2007, President Bush issued a similar order requiring federal agencies to buy EPEAT-registered products for at least 95 percent of their needs.26 More recently, President Obama signed Executive Order 13693 in 2015 that requires federal agencies to “promote sustainable acquisition and procurement” by purchasing products whenever practicable that are certified not only by ENERGY STAR and EPEAT but also by the BioPreferred, WaterSense, Safer Choice, and SmartWay Programs (as well as other products identified by the EPA or DOE as “energy and water efficient”).27 These orders have forced government agencies to be leaders in procuring certified products, and have created an important market for them.

Other interviewees mentioned the greater efficiency that these voluntary initiatives have over traditional regulatory processes—they are much more informal and enable conversations with industry, nonprofit organizations, and even other countries that do not normally happen in the more adversarial and bureaucratic regulatory process. One government official stated, however, that she believed these programs actually are less efficient and more expensive than traditional regulatory processes, because they take a lot of time and money to collaborate with industry and other groups to jointly develop their standards. The length of time that it can take to complete a regulation or a voluntary standard can vary significantly, and interviewees disagreed as to which takes less time on average. Another official thought these voluntary programs can often be a distraction from the mission of the EPA, which is to “protect human health and the environment.” This relates to the more general issue of whether these initiatives complement or undermine regulatory efforts, which I will return to later in this chapter.

Nonprofit Effects

Several participants noted that eco-labels often create divisions within the advocacy community, where some are positive and optimistic about them and others are more skeptical and pessimistic. This (p.196) dynamic leads the former to be more engaged in these efforts, while others remain critical and focus on other strategies. One advocacy organization representative noted that even though his organization has been involved in creating a green rating program, it “was not in isolation from other ongoing projects, [such as] pushing for state laws, working with purchasers, etc.”

Another said that one criticism of these initiatives is that “NGOs are often outgunned and outweighed in their development processes,” as it is the companies who have the resources and staff to participate in ongoing meetings and workshops around standard-setting and criteria development. For some of the government participants, this issue underscores the importance of using weighted voting procedures during the standard development process. Such procedures limit the voting power of any one stakeholder group to, for example, 33 percent, or no more than 50 percent, and they underscore the importance of defining the stakeholder groups in a manner that balances the different interests appropriately. These participants also highlighted the value of practices employed by organizations such as NSF International, which pays for the travel and meeting expenses of nongovernmental organizations attending its standards development meetings.
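The arithmetic behind such weighted voting can be made concrete. The sketch below is a hypothetical illustration, not any organization's actual procedure: it assumes each stakeholder group's share of the total vote is clipped at a cap (such as one-third) and the surplus is redistributed proportionally among the uncapped groups. The group names and seat counts are invented for illustration.

```python
def cap_group_shares(votes, cap=1/3):
    """Clip any stakeholder group's voting share at `cap`, redistributing
    the excess proportionally among uncapped groups.
    `votes` maps group name -> raw number of voting seats.
    Assumes cap * number_of_groups >= 1, so a valid allocation exists."""
    total = sum(votes.values())
    shares = {g: v / total for g, v in votes.items()}
    capped = set()
    while True:
        over = [g for g in shares if g not in capped and shares[g] > cap + 1e-12]
        if not over:
            return shares
        for g in over:
            shares[g] = cap
            capped.add(g)
        # Rescale the uncapped groups so all shares still sum to one
        free = [g for g in shares if g not in capped]
        remaining = 1 - cap * len(capped)
        free_total = sum(shares[g] for g in free)
        for g in free:
            shares[g] = shares[g] / free_total * remaining

# Hypothetical committee: industry holds 60 of 100 seats
shares = cap_group_shares({"industry": 60, "ngo": 20, "government": 10, "academia": 10})
```

Here industry's raw 60 percent share is clipped to one-third, and the other groups' shares grow proportionally, which is precisely the rebalancing the interviewees described as protecting "outgunned" NGOs.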

General Effects

Several other, more general effects of these programs were cited as well. Citing consumer surveys commissioned by his agency, one government official asserted that general claims of environmental friendliness or greenness create confusion and skepticism among consumers, and therefore specific claims about environmental attributes are more appropriate and helpful. Several interviewees, and in particular two academic experts, expressed concerns about the unintended consequences of these programs and their potentially negative effects on environmental protection efforts in the long term. As an example, one interviewee said LEED’s point system may encourage tearing down buildings, which may not be the best environmental outcome.

I also asked every interviewee about another potential general effect of these programs, which is whether they complement or undermine other forms of environmental governance, and in particular environmental regulations. The majority of the respondents believe that eco-labels and green ratings complement regulatory efforts, although there were some (p.197) strong minority opinions. On the complementary side of the argument, one government interviewee described an important downside of regulation that voluntary programs can address. In the building industry, for example, regulations create “perverse incentives” that encourage “builders to treat building codes as the maximum they are supposed to do.” Their goal becomes minimizing their efforts at compliance, and therefore performance and enforcement greatly depend on the diligence of the inspector. Voluntary ratings and labels attempt to change this dynamic and create competition among builders in going beyond compliance. In this way, the regulatory code can become the floor of performance, rather than the ceiling.

This logic was echoed by many other interviewees, although some emphasized that the extent to which labels work in this manner depends on important contextual factors, such as the expense and difficulty of meeting the voluntary standards, the threat of further regulatory action, and the culture of the industry. One participant involved in the electronics sector stated, for example, that the competitive culture of his industry had made it more amenable to competing on environmental criteria, which may not necessarily occur in other sectors. Most interviewees therefore emphasized the complementary relationship between voluntary and regulatory programs, and that both are needed to improve environmental performance. Several argued that the key is to ensure that the voluntary standards that begin as goals are ultimately transformed into expectations for the entire industry.

A few respondents, however, expressed skepticism about the extent to which this occurs, and cited the opposite phenomenon as being just as likely—“successful” voluntary programs providing an excuse for not passing and implementing more extensive regulations. One respondent cited ENERGY STAR as an example of this dynamic. Even though it has certified a large proportion of products in a range of different product categories and has raised its standards for many of those categories, there are still many products on the market that do not meet even the original ENERGY STAR standards. This government official claimed that the program is nevertheless seen as successful, and is used as a strong argument against further regulation: for example, “Why are government standards for these appliances needed when we have ENERGY STAR?” Other participants asserted that the lack of full market penetration of programs like ENERGY STAR (p.198) demonstrates the need for mandatory standards (set by government agencies such as the DOE) to set a performance “floor” for all products, certified or not.

Definitions of Effectiveness

A factor driving these debates may be differences in how these participants define “successful” or “effective.” Indeed, differing perspectives on the nature of effectiveness may explain why some interviewees emphasize particular effects of eco-labels and de-emphasize or ignore others. I therefore also asked all of the participants in my interviews to explain how they themselves define effectiveness, and what it means to them in the context of environmental certifications and ratings. The sections that follow summarize their responses.

Environmental Outcomes

The most common definition used in the interviews focused on the environmental outcomes of the program. Some participants answered in the form of questions, such as “Does it improve the environment?” or “Are they solving some specific problem?” Others said they must be evaluated in terms of their “observable environmental improvements,” “overall benefits,” “physical benefits,” or “making an impact.” Several participants put effectiveness in the context of the goals of the program, asking “What are the environmental impacts they are trying to reduce?” and “Does it achieve their objective—whatever they set out to accomplish?” Others cited specific metrics of performance, such as a “net reduction in CO2 emissions.” One academic expert stated that the standards need to “be strict enough that their impacts are significant,” while another emphasized that they must focus on present and past performance, not future expectations. An advocacy organization representative emphasized, however, that the standards should take into account the goals of companies as well as their past performance, but need to penalize them when they retreat from those goals. He also emphasized that their standards must be “beyond what is required by law,” and result in “transformative change.”

Consumer Behavior Outcomes

The second most commonly cited definition of effectiveness relates to changes in consumer behavior. Common phrases included: “Does it change consumer behavior?” “Has it caused (p.199) a shift in consumer demand?” “Do consumers recognize it?” “Do they motivate purchasers to change their decisions?” “Do they help consumers identify a recognizable brand message?” Others mentioned more specific metrics, such as the share of a market that an eco-label has certified. One company representative said effective initiatives must “actually result in sales of products that are better for the environment,” explicitly linking this focus on consumer behavior to the environmental outcomes discussed in the previous section. A nonprofit representative emphasized the ability of these programs to “resonate with consumers,” implying that eco-labels must be salient and relevant to consumers in order to be effective. One participant emphasized the difference between product eco-labels and corporate “scorecards,” which she asserted differ in their audience orientation. Eco-label effectiveness should be measured by the labels’ market penetration because that is their orientation, but scorecards are less consumer oriented and should be evaluated differently.

Company Behavior Outcomes

Along these lines, other interviewees emphasized that these information-based governance strategies can also be effective by eliciting changes in company behavior directly. As one academic representative asked, “Does it change company behavior?” The main point here is that rather than operating indirectly through consumers and markets, these programs can influence companies themselves as “effective campaign tools” that allow advocacy groups to “go after individual companies,” as one nonprofit representative explained. Another nonprofit representative said that people should realize that scorecards and ratings used in this context are “designed to be opinionated and subjective” and are used to make a point about society’s values. They are not meant to be a full and final scientific assessment of a company’s environmental impact.

Other interviewees emphasized that effectiveness can also be defined in terms of specific changes in company behavior, such as being more transparent about their product’s manufacturing processes or ingredients. Encouraging companies to “really do innovation” and bring new green products to market was also mentioned as a dimension of effectiveness, as was a broader effect of “promoting competition” among companies on green attributes. Others mentioned that some company-supported labels and rating systems are more internally oriented in order to motivate and (p.200) organize a company’s environmental management efforts. Other programs are focused on enhancing communication and collaboration among companies so that lessons learned are shared and a sense of industry momentum is created. Another aspect of changed company behavior discussed was procurement—programs that are regularly used by corporate procurement officers may also be viewed as effective.

Public Policy Outcomes

Rather than focusing on consumer or company behavior, several interviewees noted the importance of changes in public policy as a measure of effectiveness. As one NGO interviewee asked, “How does it influence policy?” Another interviewee who has been involved in producing an environmental ranking of companies stated that the goal of that effort was “not to affect consumers but to impact public policy.” These interviewees emphasized that such ratings and rankings can raise awareness of the issue in question, and create demand for stronger regulations. In this context, one of these participants highlighted the importance of having results that are interesting to the media, which can then raise the profile of the initiative and attract attention from policymakers.

Awareness and Education Outcomes

Some interviewees also mentioned a more general measure of effectiveness that was unconnected to any specific audience. This measure was increased “awareness and education” about the issue in question and the environment more generally. Does it educate consumers, policymakers, or executives about corporate or product environmental performance? Does it increase awareness about the importance of the environmental impacts of consumption and production? Such a definition of “effectiveness” implies a longer-term and more indirect mechanism of social change and environmental progress—through learning and sensitization over time. An emphasis on this type of outcome might justify easier-to-achieve standards if it raises awareness among key stakeholder groups and introduces key issues to a significant portion of the public.

Knowledge and Information Outcomes

Similarly but more specifically, other participants emphasized the intrinsic importance of the accuracy of the information provided by the initiative. As one interviewee stated, it “must be credible”; as another put it, “it must be verifiable”; a third said it must (p.201) have “quality control.” On the surface, such statements may appear to be more descriptions of drivers of effectiveness than definitions of effectiveness itself, and indeed some interviewees did appear to conflate the proximate drivers and the ultimate goals or definitions of effectiveness. On a deeper level, however, increasing society’s knowledge and the quality of information about the environmental performance of products and companies may indeed be a goal in itself. With such information, policymakers and citizens can make better decisions about whether the environmental impacts of a product’s performance are significant and which areas of performance are most important to address.

Another participant asserted that programs must be able to differentiate between companies and products on their environmental performance so that audiences can effectively choose between them. Others mentioned the importance of creating “simple,” “clear,” and “easy to understand” information. These are slightly different goals than information accuracy or quality, as in some cases slight differences found between two products or simplified data presentations may not be statistically significant, defensible from a scientific perspective, or important relative to other aspects of environmental performance. But it is additional information that may still be considered useful by some audiences and may incentivize further efforts to improve performance.

Process Outcomes

Several interviewees also mentioned specific attributes relating to the processes by which eco-labels and ratings are created as metrics of effectiveness. Again, these may be interpreted as drivers and not definitions of effectiveness, but they also can be seen as ends in themselves. The first such attribute mentioned was related to trust—that “people know it and trust it.” Building a trustworthy eco-label, and a trustworthy process behind it, can build the public’s confidence not only in claims about particular products but also in environmental issues more generally. How such trust is built is of course another question, although another participant also emphasized the importance of democratic processes, which perhaps is one factor that can contribute to building such trust. But the fact that an eco-label was created with input from a wide range of voices in a democratic manner may be an explicit goal as well—democratic decisions are often seen as more valid and legitimate, regardless of their content and outcome. And finally, one participant defined effectiveness (p.202) in terms of the long-term “durability” of the program. “Is it built to last over time?” Will these programs be around in forty or fifty years? Such durability can obviously contribute to other dimensions of effectiveness, but creating an institution that persists over time can also have independent value, as it becomes an established source of benefits and sustained progress for society.

Outcome Transparency of Information-Based Environmental Governance Strategies

Clearly the stakeholder representatives I interviewed described a wide range of real and potential outcomes of information-based governance strategies. The importance of these different forms of effectiveness depends not only on your particular interests and background, but also on whether you value more direct and tangible outcomes that may be more limited in scope, or more indirect and intangible outcomes that may have a broader scope. Other trade-offs exist as well. For example, specific environmental outcomes ultimately may be preferable to some groups, but also more difficult to measure than other types of outcomes—such as consumer awareness or purchases. These trade-offs and the many different forms of effectiveness mentioned raise the question of what types of outcomes existing initiatives are focusing on in their communications to the public. To what extent are they claiming to have produced the different types of effects highlighted by the interviewees in the preceding sections?

In order to address these questions, my research assistant and I conducted a content analysis of the website text of the 245 cases found in the EEPAC Dataset. Through this analysis, we identified text that mentioned either existing or potential outcomes associated with the case in question. Two levels of such outcome transparency were identified. The first was limited outcome transparency, which includes any general claims regarding the potential social or environmental benefits of the initiative (e.g., a computer with this label has a 30 percent smaller carbon footprint). Strong outcome transparency, on the other hand, includes specific claims regarding the actual benefits from the initiative (e.g., number of trees saved through an eco-label). Slightly more than 10 percent of the initiatives make specific claims regarding the actual benefits they create, while nearly 20 percent make general claims about their potential social or environmental benefits (p.203) but do not discuss actual outcomes. Over 70 percent do not mention either real or potential outcomes of their program.

I then further coded the identified text for the more specific types of outcomes discussed earlier. As figure 6.2 shows, the most commonly mentioned type was environmental outcomes (22 percent of all cases). Approximately one-third of these claims were about specific outcomes that the initiative had itself created. The next most common type was company outcomes (19 percent of all cases). Nearly two-thirds of these claims were more limited and general statements about potential benefits. Only 7 percent of the cases directly mentioned any consumer outcomes, and only a handful discussed outcomes related to awareness and education (five cases), knowledge and information (six cases), public policy (two cases), or process (one case). None of these latter four types of claims were coded as having strong outcome transparency—all were limited and general in (p.204) nature.

Figure 6.2 Types and levels of outcome transparency.

Note: Figure shows seven types of outcomes coded for in the website texts of the cases. Light shading indicates cases having limited transparency about these outcomes; dark shading indicates cases having strong transparency about these outcomes. Error bars indicate 95 percent confidence intervals for each sample proportion.

Any outcome transparency can increase the output legitimacy of these initiatives discussed in chapter 3, but the strong form is more likely to do so. However, it is important to remember that neither is a measure of actual effectiveness, only of the extent to which these organizations are making claims of effectiveness.
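The 95 percent confidence intervals shown as error bars in figure 6.2 can be reproduced with a standard normal-approximation interval for a sample proportion. The snippet below is a sketch using an illustrative count (54 of the 245 cases, roughly the 22 percent reported for environmental outcomes); the exact counts behind the figure come from the EEPAC Dataset.

```python
import math

def proportion_ci(count, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a sample
    proportion count/n; z=1.96 gives a 95 percent interval."""
    p = count / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Illustrative: ~22 percent of 245 cases mentioning environmental outcomes
lo, hi = proportion_ci(54, 245)
```

With 54 of 245 cases the interval runs from roughly 17 to 27 percent, which is why even the most commonly coded outcome types carry visibly wide error bars at this sample size.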

Some examples of these different claims of effectiveness highlight their diversity. As an example of a strong environmental outcome claim, the Forest Stewardship Council (FSC), mentioned in chapter 3, provides a four-page literature review of independent research on its “Impact in the Forest.” The review cites, for example, studies showing that deforestation rates were twenty times lower in FSC certified areas than noncertified areas in Guatemala. An example of a limited environmental outcome claim was found in Earth Advantage’s discussion of its home certification program. It states that every Earth Advantage home is designed to improve energy efficiency by 15 percent, but does not provide any estimates of actual energy savings or pollution reduction. Strong claims of company behavior outcomes were made by the Rainforest Alliance certification and Smithsonian’s Bird Friendly Coffee programs—they both provide specific statistics on how many farms and hectares of land they have certified. The Corporate Responsibility Index provides an example of a limited claim of company behavior change—it states that its feedback reports enable companies to “identify areas for improvement and ensure efforts focus on areas of maximum impact.” Sounds good, but no evidence is provided to suggest that this actually happens.

In terms of consumer behavior outcomes, EPA’s WaterSense program makes a relatively strong claim, stating that it “helped consumers realize more than $55 million in water and sewer bill savings.” An example of a more limited claim of consumer behavior change is Food Alliance’s statement that its certification results in “positive customer feedback,” “increased customer loyalty,” and “sales increases,” but no specific evidence of these effects is provided.

An example of a limited public policy outcome claim comes from ISO 14001, which states that its standards “provide the technological and scientific bases underpinning health, safety and environmental legislation” and “are the technical means by which political trade agreements can be put into practice.” An example of a limited awareness and education outcome comes from the Corporate Lands for Learning certification, which claims (p.205) that it fosters “a clear understanding of the interdependence of ecology, economics, and social structures in both urban and rural areas” in both children and adults. The 100 Best Corporate Citizens list provides an example of a limited knowledge and information outcome with its quote from Intel’s Director of Corporate Responsibility, who stated that the initiative has had “a huge impact internally” at Intel and that his colleagues view its scores and rankings as a significant “learning opportunity.”

Responsible Travel provides an example of a limited process outcome claim in its annual Responsibility Report, which details its vision, targets, outcomes, and next steps across nine major areas, from its membership and customers to its local community and the broader tourism industry. It states that it initiated a debate about the Sustainable Tourism Stewardship Council’s (STSC) plans to develop Global Sustainable Tourism Criteria, and recruited more than eighty individuals to sign a petition demanding more transparency in the process. While not citing any specific outcomes, this action represents an effort to make the development of a new tourism standard more open and democratic. No instances of strong transparency claims about processes, public policy, knowledge and information, or awareness and education were found in the dataset.

As has been done in previous chapters, figure 6.3 maps out the landscape of outcome transparency across all 245 cases in the EEPAC Dataset. Due to their collectively low occurrence, it groups the public policy, knowledge and information, awareness and education, and process outcomes into one general category of “indirect” forms of effectiveness, and plots it with the consumer, company, and environmental forms discussed earlier. Limited and strong forms of claims are grouped together in order to make the figure easier to read. It shows the 70 percent of the cases (172 in total) in the upper left-hand corner that do not make any claims about their effectiveness. The fourteen other cases in the top row make no claims about indirect outcomes or environmental outcomes, while the fifteen cases in the far-left column make no claims about consumer or company outcomes. The second highest pillar represents twenty-five cases that make no assertions about consumer or indirect outcomes but do make some assertions about environmental and company outcomes. Overall, twenty-five cases in this landscape make one of these four types of claims, while thirty-seven make two of these types of claims and nine cases make three types. The two cases in the bottom right corner of figure 6.3 are FishWise (p.206) and Responsible Travel, and they make all four types of claims about their effectiveness.

Figure 6.3 The landscape of outcome transparency.

Note: The 172 cases in the back left-hand corner make no claims about having produced any consumer outcomes, company outcomes, environmental outcomes, or indirect outcomes. The two cases in the front right-hand corner claim to have produced all four of these types of outcomes.

The Information Realism Perspective

The interviews summarized in this chapter contained strains of both information pessimism and optimism. Some participants viewed eco-labels and sustainability ratings as having positive effects on corporate behavior, serving as a “motivational tool,” a source of “innovation stimulation,” and a mechanism for learning about the opportunities associated with sustainability efforts. Others described their beneficial effects on consumers, increasing their familiarity with environmental issues and increasing sales (p.207) of environmentally labeled products. Some interviewees highlighted the increased procurement of these products by government agencies. They also emphasized the greater efficiency of information-based initiatives compared to conventional regulations, which has enabled government agencies to be more collaborative and adaptive as they pursue these approaches. Other participants expressed confidence that environmental certifications and rankings complemented rather than undermined existing regulatory approaches.

However, expressions of pessimism were not uncommon in the interviews. Some interviewees voiced concern about the mixed and limited results of these information-based efforts in terms of both consumer and corporate behavior. Others highlighted the problems of information overload and consumer confusion, the unappreciated costs of collaborative standard development processes, the dominant role industry plays in those processes, the distracting effects of these programs on regulatory agencies, and their potential unintended consequences. Several participants were skeptical of a complementary effect between regulation and information-based strategies, and were more concerned about the latter serving as a poor substitute for the former.

The data on outcome transparency presented in this chapter would likely be heralded by these information pessimists as further evidence of the inadequacy of information-based governance. Over two-thirds of the cases make no attempt to discuss their outcomes, and nearly two-thirds of those that do limit themselves to general and unsubstantiated claims about their effectiveness. No third-party verification is provided for the vast majority of the more specific claims that are made; we are supposed to trust these self-evaluating organizations even though they have a clear conflict of interest. Better to not trust any of them, and invest in real governance strategies that will create real results. Or so the information pessimist might argue.

Information optimists, however, would probably view the data with a more forgiving eye. Measuring the effects and effectiveness of any governance strategy is challenging and fraught with difficulties, and we should not expect anything different with information-based approaches. On the contrary, given the complexity of their effects and the multiplicity of mechanisms by which they accomplish their goals, we should assume that evaluating their effectiveness will be even more difficult, and should make (p.208) appropriate allowances. These information optimists would assert that many of the outcomes that the interviewees identified, such as educating the public, influencing corporate behavior, and catalyzing public policy, are inherently broad, diffuse, and multicausal, making them particularly resistant to measurement. But that does not mean they are not worth pursuing, not least because of their breadth and generality. Just because something cannot be confirmed by science does not necessarily mean it is not effective.

These optimists would also point out that the modern phenomenon of information-based governance is relatively new and that these programs are relatively young, and need more time to both generate results that can be measured and create methods to measure them. With this perspective, it is therefore truly impressive that so many cases have at least made the attempt to document their outcomes, even if many are still aspirational and not yet quantified. The specific claims that some of the cases do make are often nontrivial and quite impressive, whether it is the number of hectares certified by the Forest Stewardship Council or the Sustainable Forestry Initiative, the number of buildings certified by LEED or Green Globes, or the number of acres certified by USDA Organic or Rainforest Alliance.

An information realist would acknowledge all of these points, and would agree that evaluating the effects and effectiveness of these initiatives is indeed challenging. She would also assert that regulatory approaches are often not held to these same standards of performance. However, that is no excuse for information strategies to back away from such standards; instead, they must embrace them fully and pursue them tenaciously. The stakes are too high—both in terms of their own viability and the seriousness of the environmental challenges they are purportedly trying to solve—to do otherwise. The goals of these initiatives must be made explicit, and progress toward these goals must be measured rigorously and reported transparently. Such intentionality and openness will allow stakeholders, from the man standing in the grocery store to the government official sitting at his desk, to more effectively evaluate competing options and indeed, the entire idea of using information to change the world. Such information realism may be a bitter tonic for these programs to drink, but it is a necessary one.

Given the complexity of these programs and the diversity of perspectives on their most important effects, it is clear from the preceding discussion (p.209) that we are unlikely to find a single goal or metric of performance. Such a lack of a single definition of effectiveness may be frustrating and dissatisfying, but this unfortunately is the result of the irreducible complexity of these initiatives. The information realist acknowledges this complexity, and rather than rail against it as a pessimist would or wave it away as an optimist might, she embraces the challenge of developing a variety of mechanisms and metrics that capture that complexity. The field of participatory multi-attribute decision analysis (MADA) has arisen precisely to help policymakers and stakeholders tackle this challenge.28 Increasingly utilized in environmental policymaking processes, MADA can incorporate a variety of criteria, including input, process, output, and outcome variables, into complex but intelligible evaluations of performance. These evaluations can include direct environmental outcomes as well as consumer, company, and policy outputs and more indirect process, awareness, and knowledge outcomes.
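The basic logic of such an evaluation can be sketched in code. The following is a minimal illustration of one common MADA form, a weighted-sum model: each criterion is normalized across programs so that different units are comparable, and the normalized scores are combined using stakeholder-supplied weights. The program names, criteria values, and weights below are hypothetical placeholders, not data from any actual evaluation.

```python
def normalize(values):
    """Rescale raw criterion values to a 0-1 range (min-max)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def mada_scores(programs, criteria, weights):
    """Combine normalized criterion scores into one weighted score per program."""
    total = sum(weights.values())
    # Normalize each criterion across all programs so units are comparable.
    norm = {}
    for c in criteria:
        raw = [programs[p][c] for p in programs]
        for p, s in zip(programs, normalize(raw)):
            norm.setdefault(p, {})[c] = s
    return {
        p: sum(weights[c] * norm[p][c] for c in criteria) / total
        for p in programs
    }

# Hypothetical raw scores (0-100) on input, process, output, and outcome criteria.
programs = {
    "Program A": {"input": 80, "process": 60, "output": 90, "outcome": 40},
    "Program B": {"input": 50, "process": 90, "output": 70, "outcome": 80},
}
# Hypothetical weights: stakeholders here value outcomes most heavily.
weights = {"input": 1, "process": 1, "output": 2, "outcome": 3}
scores = mada_scores(programs, ["input", "process", "output", "outcome"], weights)
```

In this toy example, Program B scores higher overall despite weaker inputs and outputs, because the stakeholders' weights prioritize measured outcomes—exactly the kind of transparent, contestable trade-off a participatory MADA process is meant to surface.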

Fortunately, the potential positive effects of these programs are numerous, and no single initiative has to achieve all of them, at least not initially. They can focus on goals that are most appropriate and pressing for their given context and sector, taking into account the existing regulatory, information, and stakeholder landscapes. They can focus on particular audiences to influence and actions for them to take, rather than trying to change all of them at once. And they can learn from examples of efforts to set goals, measure performance, and report progress by existing initiatives. The next section will highlight a few of these promising practices.

Promising and Problematic Practices

One of the first places to start is clearly and comprehensively reporting to the public any efforts to evaluate an initiative’s effectiveness. A regular and well-written report that documents these efforts is a good idea, although not often found in the EEPAC Dataset. One rare and notable example is Responsible Travel’s annual sustainability report that I mentioned earlier. A twenty-two-page document with an introductory letter from the cofounder and managing director, the report is well designed and well organized, providing specific numbers where possible and qualitative anecdotes where relevant. It includes photos and sections on each of the major areas in which the organization has set goals. Indeed, a related, promising (p.210) practice is explicitly setting specific goals for the program. Climate Savers Computing Initiative provides an excellent example of articulating a specific and ambitious objective for itself: “By 2010, we seek to reduce global CO2 emissions from the operation of computers by 54 million tons per year, equivalent to the annual output of 11 million cars or 10–20 coal-fired power plants. With your help, this effort will lead to a 50 percent reduction in power consumption by computers by 2010, and committed participants could collectively save $5.5 billion in energy costs.”

While such a detailed and comprehensive approach is laudable, it also helps to have a brief summary of the key outcomes of the organization. Rainforest Alliance, Bird Friendly Coffee, WaterSense, Best Workplaces for Commuters, Responsible Shopper, and several other programs all provide detailed summary data on the environmental, company, and/or consumer outcomes of their programs. Some programs, including EPEAT and ENERGY STAR, provide calculators on their website that allow visitors and institutions to calculate the impact of the certification based on their own inputs. For example, the Best Workplaces for Commuters website has a calculator that estimates the financial, employee, environmental, traffic, parking, and tax benefits of enrolling in the program. After visitors input data about their own workplace, a benefits page is produced displaying the reduced urban air pollutant emissions (lbs/year), increased worker productivity ($/year), building cost savings ($/year), and data on twenty other outcomes associated with the certification.29 Similarly, DOE’s Alternative Fuels and Advanced Vehicles website provides a Vehicle Cost Calculator that allows users to compare autos on their annual fuel use, electricity use, operating costs, costs per mile, and annual emissions of greenhouse gases (and provides an interactive graph of this data over time).30
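The structure of such calculators is straightforward: a handful of workplace inputs are multiplied by per-unit benefit factors to yield annual estimates. The sketch below illustrates that pattern for a commuter-benefits calculator; the function name and the per-unit factors are invented placeholders for illustration, not the actual coefficients used by the Best Workplaces for Commuters or DOE tools.

```python
def commuter_benefits(employees, participation_rate,
                      lbs_emissions_per_commuter_year=4800,    # hypothetical factor
                      parking_cost_per_space_year=1200):       # hypothetical factor
    """Estimate annual program benefits from basic workplace inputs.

    Mirrors the input -> per-unit-factor -> annual-benefit structure of the
    calculators described above, with made-up coefficients.
    """
    participants = employees * participation_rate
    return {
        "reduced_emissions_lbs_per_year": participants * lbs_emissions_per_commuter_year,
        "parking_savings_usd_per_year": participants * parking_cost_per_space_year,
    }

# A workplace of 500 employees with 20 percent program participation.
benefits = commuter_benefits(employees=500, participation_rate=0.2)
```

The real calculators report roughly twenty such outcome lines (air pollutants, productivity, building costs, taxes, and so on), but each follows this same inputs-times-factors logic.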

While lacking many specific numbers, the Earth Island Institute’s Dolphin Safe Tuna Program provides a concise list of the “general accomplishments” of its International Monitoring Program, which include preventing deceptive labeling of canned tuna and securing commitments from tuna processors to supply tuna without chasing and netting dolphins. While numerical data that establishes the breadth of a program’s effects is preferable, qualitative information can still be useful to stakeholders, and is certainly better than nothing—as long as it is accurate.

The issue of accuracy brings us to another excellent practice, which is third-party verification of any of a program’s important outcomes. As (p.211) mentioned earlier, the Forest Stewardship Council provides a short summary of independent assessments of its certified forests, many of which were conducted by academic researchers. Michael Kraft, Mark Stephan, and Troy Abel provide one of the most comprehensive and rigorous independent evaluations of an information-based governance initiative in their book on the Toxics Release Inventory (TRI).31 Their work demonstrates the importance of analyzing the distribution of the impacts of these programs; while overall they find that the development of the TRI has led to a substantial decrease in toxic emissions in the United States, these reductions vary significantly by industry, state, community, and individual facilities. Following the logic of chapters 3 and 4, such external verification that pays attention to both micro- and macro-level impact distributions is no guarantee of validity, but it can nonetheless increase the trustworthiness, legitimacy, and credibility of an organization’s claims about its own efficacy.

As I mentioned before, it is important to start somewhere, and such third-party verification may feel out of reach for many initiatives, at least initially. But it is something to aspire to and stakeholders should expect it from the most established initiatives. Likewise, we might not expect a new program to make use of all of the different effectiveness pathways discussed in this chapter. But the most impressive effectiveness claims will nevertheless be those that encompass a wide range of mechanisms and take advantage of these multiple pathways to bringing about change. Thus the fact that FishWise and Responsible Travel at least mention environmental, consumer, corporate, and indirect outcomes on their website is noteworthy, even though the specificity and independence of their claims can be improved. The information realist is seeking initiatives that provide comprehensive, credible, and specific claims of effectiveness, but recognizing the general dearth of such claims, and the obstacles to making them, acknowledges any attempt by information initiatives to evaluate and report on their own efficacy.

This brings us to problematic practices in this regard, of which there are two primary ones. The first is outright disinformation and fraud—claims of environmental performance when in fact it is lacking entirely. LG’s manipulation of its refrigerators to pass ENERGY STAR tests and more recently, Volkswagen’s manipulation of its cars to pass EPA emission tests are two examples of such practices.32 The second problematic practice is (p.212) simple silence—the fact that 70 percent of the programs do not mention their outcomes or effectiveness is deplorable. It does, however, represent an opportunity for programs to differentiate themselves from this silent majority by highlighting how they are indeed making a difference.

The Effects and Effectiveness of Green Electronics

So in light of this discussion, what is Vernon to do? Fortunately, two of the certifications that are relevant to his decision, EPEAT and ENERGY STAR, happen to be leaders in the area of outcome transparency. As mentioned previously, both programs document the cumulative reduced impacts of their certified products on their websites. Both provide detailed environmental benefit reports, and both provide links to detailed calculators that estimate the benefits of purchasing their certified products (similar to the ones described earlier). Vernon had been stumped about how to compare and choose between these two options, and it is indeed challenging given that they are both leaders in performance reporting. Both programs provide aggregated results data across all of their certified products, and only limited information about specific product categories, such as the computer monitors that Vernon is analyzing. ENERGY STAR published figure 6.4 in 2012 showing the difference between ENERGY STAR and non-ENERGY STAR certified computers and monitors since 1992.33 Figure 6.5 is an infographic from EPEAT’s website summarizing the electricity and resource material reductions associated with EPEAT-certified computers and monitors.34 As these figures show, they are both impressive, but not very comparable.

The comparison is complicated by the fact that ENERGY STAR certification is a requirement for any level of EPEAT certification.35 Thus from Vernon’s perspective the certifications are equivalent in terms of energy savings at the product level. However, the initiative-wide results for the two certifications may not necessarily be equivalent. While EPEAT reports that 55 million EPEAT-registered computers and displays were sold in the U.S. in 2012, ENERGY STAR data indicates that approximately 65 million computers and displays were shipped with ENERGY STAR certification that same year.36 While EPEAT does not disaggregate their data further on their website, ENERGY STAR reports that its certified products in 2012 included 22 million LCD displays (representing 83 percent market penetration).37 As (p.213)


Figure 6.4 ENERGY STAR 2012 benefits infographic.

Note: Adopted from ENERGY STAR, “Product Retrospective: Computers and Monitors.”


Figure 6.5 EPEAT 2013 benefits infographic.

Source: Green Electronics Council.

(p.214) of January 2016, EPEAT lists 853 displays registered on its website (201 Silver and 653 Gold), while ENERGY STAR lists 1,565 certified displays (both include professional and signage displays).38 So even though they are certifying energy savings at the same level, as a program, ENERGY STAR is certifying more products and more of these products are being shipped to consumers and organizational purchasers.

However, this is not the end of the story. EPEAT has a significantly broader geographic scope than ENERGY STAR. While it has developed partnerships with a handful of countries and the European Union to promote products that it has certified, ENERGY STAR is primarily focused on the U.S. market.39 In contrast, EPEAT has created product registries in 43 different countries, and in 2012, unit shipments of EPEAT-registered products in countries outside the United States surpassed unit shipments of EPEAT-registered products within the United States for the first time (bringing the worldwide total to 114 million).40 EPEAT also has both required and optional criteria (that enable products to achieve Silver and Gold certification) that cover additional areas that go beyond energy conservation, including:

  • Reduction/elimination of environmentally sensitive materials (e.g., elimination of intentionally added cadmium, mercury, lead, hexavalent chromium, and certain flame retardants and plasticizers, and PVC)

  • Materials selection (e.g., postconsumer recycled plastic content, renewable/bio-based plastic materials content)

  • Design for the end of life (e.g., reusable/recyclable content, elimination of paints or coatings incompatible with recycling or reuse, marking of plastics)

  • Product longevity/life cycle extension (e.g., availability of additional three-year warranty, upgradeability, modularity, availability of replacement parts)

  • End of life management (product take-back service, recycling vendor auditing)

  • Corporate performance (environmental management system, corporate sustainability reporting, environmental policy consistent with the International Standards Organization’s ISO-14001 management standard)

  • Packaging (e.g., reduction/elimination of intentionally added toxics, recyclable packaging, take-back program for packaging)41

(p.215) This list brings us back full circle to chapter 2 and our discussion of values and the value of eco-labels and sustainability ratings. At the end of that chapter, we found that all other things being equal, comprehensiveness is an important criterion by which to evaluate information-based governance strategies. EPEAT has a broader geographic scope than ENERGY STAR, and includes criteria that extend beyond the blended good of energy savings that is the sole focus of ENERGY STAR. This type of good creates both private benefits (in the form of reduced energy costs) and public benefits (in the form of reduced pollution from energy production), while EPEAT is focused on a broader range of more specifically environmental and public-health-related public goods.

EPEAT has quantified these benefits both annually and cumulatively since the inception of the program. For example, it claims that EPEAT-registered products purchased in 2013 will “reduce use of primary materials by 4.5 million metric tons, equivalent to the weight of 14 Empire State Buildings” over their lifetimes.42 If Vernon’s boss counts these nonenergy-related benefits as “making a difference” and is interested in the relative impact per product (as opposed to the cumulative impact of the entire certification program), it appears that their agency should focus on purchasing EPEAT (and preferably EPEAT Gold) displays, particularly since the certification is not associated with a price premium.43 This is perhaps why the second Bush administration made it a requirement for all federal agencies to purchase EPEAT-registered products, and may be the best option for Vernon’s state agency as well.

ENERGY STAR, however, might be able to change this calculus by clearly demonstrating that it excels in important areas that are also important to Vernon and his boss. For example, perhaps they value the complementary impact that certifications can have on public policies discussed earlier in this chapter. While it does not yet make this case explicitly, ENERGY STAR could argue that it is helping raise the regulatory floor of energy efficiency standards for entire product categories. For many electronic appliances, these mandatory standards have been increasing over the past several decades,44 and ENERGY STAR may have served a critical role in demonstrating that it was indeed possible to achieve these standards. This effect on public policy may be the most enduring and important outcome of information-based governance strategies, as it ensures that whole sectors and product categories are more environmentally friendly, rather than only those products (p.216) and companies that are certified or rated. Although a younger program, EPEAT may be able to make a similar case with regard to some of its criteria as well.

Likewise, if TCO begins both documenting and publicizing specific outcomes from its certification program, it might be able to make a strong case to Vernon and his agency. With such documentation, TCO may be particularly compelling if Vernon’s boss has a strong interest in the ergonomic qualities of their computer monitors. Likewise, he may be attracted to it if he wants to certify that his agency’s monitors are manufactured in socially responsible ways that respect the rights of workers and prioritize their safety. These are criteria that neither ENERGY STAR nor EPEAT currently cover.

Unlike TCO, these two programs share a history of controversy. Both have come under criticism in terms of the trustworthiness and validity of their certifications, for example. In 2010, the Government Accountability Office issued a report asserting that ENERGY STAR is “for the most part a self-certification program vulnerable to fraud and abuse” and documenting how fifteen bogus products, including a gas-powered alarm clock, were able to become ENERGY STAR certified.45 The program responded by instituting a requirement that all ENERGY STAR products must be certified by an independent third-party “certification body.”46 This standard surpasses EPEAT’s verification process, which randomly selects a subset of products from its registry to investigate and confirm that they meet the program’s standards. These unannounced investigations are conducted regularly, and any discovered nonconformance is made public to embarrass the companies involved.47 Nevertheless, it remains a system based on self-declarations and lacks the independence and comprehensiveness of ENERGY STAR’s new third-party testing requirement.

In 2012, questions were indeed raised about the rigor with which EPEAT’s standards were being applied when Apple’s Retina MacBook Pro was allowed to keep its Gold certification, even though the certification relied on the product being considered upgradeable with common tools.48 As Kyle Wiens argues, a product with “proprietary screws, glued-in hazardous batteries, non-upgradeable memory and storage, and several large, difficult-to-remove circuit boards would fail all three tests” associated with this criterion.49 EPEAT asked its Product Verification Committee to address this question, and it clarified that upgradability can be satisfied if the product (p.217) contains “an externally-accessible port,” such as a USB port.50 While perhaps technically correct, critics blasted this decision as “eviscerating” the original purpose of the certification and amounting to a “greenwashing” of Apple’s products.51 This controversy raised the possibility that rather than continuously raising the bar for electronics companies, EPEAT’s standards may be lowering it instead.

The point is that EPEAT’s breadth of criteria and reporting of outcomes, while laudable, does not inoculate it from criticism on the other important dimensions of information-based governance that we have discussed in this book. Far from giving them a free pass in these areas, strong claims of effectiveness and value invite the spotlight from consumers, regulators, competitors, and the media. Particularly in the areas of institutional trustworthiness and methodological validity, it is important to pay attention to all components of the information value chain. Weaknesses in that chain raise questions about the validity of any claims made about the outcomes of the program, and create opportunities for other initiatives to surpass them.

The next and final chapter will discuss how different stakeholder groups can take a holistic approach to designing and evaluating information-based environmental governance strategies and overcome the challenges and trade-offs associated with this form of governance. (p.218)

