Trusting Green: The Organizations behind the Information
Abstract and Keywords
Chapter 3 begins with a decision scenario involving the Forest Stewardship Council and Sustainable Forestry Initiative. They both provide information about toilet paper greenness, but which of these claims should we trust? The concepts of trustworthiness, accountability, credibility, and legitimacy are introduced to address this question, and then used to analyze the 245 cases of eco-labels and sustainability ratings in the EEPAC Dataset. Accountability relationships to funders, advisors, and other actors in the public, private, and civil sectors are analyzed, and the reputational and trustworthiness implications of these relationships are discussed. Signals of credibility, such as expertise and independence, are also identified and discussed. The chapter concludes with a discussion of promising and problematic organizational practices related to organizational trustworthiness, and particularly those that enhance the transparency and clarity of a program’s accountability relationships.
Trusting Toilet Paper
Carrie, as you’ll remember from chapter 1, is an environmental activist pondering how she will respond to a newspaper reporter’s inquiry about environmental certifications of toilet paper. She remembers a campaign from a few years ago led by the environmental organization ForestEthics against the Sustainable Forestry Initiative (SFI) contending that SFI’s claims of “independence” were misleading and deceptive. SFI was created in 1994 by the trade association of the forestry industry, the American Forest and Paper Association, and spun off as a nonprofit organization in 2001 with representatives from several environmental organizations on its board of directors. But ForestEthics documented how these representatives either quickly departed their positions or had strong economic ties to the forestry industry.1 Twenty other environmental organizations also asserted in a letter to SFI that its claims of “being fully independent” are “false, deceptive, or misleading” because it refused to reveal the sources of its funding.2
Carrie also recalls concerns that were raised about the primary alternative to SFI, the Forest Stewardship Council (FSC). Founded in 1993 by a “group of businesses, environmentalists and community leaders,” the FSC is governed by the FSC General Assembly consisting of three chambers, each representing environmental, social, and economic interests.3 Despite this diverse representation, several environmental organizations have criticized FSC for a host of governance issues. In 2008, Greenpeace issued a report outlining its problems with some of FSC’s practices, Friends of the Earth UK ended their support for the organization, and one of the founders of FSC, Simon Counsell, said that the FSC had become the “Enron of Forestry.”4 Counsell helped create FSC-Watch, which lists conflicts of interest as one of the “Ten Worst Things” about FSC: “certifying bodies (assessors) are paid by the companies wanting to get certified.”5 FSC-Watch explains that “it is in the assessors’ interest not to get a reputation for being too ‘difficult,’ otherwise they will not be hired in future.”6 As for direct financial support, FSC lists the eleven organizations that donated over $20,000 in either 2012 or 2013 on its website (six of which are companies that sell paper products), but does not provide an overall breakdown of where its funding comes from.7
These concerns about both FSC and SFI give Carrie, and many others, reason to pause before they endorse either organization, or choose to buy particular products because they have FSC or SFI seals of approval on them. Even if she is convinced that purchasing sustainably produced paper products is an important way for her to express her values, Carrie may be uncertain whether these particular labels are credible and whether the organizations behind them are trustworthy. While many of the issues raised about these two initiatives center on the validity of their specific methods, this chapter focuses on the trustworthiness, accountability, and legitimacy of the organizations behind these initiatives. Methodological validity is a key component of these information value chains and is the focus of chapter 4, but most people do not have the time, expertise, or motivation to systematically analyze and compare their validity. Research discussed in the sections that follow has shown that individuals often rely instead on cognitive shortcuts that focus on the organizations behind these programs to determine whether they will utilize them. Just as the content of these initiatives must be desirable to their audiences, the organizations behind them must also be perceived as trustworthy sources of information.
However, unlike content desirability, I argue in this chapter that organizations do not necessarily need to directly appeal to a broad range of audiences in order to be effective. Instead, they need to send clear signals of credibility that demonstrate their accountability to particular stakeholders, whether they are advocacy organizations, scientific experts, businesses, or particular segments of the public. In order to make this point, the chapter first defines several relevant concepts and weaves them together into a theoretical framework that maps out the pathways by which trust is communicated between these organizations and their audiences. The chapter then presents originally coded data from my Environmental Evaluations of Products and Companies (EEPAC) Dataset on the extent to which existing information-based environmental governance strategies are utilizing these communication pathways.8 It then concludes with a discussion of the most promising and problematic trustworthiness communication practices, and also provides a further analysis of Carrie’s forest certification quandary.
Understanding the Nature of Trust
Carrie’s central question is, on the surface, a simple one—does she trust either SFI or FSC? Beneath that surface, however, this straightforward dilemma is actually quite complicated and raises further questions. What does it mean to trust these organizations? How does she know that she can trust them? In order to answer these questions, it is helpful to understand the nature of trust and several other related concepts. While it is easy to view it as relatively commonplace and unremarkable, trust is one of the most important features of human society. A wide range of social scientists have argued that it is a critical form of social capital that has enabled the development of modern social, economic, and political institutions. Francis Fukuyama and other scholars, for example, have argued that countries characterized by a high degree of social trust have been able to create large-scale corporations and capitalist economies more effectively than those with low levels of such trust.9 Following this logic, economics research has consistently found a strong relationship between levels of trust and both income per capita and economic growth.10 Warren Buffett succinctly summarizes this importance of trust: “Trust is like the air we breathe. When it’s present, nobody really notices. But when it’s absent, everybody notices.”11
From Trust and Trustworthiness to Mistrust and Distrust
Given its importance, it is not surprising that trust has been studied by scholars from a wide range of disciplines. Economists, sociologists, political scientists, and psychologists have investigated the dynamics of trust in social, political, and economic contexts and as both a psychological state and a behavioral choice.12 While these scholars often have different understandings of the meaning of trust, it is possible to identify several areas of agreement about the phenomenon. Trust is generally viewed as a relational concept in which one trusting party becomes vulnerable to harm by having a positive expectation about the behavior of a second trusted party.13 Such expectations can be generalized to large groups (or even all human beings) or particularized to individual family members, friends, or leaders.14 They may be the result of rational decision-making processes or more unconscious moral intuitions and perceived social norms.15 They may also be based on either an implicit or explicit belief that the trusted party will take into account (or “encapsulate”) the interests of the trusting party and will not intentionally injure that party, if at all possible.16
Regardless of how they are formed, these expectations create the potential for betrayal—the trusted party may not live up to these expectations and will harm the trusting party.17 This point brings us to the concept of trustworthiness, which is a measure of someone’s or something’s likelihood of fulfilling the expectations others have of them. Some scholars distinguish between trustworthiness and competence as two separate dimensions of credibility—the former encompassing characteristics such as kindness, friendliness, and honesty and the latter encompassing attributes such as expertise, ability, and qualification.18 These two dimensions relate to distinctions made by Stephen Marsh (University of Ontario) and Mark Dibben (University of Tasmania) between mistrust and distrust. Mistrust is a measure of misplaced trust—a trusting party made a mistake in placing trust in a trusted party because that party did not fulfill the trusting party’s expectations, either because of a lack of trustworthiness or competence. Distrust, in contrast, is a measure of how much a trusting party believes the (dis)trusted party will actively work against the trusting party’s interests.19 Thus Carrie may mistrust SFI and FSC because she feels they have not competently evaluated forestry operations in the past and are providing incorrect information, or misinformation. Alternatively, she may distrust them because she believes they are beholden to the forestry industry and are undermining efforts to protect forests and biodiversity. She is therefore convinced they are intentionally providing false information to deceive her, or disinformation.20
A perception that the public’s levels of both mistrust and distrust in a wide range of institutions are rising has driven research on trust and trustworthiness over the past several decades.21 The percentage of U.S. citizens who state they can trust the government to do what is right declined from over 70 percent in the mid-1960s to below 50 percent after the mid-1970s.22 Such a decline in perceptions of governmental trustworthiness has been attributed to the Vietnam War, Watergate, and the media’s frequent reporting on corruption, scandals, and unsolved social problems.23 Such low levels of trust extend to other institutions as well—the 2015 Edelman Trust Barometer revealed that “trust in government, business, media and NGOs in the general population is below 50 percent in two-thirds of countries, including the U.S., U.K., Germany and Japan.”24 Experimental research has also shown that even though individuals generally have a norm of being trustworthy (and expect that they will be punished if they are not), they do not expect others to be trusting of the people around them.25
The Roles of Accountability, Credibility, and Legitimacy
These declining levels of trust extend to the environmental arena and make information-based governance initiatives across all policy areas more challenging. As a result, scholars and practitioners alike have shown increased interest in the related concepts of accountability, credibility, and legitimacy. As generalized trust in institutions has declined, demands for stricter tracking of their accountability have increased. As London School of Economics legal scholar Julia Black explains, accountability is a type of relationship “between different actors in which one gives account and another has the power or authority to impose consequences.”26 These accounts can enable actors to overcome the lack of trust between them,27 and can include descriptions of methodological processes (the subject of chapter 4), reports on performance outcomes (the subject of chapter 6), or signals of organizational credibility (the focus of this chapter). These credibility signals can emphasize an actor’s own characteristics—particularly independence and lack of conflicts of interest—that directly communicate the actor’s trustworthiness or expertise. Or these signals can focus on an actor’s organizational associations that indirectly lend the actor an aura of credibility. Thus, even though Carrie may currently have low levels of trust in FSC or SFI, she can look for any of these signals from them that might increase her sense of their credibility.
Credibility has been defined as “the quality or power of inspiring belief,”28 or as believability or authoritativeness.29 This definition is instructive because it suggests that accepting a claim’s credibility is to “take it on faith,” even absent more tangible and direct evidence of actual outcomes. As many scholars have pointed out, credibility is also a relational concept, and must be understood in relation to the perceptions of relevant stakeholders.30 Thus some characteristics may be more credible to some stakeholder groups than others. This raises the important possibility that agents are strategically sending particular signals in order to attract the support of specific stakeholders. Given this potential dynamic, it is important to understand how perceptions of legitimacy drive stakeholders’ responsiveness to these signals of credibility.
Building on Max Weber’s original conception, Brown University sociology professor Mark Suchman defines legitimacy as the belief that “the actions of an entity are desirable, proper, or appropriate,”31 while Cornell University government professor Norman Uphoff describes how legitimacy is granted to individuals or organizations “in keeping with the beliefs people have about what is right and proper.”32 Legitimacy theory suggests that organizations depend on legitimacy for their survival and will use strategies such as information disclosure to ensure its continued supply, while stakeholder theory further suggests that organizations will disclose information that is salient to stakeholders they perceive as particularly important sources of legitimacy.33 Such disclosures can earn an organization several different types of legitimacy. Information that contributes to the self-interest of stakeholders can enhance an organization’s pragmatic legitimacy, while information that enhances the welfare of society can enhance its moral legitimacy. Likewise, information that encourages stakeholders to view an organization as a natural and inevitable part of their lives can enhance its cognitive legitimacy.34
Stakeholders may evaluate these forms of legitimacy in terms of either an organization’s actions and “outputs” or its essence and “inputs.”35 Output legitimacy, or “rule effectiveness,” is the extent to which initiatives “effectively solve the issues that they target,” and requires comprehensive coverage of the relevant actors, strong rule efficacy, and effective enforcement.36 Such outputs, which are the subject of chapter 6, can be difficult to systematically quantify,37 and so audiences may instead focus on the input legitimacy of green claims and whether the process by which the claims were generated is perceived as justified.38 This form of legitimacy derives from a concern in democratic theory that “political choices should be derived, directly or indirectly, from the authentic preferences of citizens.”39 From this perspective, process matters as much or more than outcomes. Sébastien Mena and Guido Palazzo, business school professors at the City University of London and University of Lausanne, respectively, suggest that input legitimacy requires stakeholder inclusion, procedural fairness of deliberations, promotion of a consensual orientation, and transparency of an organization’s structures and processes.40 In essence, both who is involved and how they are involved are relevant to determining the input legitimacy of an organization or initiative.
The different signals of credibility discussed previously can help organizations earn these various types of legitimacy. For example, organizations can gain pragmatic and moral legitimacy if stakeholders view them as trustworthy and competent enough to deliver information that is relevant to either themselves or society at large. Likewise, they can gain cognitive legitimacy if stakeholders sense their traits of trustworthiness and competence are culturally appropriate and perceived as “predictable, meaningful, and inviting.”41 These traits generally help agents earn input legitimacy; trustworthiness and expertise are both traits that stakeholders may value as important inputs in the process of developing sustainability information. The one exception is transparency about outcomes, which, as we explore further in chapter 6, can also earn agents output legitimacy.
These grants of legitimacy are often coupled with a transfer of resources to the organization. These resources may be either tangible or intangible, and can include grants of authority, influence, information, economic resources, or social prestige (the sections that follow provide a more detailed description of these different types of resources). If the organization fails to continue to send relevant signals of credibility, does not demonstrate its accountability to the stakeholder providing these grants, or otherwise undermines the stakeholder’s perceptions of its legitimacy, trustworthiness, or competence, then that stakeholder may discontinue these grants of resources. Stakeholders, however, may disagree over which of these signals of credibility are most important, and may be willing to grant legitimacy for some traits more than others. In this case, which stakeholders and which signals do evaluation organizations and firms prioritize in their pursuit of legitimacy? Do they focus more on signals of trustworthiness or competence, for example?
The rest of this chapter addresses these questions in the context of information-based environmental governance strategies. Using the EEPAC Dataset of 245 cases of environmental certifications and ratings discussed in earlier chapters, it explores not only what signals of credibility these initiatives are sending via their websites, but also what grants of legitimacy they are advertising as additional reasons to trust them. These grants of legitimacy are manifested by a transfer of resources from a range of public, private, and civil society organizations, and also represent a signal of credibility and “trustworthiness by association” in their own right. Figure 3.1 provides a graphical depiction of the flow of credibility signals and legitimacy grants between stakeholders (the trusting parties) and the organizations seeking their trust (the trusted parties—firms and evaluation organizations making sustainability claims). The signals of validity and effectiveness that it depicts are discussed in chapters 4 and 6, respectively.
Green Trust Deficits and Opportunities
The trustworthiness communication pathways shown in figure 3.1 are particularly important for eco-labels and sustainability ratings. The information that these initiatives provide is usually a “credence good,” which requires trust in its quality even after its use because it is difficult to know how accurate it is.42 This partially explains why 56 percent of Americans do not trust companies’ green claims.43 These distrusting consumers may suspect that such claims are not authentic examples of improved environmental performance, but rather are the result of efforts to deflect environmental criticisms, superficially jump on the green bandwagon, earn revenue from products marketed as green, or get credit for minimal regulatory compliance (rather than going above and beyond legal requirements).44 Whether these efforts are the result of market, organizational, or individual psychological drivers,45 consumer concerns about them can culminate in a belief that increased prices associated with eco-labels are not due to legitimate differences in production costs. Thus more than 50 percent of American shoppers, for example, believe organic food is too expensive and organic certification is “an excuse to charge more.”46
These beliefs likely stem from either a specific distrust in the organizations behind these sustainability claims or more generalized social mistrust. On the one hand, Aarhus University professor of economic psychology John Thøgersen and his colleagues found that Danish consumers, for example, are less likely to consider the Marine Stewardship Council seafood eco-label in their purchases if they have relatively low levels of trust in World Wildlife Fund (WWF), one of the organizations behind the certification.47 On the other hand, in a survey of citizens across eighteen European countries, University of Konstanz professor of corporate social responsibility Sebastian Koos found that participants who generally consider other people to be trustworthy are more willing to purchase products with eco-labels. Koos concludes that people living in countries with relatively low levels of such generalized trust may be particularly suspicious of green claims. In 2008, the United States ranked as the tenth least-trusting industrialized country, suggesting that environmental certification programs face relatively high levels of distrust among Americans.48
Nevertheless, some types of distrust and mistrust can paradoxically lead to greater trust in these information-based governance initiatives. Ken Peattie, a professor of marketing and strategy at Cardiff Business School, asserts that green labels can address the loss of trust among consumers due to media coverage of greenwashing controversies by providing consumers with reliable information about product ingredients, production methods, in-use resource efficiency, and their lifespans.49 Research by University of Nantes scholar Dorothée Brécard and her colleagues shows that people who do not trust that governments are adequately protecting fisheries are more likely to buy eco-labeled seafood products. Their mistrust and/or distrust of fishery regulations is apparently greater than their mistrust and/or distrust of seafood eco-labels.50
Thus trust can vary by sector and organization, and not all sources of information are uniformly perceived as untrustworthy. Over two-thirds of Americans report that information provided by word-of-mouth discussions, the news media, food retailers, and food companies helps them learn how food companies promote human and environmental well-being and the safety of food sources.51 Sustainability experts generally trust nongovernmental organizations (NGOs) more than governments to evaluate a company’s sustainability performance, which parallels the higher levels of trust that NGOs enjoy over both national governments and global companies among the general public, both in the United States and abroad (an Edelman survey, for example, shows that NGOs are the most trusted institution in twenty-three of twenty-six countries).52 Mario Teisl, an economist at the University of Maine, confirmed that consumers have a strong positive bias toward eco-labels provided by NGOs by experimentally comparing their evaluations of labels provided by four different types of organizations. The label attributed to the Sierra Club garnered the highest ratings of environmental friendliness and satisfaction, ahead of labels attributed to the Forest Stewardship Council, the EPA, and a fictional Maine Wood Products Association.53
However, other studies have shown that government involvement can significantly enhance consumer acceptance and the outcome effectiveness of environmental certifications. In a review of five energy labels, for example, Abhijit Banerjee (MIT) and Barry Solomon (Michigan Technological University) conclude that “government support proved to be crucial in determining a program’s credibility, financial stability, and long-term viability.”54 And Kim Mannemar Sønderskov and Carsten Daugbjerg, political scientists at Aarhus University and Australian National University, respectively, find that consumer confidence that products marketed as organic are indeed organic is generally stronger in countries (such as Denmark) where the government plays a strong role in the organic certification process.55
These findings suggest that different audiences are making different evaluations of the trustworthiness of the organizations behind environmental evaluations of companies and products. The reputations of these organizations undoubtedly play a critical role in how stakeholders perceive different eco-labels and ratings, but those perceptions may also be influenced by other signals of credibility broadcast by these initiatives. This is particularly true of online initiatives. As Michigan State accounting and information systems professor Harrison McKnight and his colleagues find, website trustworthiness and website quality (to which we will return in chapter 5) are both important determinants of consumer trust.56 Other scholars suggest more specific criteria for evaluating label legitimacy, including stakeholder inclusivity (along the entire supply chain), independence, expertise, discursive quality, democratic control, and transparency (including auditability).57
Several surveys suggest that independence, transparency, and expertise are particularly important criteria for both consumers and sustainability professionals. For example, I conducted an online survey that asked 428 consumers to identify their most preferred characteristics of eco-labels.58 From a set of thirty-two attributes that included affiliations with specific types of organizations (media, corporate, nonprofit, government, and academic) and specific content areas, independence and transparency were the two most preferred characteristics of eco-labels. The inclusion of energy/climate change criteria and expertise were the third and fourth most preferred characteristics. Similarly, a survey of more than a thousand sustainability professionals found that the three most important factors for this audience, in order of importance, were objectivity/credibility of the data sources, disclosure of methodology, and experience and size of the research team.59 These three top factors map well to the dimensions of transparency (disclosure), independence (objectivity), and expertise (research team experience) identified in my consumer study.60
Signals of Credibility
While other characteristics may also influence stakeholder perceptions of these programs, these results suggest that transparency, independence, and expertise are among the most likely to effectively serve as specific signals of credibility for these initiatives for a broad range of audiences, from sustainability experts and professionals to the public at large. These three characteristics may overlap and complement one another (e.g., experts can be independent and transparent), but nevertheless represent distinct and independent characteristics that initiatives can choose to signal to their audiences. While I discuss transparency in chapter 4 in the context of methodological validity and replicability, the sections that follow describe the nature of expertise and independence as signals of credibility. They then present empirical data showing the extent to which the 245 cases in the EEPAC Dataset are sending these signals to their respective audiences.
Independence: Signaling Distance
Independence of the assessment organization and its lack of conflicts of interest is perhaps the most commonly mentioned proxy for trustworthiness in the literature on sustainability claims.61 INSEAD professor of ethics and social responsibility Craig Smith and his colleagues argue that a claim’s credibility is particularly undermined “where consumers perceive firm-serving motivations rather than motivations to serve the public good.”62 Anita Jose, a management professor at Hood College, and Shang-Mei Lee, a finance professor at St. Edward’s University, find that “companies are using third party external audits to establish the credibility of their commitment to environmental management practices.”63 The underlying logic is that companies should not be the principals for independent assessments. In other words, the more objective and distant the source of an assessment is from the source of the product, the better. The assumption is that signals of credibility either sent directly by firms or by evaluation organizations associated with those firms are inherently unreliable.
Independence maps well to scholars’ definitions of trustworthiness as a form of safety.64 People may be more likely to trust and feel “safe” using information coming from third parties that have fewer conflicts of interest. If they are professional certification organizations, academic institutions, or government agencies, they may be perceived as more “fair” and “calm.” If the third parties are nonprofit organizations, they may also be perceived as more “altruistic” and “kind.”65 Policies promoting independent data verification or generation by third parties may also express a normative belief in the value of civil society organizations as advocates of the public’s interests. Likewise, they also imply that the critical locus of power and accountability should be with these organizations because of their public orientations, watchdog status, and focus on social welfare. In this sense, agents emphasizing their independence may be recruiting grants of moral input legitimacy from principals who value the role these intermediary organizations play in society. Such an approach is justified by surveys that consistently find that nongovernmental organizations are society’s most trusted institutions, both by the public in general terms and by sustainability professionals as evaluators of corporate sustainability performance.66
Despite its importance, most studies of independence have ignored the multiple dimensions of the concept. The first of these dimensions is the type of independence—has the data been generated by independent organizations or only verified by such organizations? Independent generation implies full control of the data from collection to analysis to delivery, while verification indicates external monitoring of a self-assessment process that has higher potential for fraud. For one of its investigations, Greenpeace, for example, sent computers to an independent lab to be analyzed for toxic chemicals, rather than rely on company reports.67 A second important dimension is the source of the independence—is the data generation or verification performed by the evaluation organization or firm itself, or is it conducted by an organization that has been accredited or contracted by a third organization? Increasingly, “third party” certification systems are assigning the strategic roles of standard-setting and administration and the operational roles of monitoring and assessment to separate organizations. A third dimension is the level of independence—is all of the data independently generated or verified, or only some of it?
Three sample text segments provide examples of each of these different characteristics and demonstrate how they were coded. The website of the EPA’s WaterSense program states, “All products bearing the WaterSense label must be tested and certified by an approved third party laboratory to ensure they meet EPA water efficiency and performance criteria.”68 This is an example of a text segment that was coded as all data (“all products … must be tested”), independent generation (external labs, not EPA or the firms themselves, are conducting the tests), and contracted/accredited organization (“an approved third party laboratory”). As a second example, the website of B-Corp states, “When a company becomes Certified they must submit documentation for approximately 20 percent of their answers to the B Survey … 10 percent of B Corporations are audited every year … [by B Lab auditors].” This text segment was coded as some data (only 10 percent are audited), independent verification (data is submitted by the company), and evaluation organization (B Lab auditors). In cases where the source or type of independence was unclear, such as the phrase “third-party, independent validation and verification” found on Rainforest Alliance’s website, the text segment was coded as evaluation organization and independent verification by default.
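For readers who want to see the coding scheme at a glance, the three dimensions just described can be represented as a simple record, with the two sample segments coded as in the text. This is a minimal illustrative sketch only; the class and field names are hypothetical and are not drawn from the actual EEPAC codebook.

```python
from dataclasses import dataclass

# Hypothetical record capturing the three dimensions of independence
# described above: level, type, and source of the independence.
@dataclass
class IndependenceCode:
    level: str   # "all data" or "some data"
    kind: str    # "independent generation" or "independent verification"
    source: str  # "contracted/accredited organization", "evaluation organization", or "firm"

# EPA WaterSense: all products tested by approved third-party laboratories.
watersense = IndependenceCode(
    level="all data",
    kind="independent generation",
    source="contracted/accredited organization",
)

# B-Corp: 10 percent of certified companies audited each year by B Lab auditors.
bcorp = IndependenceCode(
    level="some data",
    kind="independent verification",
    source="evaluation organization",
)
```

As the Rainforest Alliance example illustrates, ambiguous segments default to independent verification by the evaluation organization under this scheme.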
Almost 40 percent of the cases in my EEPAC Dataset verified or generated at least some of their data. Slightly over 14 percent of the cases generated their own data independently of the organizations being evaluated, and slightly over 33 percent had mechanisms in place to verify the accuracy of the data they received from the organizations they were evaluating. Almost 30 percent of the cases verified or generated all of their data, and nearly 10 percent verified or generated some of their data. Approximately 28 percent of the cases have other organizations generate or verify their data, while just under 18 percent generate or verify their information themselves. Figure 3.2 presents a more granular view of these data. The proportion of cases that use independently verified or generated data was not significantly different for cases implemented by firms than for cases implemented by evaluation organizations.69
An additional dimension of independence is the type of peer review, if any, that is used in the evaluation process. Both the methods used in the evaluation and the data collected can be peer reviewed, and the review can be conducted by individuals with varying levels of expertise who work inside or outside the firm or evaluation organization. An example of data peer review comes from the Rainforest Alliance, which states that a team of trained specialists writes an assessment report of a farm or forest that has applied for certification, and this report is then “evaluated by an independent, voluntary committee of outside experts (i.e. peer reviewed).” An example of method peer review comes from Protected Harvest, which states that its “standards are peer-reviewed by the scientific community and then must be approved by the distinguished environmentalists on the Protected Harvest board.” Approximately 5 percent of programs mention peer review processes for their methods, and 4 percent mention peer review processes for their data.70 Less than 2 percent of the cases specified the expertise of the individuals conducting the peer review process. For example, one text segment states that “BASF’s eco-efficiency was carefully examined and evaluated by David R. Shonnard, PhD, an independent expert in green engineering,” and goes on to describe his academic credentials.
Expertise: Signaling Knowledge
Expertise has also been cited as an important aspect of legitimacy,71 which is not only an evaluation of particular decisions but also of the suitability of those who make them.72 Thus the people implementing these initiatives may have varying levels of knowledge that make them more or less qualified to determine the sustainability of a particular product or company. Assurance statements for corporate social responsibility reports therefore often provide “commentary from high profile experts deemed trustworthy by the public.”73 In some cases, regulatory agencies may even delegate policy-making authority to private agents because of their preexisting specialized expertise in particularly complex and technical issue areas.74 There is a rich literature on the subject of expertise, and it discusses the phenomenon both generally as well as in the specific context of environmental politics.75 One important distinction that this literature reveals is the difference between expertise from academic training (“book learning”) and expertise from professional experience (“learning by doing”).
Expertise is one of the core dimensions of academic typologies of credibility, and is a primary reason why the public might accept an organization as legitimate in the absence of more direct evidence of output legitimacy.76 An emphasis on the expertise behind an assessment process may represent a commitment to scientific knowledge as the best way to ensure the validity of an evaluation (and to address sustainability challenges more generally). From this perspective, it is the scientists and experts who should be trusted to solve society’s environmental problems and evaluate claims of greenness. Following this logic, organizations that hire people with relevant expertise are more likely to produce valid environmental assessments.
An emphasis on expertise may also represent a normative commitment to the rigorous pursuit of truth as a fundamentally important social value. It may also signify an attempt to activate a sense of cognitive input legitimacy; like technical evaluations in other domains, assessments of sustainability should naturally be conducted by experts with relevant technical knowledge, and to think otherwise is “unthinkable.”77 Such a dynamic would explain why over a thousand sustainability experts rated the experience and size of the research team as one of the three most important factors in determining the credibility of a corporate sustainability rating.78
Expertise can be produced through academic training or from professional experience. Academic training can be further categorized as general training or training that is directly relevant to the organization’s work. In order to capture these dimensions of expertise, text segments were coded as general academic training, relevant academic training, and relevant professional experience. As an example of general academic training, the CarbonNeutral website states that its executive vice president “holds an MBA with Distinction from the Stern School of Business at NYU and a Bachelor’s degree in Psychology from UCLA.” The Bird Friendly Coffee website states that the director of the organization behind the certification has a PhD in ornithology from the University of California, Berkeley, which is an example of relevant academic training. The website of the 100 Best Corporate Citizens provides an example of relevant professional experience; its director of research is described as having “more than a dozen years of experience supporting institutional investors with research and software tools for values-based investing and proxy voting.”
The coding data indicates that nearly one out of five cases (18 percent) claim that at least one staff member working on the initiative has relevant professional background and expertise (i.e., substantive, full-time past work on environmental or social issues). Slightly over 10 percent claim to have staff with academic training (master’s degree or above) that is relevant to environmental or social issues, while approximately 7 percent claim to have staff with academic training (master’s degree or above) that does not have a clear relationship to the work of the initiative (see figure 3.3). While approximately 25 percent of the cases implemented by evaluation organizations make at least one of these claims of expertise, none of the initiatives implemented by firms make any claims of expertise. Firms are therefore significantly less likely than evaluation organizations to signal their expertise.79
The Landscape of Credibility Signals
As in chapter 2, we can combine all of this information about credibility signals into a single representation of the landscape of credibility signals that these cases are sending to their audiences. As figure 3.4 shows, a major proportion of the initiatives (the 119 in the upper left-hand corner of the figure) make no claims of either expertise or independence. The eight cases in the bottom right-hand corner mention at least some level of expertise
and claim to generate all of their data. These initiatives are Rainforest Alliance Certified, Bird Friendly Coffee, Certified Best Aquaculture Practices, Certified Naturally Grown, Certified Compostable Products, Design for the Environment, AHRI Certified, and GreenGuard.
Grants of Legitimacy
The signals of credibility discussed above may be perceived positively by different stakeholder groups, who in turn may be willing to endorse the organization sending the right signals. As discussed earlier, such recognition is a “grant of legitimacy,” and can come in many different forms. Such grants are mechanisms not only to express support for a program, but also to exert control over it. Either way, they are a useful signal themselves to other stakeholders regarding the allegiances and accountability of different information-based governance initiatives. The power resource framework developed by Warren Ilchman and Norman Uphoff at UC Berkeley provides a useful approach for identifying the key mechanisms, or “resources,” that organizations and individuals use to exert power over these initiatives, which in turn become either positive or negative signals of credibility for other organizations and individuals.80 These primary resources of power include funding, social status, authority, and information.81 Understanding how these power resources are distributed can reveal “who claims authority over whom, and on what issues” and “who accords legitimacy to whom, on what grounds, and with what limitations.”82 More specifically, it can help explain why ratings and labels are designed the way they are and who is driving and endorsing those design decisions.
The actors who may be behind these initiatives can be divided into three general categories—organizations and individuals from the public, private, and civil sectors. The public sector comprises all state-owned institutions, including government agencies and nationalized industries.83 The private sector “encompasses all for-profit businesses that are not owned or operated by the government.”84 The civil sector, or civil society, is the “sphere of institutions, organizations, and individuals located between the family, the state, and the market in which people associate voluntarily to advance common interests,” and includes both nonprofit and academic institutions.85 In order to identify the extent to which the private, public, and civil sectors are using the different power resources to exert power over these cases, I coded the websites of my 245 cases for the ways in which organizations from each of these sectors are involved in them.
For each type of involvement (e.g., funding), cases with either nonprofit or academic codes (and no other organization codes) were coded as civil sector, while cases with either retailer or supplier codes (and no others) were coded as private sector.86 Cases with only government codes were coded as public sector. I also created codes for mixed sector involvement—public-private, private-civil, public-civil, and public-private-civil. Each type of involvement maps to the different power resources just described, and is explained in the corresponding sections that follow.
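The sector-coding rules above can be read as a simple mapping from the organization-type codes recorded for a case to a sector label. As an illustrative sketch (the function name is hypothetical; the code names follow the text), the logic might look like:

```python
# Illustrative sketch of the sector-coding logic described above.
# The code names ("nonprofit", "retailer", etc.) follow the text;
# the function itself is hypothetical, not part of the EEPAC codebook.

CIVIL = {"nonprofit", "academic"}
PRIVATE = {"retailer", "supplier"}
PUBLIC = {"government"}

def sector_code(org_codes):
    """Map a case's organization-type codes to a sector label,
    including the mixed-sector categories (e.g., "private-civil")."""
    codes = set(org_codes)
    sectors = []
    if codes & PUBLIC:
        sectors.append("public")
    if codes & PRIVATE:
        sectors.append("private")
    if codes & CIVIL:
        sectors.append("civil")
    return "-".join(sectors) if sectors else "none"
```

For example, a case involving only a retailer and a nonprofit would be coded `private-civil`, matching the mixed-sector categories named in the text.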
Grants of Authority
The most obvious way an organization can exert power over an initiative is to lead it—to be the primary institution that has the authority to make its day-to-day operational and strategic decisions. Such authority can direct an initiative to focus on certain issues and methods while ignoring others, which can benefit certain actors while disadvantaging others. As panel A in figure 3.5 illustrates, initiatives that describe themselves as being implemented solely by civil sector organizations are the most common type of initiative in the dataset (33 percent). These include advocacy organizations such as Environmental Defense, certification organizations such as the Forest Stewardship Council, media organizations such as the National Geographic Society, rating organizations such as the Carbon Disclosure Project, research institutions such as the Aspen Institute, and academic institutions such as Claremont McKenna College.
Cases led solely by private sector organizations (23 percent) are the second most common type of case. Companies leading these initiatives include HP, Amazon.com, Whole Foods, and Staples. Retailers account for 72 percent of the cases led by the private sector, while 28 percent are led by suppliers (i.e., manufacturers of products being evaluated). Initiatives led solely by public sector organizations account for 6 percent of the cases, and include programs such as ENERGY STAR, Design for the Environment, and Certified Organic. Only seven cases are led by more than one sector. These include nonprofit organizations, such as the Business and Institutional Furniture Manufacturers Association, that serve as business associations for suppliers of the products being evaluated. They also include collaborations between civil and private sector organizations, such as the Climate Savers Computing Initiative. The type of implementation organization could not be identified for over a third of the cases.
These results provide a valuable snapshot of the types of organizations that are implementing these initiatives. However, authority over information-based governance strategies can be wielded in ways beyond their direct implementation. An initiative’s leader may delegate or share its authority with other organizations through partnerships and coalitions, associations via advisory boards and boards of directors, and direct involvement in the design of the initiative. Such indirect authority can be used to recommend certain approaches that would cast either a positive or negative light on organizations being evaluated. Each of these three types of authority sharing were coded and combined into an aggregate metric of “organizational association” by each sector.
Panel B of figure 3.5 shows that associations with either civil (12 percent) or private (13 percent) sector organizations are most commonly mentioned on the websites, followed by associations with both private and civil organizations (9 percent). Initiatives only mentioning associations with civil sector organizations include FishWise and Citizens Market, while those only mentioning associations with private sector organizations include the Corporate Responsibility Index and the Green Hotels certification. An example of an initiative with associations with both civil sector organizations and private firms is the Forest Stewardship Council. Over 50 percent of the cases do not mention any such associations.
Grants of Economic Resources
Organizations can also exert power over these initiatives by funding them. Funds can come with explicit or implicit strings attached that require an initiative to use a particular method or set of criteria that would benefit the funder. As panel C in figure 3.5 shows, cases that only mention financing from private sector organizations are the most common in the dataset (8 percent), and suppliers account for 80 percent of those organizations. One case, the Best Aquaculture Practices certification, mentions financial support from both suppliers (ten “visionary industry leaders”) and retailers (Darden Restaurants). Organizations mentioning funding from only the civil sector (4 percent) and from all three sectors (3 percent) are the next most common. An example of an organization only receiving civil sector funding is the Electronic Takeback Coalition’s TV Companies Report Card, and an example of an organization receiving funding from all three sectors is the Marine Stewardship Council. Approximately 7 percent of the cases receive funding from more than one sector. Notably, 79 percent of the cases do not provide any information about their funding sources.
Grants of Information
The information used by information-based governance is a power resource itself, and can serve as a mechanism by which the sources of that information can exert power over these strategies. For example, initiatives that rely on data provided by companies are limited to what information those companies provide to them. Companies can provide false or misleading data that obscures their true environmental performance. As panel D in figure 3.5 shows, cases that only mention private sector organizations as their source of data are the most common in the dataset (13 percent), and all of these organizations are suppliers (as opposed to retailers). Examples of cases that only mention the use of private sector data are ENERGY STAR, Climate Counts, and the Chemical Home. Cases that only mention the public sector as their source of data are the second most common type of case. Examples of these cases include the Auto Asthma Index and FishWise. Cumulatively, 15 percent of the cases use data from more than one sector. Across all cases (both those that use data from a single sector and those that use data from multiple sectors), the proportions of cases using data from public sector sources and from private sector sources are statistically equivalent (22–23 percent).
Grants of Prestige
Organizations can also exert power over information-based initiatives by recognizing and granting prestige to them (or by withholding such recognition). Prestige can be transferred explicitly by endorsement or implicitly by their use of the initiative’s information. For example, Green Home states that it has “received endorsements from throughout the environmental community, including Environmental Defense and The Earth Charter,” while EPEAT lists organizations that have instituted an EPEAT certification purchasing requirement, including the United States Marine Corps, the City of San Francisco, and Yale University. Similar to funding, such endorsements can come with a quid pro quo—to earn the endorsement, an initiative must commit to a certain approach that would benefit the endorser. Panel E in figure 3.5 reveals that cases that only mention such recognition from the private sector are the most common in the dataset (10 percent), with retailers being mentioned in 62 percent of those cases and suppliers being mentioned in 50 percent of them. Some examples of cases that only mention endorsements or use by the private sector are Cradle to Cradle certification, Dolphin Safe, and the Best 50 Corporate Citizens. Approximately 5 percent of the cases mention endorsements or use by organizations from more than one sector. Nearly 80 percent of the cases do not provide any information about endorsements or use by government, nonprofit, academic, retailers, or suppliers.
The Landscape of Legitimacy Grants
The preceding sections document how four different resources—authority, economic resources, information, and status—are distributed across a dataset of 245 information-based environmental governance initiatives. These resources are grants of legitimacy that can serve both as mechanisms of control over these initiatives and as signals of credibility (or the lack thereof) to other organizations. Figure 3.6 aggregates this data into a single snapshot of these grants of legitimacy by the public, private, and civil sectors across the 245 cases. Approximately 25 percent (the sixty-two cases in the bottom right-hand corner) of these initiatives mention at least one resource from each of the three sectors, suggesting they have been deemed legitimate by at least one organization within those sectors. An additional 32 percent mention at least one resource from two of the three sectors, and another 33 percent mention at least one resource from one of the three sectors. The twenty-nine cases in the back top left-hand corner mention no resources from any of the three sectors.
The Information Realist Perspective
As in chapter 2, observers who are optimistic about information-based governance strategies will likely interpret these results in a positive light, or at least as a glass half full. These cases are clearly sending a wide range
of credibility signals based on both their own particular attributes—their independence and expertise—and grants of legitimacy from other organizations. The fact that more than half of the initiatives clearly describe their criteria and more than a third provide detailed and complete descriptions of their methods may be particularly encouraging for these information optimists. While they would like to see more than the 40 percent that verify or generate at least some of their data and the 25 percent that claim to have some type of expertise, these are still significant proportions of the dataset. They might also point out that initiatives that do not describe their expertise or their independence do not necessarily lack them.
This is true for the data on the grants of legitimacy as well, which these information optimists will likely view as also encouraging. More than half of the cases signal that they have received at least some type of support from at least two sectors of society, and only 12 percent mention no such support from any sector. A wide range of institutions are supporting these initiatives, suggesting that they do indeed trust them. To the extent they do not explicitly trust them, the deployment of different forms of power over these programs—funding, authority, and so on—indicates these institutions have created mechanisms to keep the initiatives accountable to them. For those few that have not received many (or any) such grants of legitimacy, stakeholders can easily identify and avoid these programs, especially if their other signals of credibility are also inadequate. While these information proponents would like to see more such signals, they are confident that those that do send them are (and will continue to be) rewarded, and the collective trustworthiness of these programs will continue to be ratcheted up by this process.
Meanwhile, the information pessimists likely view this idea as incredibly naïve, and see this data quite differently. To these observers, the landscape of credibility signals in figure 3.4 looks particularly barren, with nearly half of the cases congregating in the zone of no expertise or independence. The paucity of cases that generate their own data is particularly concerning—independent verification, which relies on companies self-reporting the vast majority of their data, is a poor substitute for evaluations that are conducted truly independently. The lack of expertise (75 percent of the cases do not mention any at all) is also alarming. Not knowing the backgrounds of the people doing these assessments, how can we possibly believe they know what they are doing?
Perhaps an organization that you trust has endorsed or is running the initiative, and that is good enough for you. But the data on such endorsements would likely raise additional concerns for most information pessimists. First of all, it is not at all clear who is behind most of these initiatives. One-third of the cases do not provide any information about the type of organization behind them, over half do not provide any information about the organizations that are advising or partnering with them, and nearly two-thirds do not provide any information about their data sources. Approximately 80 percent do not mention any of their funding sources or any organizations that have endorsed them or used their services, while 12 percent do not disclose any information about any of the four resources of power discussed in this chapter. Only two cases (less than 1 percent), FishWise and EPEAT, provide information about all four resources.
This opacity of the power and accountability relationships driving these initiatives may cause many observers to hesitate in trusting them. However, even the relationships that are documented will likely raise concerns for information pessimists because they often create what I call “power hybrids.”87 These power hybrids are cases in which power associated with a particular resource is shared between two or more sectors, and they dominate the dataset described previously. Approximately 35 percent of the cases are “intra-resource hybrids”—multiple sectors provide them with the same power resource (e.g., receive funding from the private and public sectors). An additional 22 percent are “inter-resource hybrids”—multiple sectors provide them with different power resources (e.g., they receive funding from the private sector and data from the public sector). Information and indirect authority are the most common types of resources to be provided by multiple sectors (15 percent and 20 percent of the cases, respectively). Across all power resources, cases that use power resources from both private and civil sector organizations are the most common type of power hybrid (14 percent of cases). Tri-sector power sharing of a power resource occurs in 12 percent of cases.
These power hybrids have intricate and complex relationships with private, public, and civil society actors, and the information optimists might argue that these connections enable them to reach wider audiences and be more effective. But the information pessimists might in turn argue that they instead create significant conflicts of interest and cross pressures of accountability, which can result in what has been called “multiple accountability disorder” and the “problem of multiple masters.”88 These conflicts of interest can undermine an initiative’s perceived legitimacy because they suggest conflicting accountability relationships, which can be “a critical element in the construction and contestation of legitimacy claims.”89 With so many “masters,” the master that a particular stakeholder trusts may have little or no actual power in these hybrid organizations, and thus its involvement with these organizations may not be perceived as increasing their legitimacy.
Information realism recognizes these concerns about the opacity and hybridity of power in these programs, but also acknowledges that some initiatives are nevertheless doing a better job than others in communicating their trustworthiness to their audiences. For the information realist, the landscapes of credibility signals and legitimacy grants in figures 3.4 and 3.6 reveal both the challenges and opportunities of information-based governance. Unlike the information optimist, the information realist does not see an inevitable march toward progress and greater trustworthiness in this data. Nor does she see the barren and hopeless landscape that the information pessimist does. Instead, she sees an ecosystem of niches that programs can populate. Obviously the figures presented are simplifications of this ecosystem—many more constellations of expertise, independence, and organizational support are possible. Rather than simply seeing more signals of credibility and more grants of legitimacy as necessarily better, she views strategic and purposeful articulations of credibility and legitimacy as the goal for these programs.
Thus the opacity and hybridity of power discussed here are problematic if they create confusion among stakeholders, but they are not problems in their own right. Some initiatives inevitably will be opaque about some of their attributes, intentionally or otherwise, and if their audiences are aware of and can evaluate that opacity, then it is not a problem. Instead, it represents another way to assess their trustworthiness. Similarly, some initiatives will inevitably have conflicts of interest created by their associations with different types of organizations. If stakeholders are aware of these conflicts and how these organizations claim to manage them, then they have another data point they can use to evaluate them. Thus the management of hybridity, as opposed to hybridity itself, becomes the issue to focus on. While this perspective requires stakeholders to be thoughtful about whom they do and do not trust, it also recognizes that they likely trust organizations for different reasons. It also requires evaluation organizations and firms to be intentional about how they signal their credibility, while recognizing there is no one-size-fits-all approach to communicating trustworthiness.
The real challenge that information realism presents is building the capacity of both trustees and trustors to create trusting relationships. Fundamental to such relationships are clear signals of credibility, legitimacy, and accountability, and the clarity of these signals depends on well-defined standards for different dimensions of credibility. For example, precise standards for different types of independent data verification and generation—including those that require civil, discursive, and/or consensual engagement with stakeholders—should be developed.90 Such standards would clarify the confusion around the term “third party,” and create clear categories of independence that differentiate between firm-paid vs. nonfirm-paid, governmental vs. nonprofit monitoring, and standard-setting vs. standard-checking.91 Standards for relevant professional and academic expertise would be useful as well, particularly given the relatively low prevalence of expertise signals in the sample. The U.S. Green Building Council’s LEED-related credentials for architects are one example of such a standard, and could be replicated for professionals involved in the design and operation of eco-labels and other forms of environmental evaluation.92 Such standards would enable easier comparison among expertise claims and highlight the importance of having relevant skills. They would also enable true domain experts to credibly differentiate themselves from advocates, executives, and academics who do not possess those skills.
Audiences, however, do not magically learn about these standards or whether initiatives meet them, and so a second foundation of trusting relationships is a forum for learning and communication. In other words, in order for the market-based, ratcheting-up process that information optimists promote to work, an “information marketplace” is needed that enables direct comparisons of credibility signals across ratings and labels. Such a marketplace, which could be online and virtual, could improve the accessibility of standardized credibility signals to stakeholders, and enable them to more easily select the agents that best match their preferences. Efforts would be necessary to make this marketplace as inclusive and accessible as possible so that all actors would be able to participate in it.
The reality is that while it is probably smart to send multiple signals of credibility, limited resources may require initiatives to make tough choices if they want these signals to themselves be credible. Given that their stakeholders likely have different signal preferences, they must decide to which audiences they want to appear credible and accountable, and which sources and types of legitimacy are their highest priorities. For example, if information-based programs prioritize the credibility and cognitive legitimacy associated with expertise and view themselves as most accountable to experts, then they should emphasize signals of expertise. If they prefer to enhance their trustworthiness and moral legitimacy and build support among civil society organizations, then they should emphasize signals of independence.
A similar logic may also apply to grants of legitimacy. Despite the calls for “stakeholder inclusivity” by both scholars and practitioners,93 having support from very different organizations may send conflicting signals of credibility. An endorsement from Walmart may discredit a program among environmentalists, while an endorsement from Greenpeace may raise concerns among conservatives. Audiences may sense a “problem of multiple masters,” which Barbara Romzek (University of Kansas) and Melvin Ingraham (Syracuse University) conclude often leads to managers focusing on one or two of these relationships on a daily basis with the others “being in place but underutilized, if not dormant.”94 As Oxford University political scientist Walter Mattli and Duke University political scientist Tim Büthe find in their case study of U.S. financial accounting standards, private sector agents with delegated public authority may focus more on the interests of their private principals, especially if those principals are internally cohesive and have distinct preferences from other interested parties.95
Thus initiatives may be smart to streamline their grants of legitimacy to improve their value as signals of credibility. As nice as stakeholder inclusivity sounds, inclusivity may not be beneficial for every initiative, and many audiences may prefer information coming from a single sector or organization. Thus the 34 percent of cases that only mention use of resources from a single sector (21 percent use only private sector resources, 11 percent only civil sector resources, and 2 percent only public sector resources) may be onto something. However, such an approach does not come without caveats and trade-offs. For example, a major caveat is that this logic applies more to NGOs, given the higher levels of trust that society has in them and the valid concern about companies evaluating their own products. In countries with highly polarized politics such as the United States, government labels may also face strong distrust from citizens who do not support the party in power.96
Trade-offs arise even for NGOs, however, because single-sector cases may lose important support from those who do not believe the program is “encapsulating” their interests. Businesses may fear betrayal by NGO-driven programs, for example, and NGOs may distrust corporate initiatives. As Magnus Boström explains, it is the “combination and mutual adjustment of interests” that contributes to an image of neutrality and independence.97 The key to working through this trade-off is to think about the broader information ecosystem and how different initiatives and organizations fit into it. In particular, it is important to understand the relative difficulty of their standards (how high they set the bar), a topic to which we will return in the next chapter. If the only information about a product category is provided by a multistakeholder initiative with relatively lax standards, then an opportunity exists for an NGO to create (p.97) a program with a higher bar. Likewise, if an industry has been evaluated only by an NGO with very high standards, then a business-led program that enables a larger number of companies to do well may gain traction. We will discuss the relative effectiveness of these approaches in chapter 6, but the point here is that both types can be perceived as credible—if they clearly articulate their mission and strategy, whether it is an inclusive approach with broad participation or a more exclusive approach focused on recognizing the leaders in an industry.
Promising and Problematic Practices
So where does this leave us in terms of promising and problematic trust-building practices? Clearly, providing no signals of credibility is likely to be the least effective strategy. A program taking this approach will leave its audience at best feeling neutral toward it, and at worst actively distrusting it. If it provides highly desirable and methodologically rigorous information in a highly accessible format, it may still accomplish its goals, but with a higher risk of failure than if people also perceived it as highly trustworthy. Without such assurance, users will likely withdraw their support if they detect any reason to mistrust the information. In this situation, rebuilding trust may be much more difficult than creating it in the first place.
By proactively sending credibility signals to their audiences, information-based governance initiatives buy themselves time to respond when their trustworthiness is questioned. Having established whom they are accountable to, they can quickly communicate to those stakeholders why they should still be trusted and how any problems that have emerged will be fixed. Chapter 6’s discussion of a controversy surrounding ENERGY STAR provides an example of such a response, and demonstrates the importance of independent data generation and product testing. Generally speaking, organizations that can credibly demonstrate their independence and expertise are likely to earn and retain more trust than those that cannot. However, as discussed earlier, few programs have all of these attributes, and it is not clear which are more important than others. As this chapter explains, it likely depends on who their primary stakeholders are.
Regarding grants of legitimacy, given the greater trust generally placed in nonprofit organizations, some association with them may be more valuable than similar involvement with business and government. This dynamic, (p.98) however, may depend heavily on the context and on what organization, what industry, and what kind of relationships are involved. Furthermore, diversity of support across sectors may not necessarily be an asset—some audiences may trust single-sector initiatives more than multisector initiatives. The key to building trust, therefore, is not the quantity of relationships, but their quality—and how clearly that quality is communicated to key stakeholders.
The Trustworthiness of Toilet Paper Certifications
So let us return to Carrie as she considers how to respond to the journalist’s inquiry. She visits the websites of FSC and SFI to look for the signals of credibility they are sending. FSC claims that “independent certification bodies” conduct its assessments; SFI states that it is an “independent” organization whose “cornerstone” is third-party certification. Both are relatively transparent, providing detailed accounts of their criteria and methods and where their data come from. SFI provides bios of most of its staff, many of whom have relevant professional and/or academic expertise, while FSC only provides biographical information about its director general, who is a former WWF staff member (SFI’s president and CEO is a former forestry consultant). Both have diverse sets of stakeholders associated with their organizations—through general assemblies, boards of directors, external review panels, funding support, and other mechanisms. FSC’s twelve-member board of directors includes representatives from environmental NGOs and companies, as does SFI’s eighteen-member board; SFI’s board also includes academic and government representatives, which FSC’s does not.98
While there are some differences, both cases are sending a wide range of credibility signals. Even someone like Carrie who knows the histories and controversies of the organizations might find them equally trustworthy. But this plethora of signals raises the “problem of many masters” discussed earlier in the chapter—with boards of twelve and eighteen members (and for FSC, a general assembly of over six hundred members), who really is calling the shots?99 Carrie might reason that the FSC General Assembly only meets every three years, so it is unlikely its members exercise much direct influence over the day-to-day operations of the organization, and the same is probably true for these organizations’ boards as well. That leaves a significant amount of control to the leaders of the organizations, and here there (p.99) is a clear difference in orientation. Carrie naturally trusts the FSC’s director general, with his background as a fellow environmental advocate, more than the SFI CEO, with her career as a consultant working with “government, trade organizations, and corporations.”
And so she is tempted to tell the journalist that FSC is the more trustworthy organization. But she hesitates, and thinks a little more deeply about the accountability of these organizations. After the authority of the CEO and boards, funding is probably the most influential of the power resources discussed in this chapter. While FSC provides some limited information about its financial support, neither FSC nor SFI provides a full account of where its economic resources come from. The fact remains that both organizations likely rely heavily on the fees they charge companies for their certification services, which brings us back to the inherent conflict of interest, highlighted by FSC-Watch, that this relationship represents. Companies are motivated to hire certifiers who will indeed certify them, and certifiers have an incentive to do so with few questions asked. If they do not, companies will find someone else who will.
This logic represents a fundamental barrier to building a trust-based relationship between these organizations and their audiences. The challenge faces not only environmental certifications but financial auditors as well. It was at the heart of the Enron scandal in 2001, which caused the disintegration of the accounting firm Arthur Andersen. The Sarbanes-Oxley reforms of 2002 established several new mechanisms to confront this problem, including the Public Company Accounting Oversight Board (PCAOB) to “audit the auditors” and a requirement that auditors be chosen by the audit committee of the board of directors (and not by management). While these reforms have likely helped increase the accountability of auditors, the accounting industry continues to be implicated in fraudulent accounting practices around the world; for example, in Britain (at Tesco), the United States (at HP), Japan (at Olympus), and China (at China Integrated Energy). Observers have called for a host of further reforms to increase the independence and trustworthiness of external audits, from expanding the required scope of audit reports to having auditors be selected by shareholder proxy votes, stock markets, or the government.100
All of these practices should be considered in the realm of environmental ratings and certifications as well. They can be implemented by (p.100) individual initiatives or adopted collectively. For example, the Fair Labor Association, which focuses on the treatment of workers, already requires that companies not select their own certifiers but be assigned one by the accreditor.101 This allows the standards and costs of certification to be standardized, eliminating the ability to shop around for the cheapest and least rigorous assessor. Another option, used by Fair Trade USA, is to select one organization (in this case, SCS Global Services) to conduct all certification and auditing activities.102 This also eliminates the conflict of interest because those being certified are no longer selecting their certifier. Governments, industries, or groups of assessment organizations could encourage or require such practices within their spheres of influence. This approach is based on the ancient principle of nemo judex in sua causa—“No person shall be a judge in his own cause”—which is reflected in professional sports, legal doctrine, and academic publishing. It is equally relevant to information-based environmental governance strategies (perhaps even more so, given that the ratings and certifications they produce are credence goods), and should be applied to them as well.
Such a principle could be incorporated into efforts by organizations such as the U.S. EPA; the UK Department for Environment, Food, and Rural Affairs; the ISEAL Alliance; and Consumers Union to create standards of behavior for environmental certifications and ratings. These initiatives are laudable, and include many important criteria, some of which we will return to in the next chapter. Many of their criteria that relate to trustworthiness, however, are overly prescriptive. For example, EPA’s final guidelines for its pilot assessment of standards and ecolabels to be included in its environmentally preferable purchasing recommendations to federal agencies suggest that these programs should enable broad participation by affected stakeholders, consider all relevant viewpoints, include a diversity of interests, and work toward achieving consensus in their design process. These guidelines would likely exclude single-sector initiatives that some audiences might find trustworthy and helpful.
An example from our toilet paper case demonstrates this dynamic. In researching forest products, Carrie comes across several other certifications and ratings of toilet paper (see figure 3.7 for their logos). Two of them, Green Seal and EcoLogo, are somewhat less transparent than FSC and SFI, have similar levels of independence, and do not make any mention of their expertise. Green Seal highlights its associations with a wide (p.101) range of sectors, while EcoLogo only highlights its governmental connections (to the Canadian government).103 Nevertheless, some consumers might trust a label associated with the Canadian government more than the other multistakeholder initiatives, whose accountability they perceive as at best muddled.
A third option Carrie discovers is the “Shopper’s Guide to Tissue Paper,” which provides very limited information about its methods or its associations, and does not appear to have been developed through an open and inclusive process.104 Nevertheless, it provides specific information (buy or avoid, percent recycled, percent post-consumer, bleaching process used) in a relatively usable format, and was produced by the Natural Resources Defense Council (NRDC), a well-known environmental organization. No controversies surround this guide, and there is no evidence that NRDC was paid by any of the companies to produce it—unlike all of the other certifications Carrie has reviewed. When it comes to her relative trust in these programs, she may indeed find NRDC’s assessment to be the most trustworthy (although it may not be the most valid, because it does not appear to have been updated since 2009—an issue to which we return in chapter 4).
So Carrie decides to put her money where her mouth is and tries to find some toilet paper online that meets her standards. She finds products certified by SFI (Charmin UltraSoft), FSC (Scott Naturals), (p.102) EcoLogo (Cascades Enviro), and Green Seal (Atlas Green Heritage). She finds she is less interested in knowing about the organizations behind these certifications than in what they are actually certifying (the topic of chapter 2) and how they are certifying it (the topic of chapter 4). If they claim to certify that the toilet paper comes from sustainably managed forests, how are they defining and measuring sustainable management? While the question of trustworthiness is important, and many people use it as a cognitive shortcut in their decision making, ultimately the question of methodological validity is more important. We will discuss this topic in detail in chapter 4, and return to Carrie’s quandary in that context.
(8.) Much of the information in this chapter is adapted from two articles I published in Political Research Quarterly and Business and Politics (Bullock, “Independent Labels?”; Bullock, “Signaling the Credibility of Private Actors as Public Agents”).
(42.) Darby and Karni, “Free Competition and the Optimal Amount of Fraud”; Nadaï, “Conditions for the Development of a Product Ecolabel”; Nelson, “Information and Consumer Behavior”; Roe and Sheldon, “Credence Good Labeling.”
(p.284) (50.) Most studies do not distinguish between mistrust and distrust, and so I use them somewhat interchangeably in the text here. However, as noted earlier, it is important to recognize that lack of trust can stem from both phenomena.
(57.) Dando and Swift, “Transparency and Assurance”; Boström, “Establishing Credibility”; Mueller, Dos Santos, and Seuring, “The Contribution of Environmental and Social Standards Towards Ensuring Legitimacy in Supply Chain Governance.”
(61.) Banerjee and Solomon, “Eco-Labeling for Energy Efficiency and Sustainability”; Boström, “Establishing Credibility”; Costa et al., “Quality Promotion through Eco-Labeling”; Dando and Swift, “Transparency and Assurance”; Jose and Lee, “Environmental Reporting of Global Corporations”; Maury, “A Circle of Influence”; Nilsson, Tunçer, and Thidell, “The Use of Eco-Labeling like Initiatives on Food Products to Promote Quality Assurance”; Smith, Palazzo, and Bhattacharya, “Marketing’s Consequences”; Starobin and Weinthal, “The Search for Credible Information in Social and Environmental Global Governance”; U.S. Federal Trade Commission, “Proposed Revisions to the Green Guides.”
(68.) “All products” refers to all product types—a sample of products from each product type is selected for testing by the laboratory.
(69.) One-sided Fisher’s exact test = 0.575. Cases implemented by firms include any case in which the organization conducting the evaluation is evaluating its own products or performance. This includes manufacturers as well as retailers who evaluate their own branded products and manufacturer’s products of other companies.
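The one-sided p-values reported in this note and in note 79 come from Fisher’s exact test on a 2×2 contingency table. As a minimal, standard-library-only sketch (the function name is my own, and the EEPAC contingency tables themselves are not reproduced in these notes), the test can be computed directly from hypergeometric probabilities:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided (greater) Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Returns the probability, under the hypergeometric null of fixed
    margins, of a top-left cell count at least as large as `a`.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    # Sum hypergeometric probabilities over all tables at least as extreme.
    return sum(
        comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
        for x in range(a, min(row1, col1) + 1)
    )

# Fisher's classic "lady tasting tea" table [[3, 1], [1, 3]] gives p = 17/70.
print(fisher_exact_one_sided(3, 1, 1, 3))
```

For actual analyses, `scipy.stats.fisher_exact` with `alternative="greater"` is the better-tested equivalent.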
(70.) It should be noted that the Kappa values from the inter-rater reliability analysis (discussed in appendix I) are relatively low for the data peer-review variables. This is likely due to the fact that very few cases have such peer review.
(79.) One-sided Fisher’s exact test = 0.015. It should be noted that the inter-rater reliability for two of these expertise-related attributes was among the lowest in the broader set of data collected. The probability that the agreement between the raters was due to chance is between 60 percent and 66 percent for the academic and relevant academic expertise codes, and their Kappa scores were both less than 0 (-0.04 and -0.07, respectively). As appendix I highlights, this may be more due to the low prevalence of these characteristics (the calculated Prevalence Index was 0.88 and 0.84, respectively, for these two criteria), and less due to the reliability of the coding process. However, expertise may indeed be difficult for both coders and more general audiences to recognize and agree on, especially given the many different forms it can take.
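The interplay between Kappa and prevalence described in this note can be made concrete with a short sketch. Assuming binary 0/1 codes and the common definition of the Prevalence Index as |both-coded-1 − both-coded-0| / n (the rating vectors below are invented for illustration, not drawn from the EEPAC Dataset):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's Kappa for two raters' binary (0/1) codes."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal rate of coding "1".
    p1, p2 = sum(rater1) / n, sum(rater2) / n
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)
    return (p_obs - p_exp) / (1 - p_exp)  # undefined if p_exp == 1

def prevalence_index(rater1, rater2):
    """|both-coded-1 minus both-coded-0| / n; near 1 when an attribute is rare."""
    n = len(rater1)
    both_yes = sum(a == b == 1 for a, b in zip(rater1, rater2))
    both_no = sum(a == b == 0 for a, b in zip(rater1, rater2))
    return abs(both_yes - both_no) / n

# A rare attribute: 80% raw agreement, yet Kappa is slightly negative.
r1 = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
r2 = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(cohens_kappa(r1, r2), prevalence_index(r1, r2))
```

This mirrors the note’s point: when an attribute is rare (a high Prevalence Index), slightly negative Kappa scores can coexist with high raw agreement between coders.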
(81.) Their framework also includes physical force and legitimacy, but these two power resources are beyond the scope of this chapter. While information-based governance strategies by definition do not directly use physical force to accomplish their aims, they may depend on the government’s latent threat of physical force through the enforcement of copyright, trademark, and other relevant laws. For more on this dynamic, see Bullock, “Independent Labels?,” 56. As for legitimacy, while the transfer of power resources reflects an implicit recognition of their legitimacy by the resource providers, a more direct and broad-based measure of the perceived legitimacy of these initiatives would require surveys and interviews with different stakeholder groups that are beyond the scope of this chapter.
(86.) The private sector codes in the EEPAC Dataset are limited to suppliers and retailers, and do not include other types of for-profit firms (e.g., media organizations such as Newsweek, investment firms such as Calvert, or certifiers such as TCO Development) that are associated with these cases. The focus of the coding process was on private sector organizations that are producing or selling products that might be evaluated by these cases (or likely to be evaluated by cases focusing on firm-level sustainability performance).
(91.) Some have described a “third party” as an external group separate from manufacturers, industry associations, and governmental bodies (Gereffi, Garcia-Johnson, and Sasser, “The NGO-Industrial Complex”), while others have described such a party as “external” or “outside” but still paid for by the company (Prakash and Potoski, The Voluntary Environmentalists, 22; Starobin and Weinthal, “The Search for Credible Information in Social and Environmental Global Governance”).
(93.) Boström, “Establishing Credibility”; Dando and Swift, “Transparency and Assurance”; Mueller, Dos Santos, and Seuring, “The Contribution of Environmental and Social Standards Towards Ensuring Legitimacy in Supply Chain Governance”; (p.287) U.S. Environmental Protection Agency, “EPA Environmentally Preferable Purchasing Program Pilot to Assess Standards and Ecolabels for EPA’s Recommendations to Federal Agencies: Final PILOT Assessment Guidelines.”
(103.) These assessments are based on information on their websites in 2009 and 2010.