A Metaphysics of Psychopathology

Peter Zachar

Print publication date: 2014

Print ISBN-13: 9780262027045

Published to MIT Press Scholarship Online: September 2014

DOI: 10.7551/mitpress/9780262027045.001.0001


The Objective Within, Not Beyond, Experience

Chapter:
7 The Objective Within, Not Beyond, Experience
Source:
A Metaphysics of Psychopathology
Author(s):

Peter Zachar

Publisher:
The MIT Press
DOI: 10.7551/mitpress/9780262027045.003.0007

Abstract and Keywords

This chapter explores the problem of gaining information about an objective, mind-independent reality while needing to rely on a larger community to know what to accept as true and while remaining committed to fallibilism. It is argued that a workable notion of the objective can be found within experience. The experiential basis of the objective is located in the realization that things are not always what we want them to be or expect them to be. The normative claims about our obligations that follow from this realization motivate an important interest in the concept of "objective knowledge." Also explored is the notion that the objective and the factual are associated with the experience of rationally compelled acceptance. Being compelled to accept some claims is an important feature of scientific knowledge, but the experience of being compelled is contingent upon the acceptance of epistemological norms.

Keywords: objective, reality, fact, fallibilism, obligation, normative, epistemology

7.1 Liberal and Conservative Approaches to What We Know

Let us return, briefly, to the concept of being in the middle that was introduced in chapter 3. By the time we are at an age to adopt a critical perspective on our own belief systems, we have already been inculcated with a mass of opinions, ready formed. There is no going back to the beginning and constructing our belief systems anew, any more than we can go back to our first year and live our lives over.

Not only are we compelled as knowers to acquire habits of belief; it is also important to become well practiced in using these beliefs. Maintaining currently held beliefs in the face of discrepant information is no easy task. The example I am thinking of here is Darwin, Wright, and Weismann working to maintain their belief in the importance of natural selection when confronted with discrepant information from physics regarding the age of the earth. Where possible, they relied on the support of others who shared their perspective. Relying on recognized authorities, usually a helpful strategy, was not an option for them. Because there was limited community support, each of these men also had to personally commit to and invest intellectual resources in the theory of natural selection.

Typically, when people think of Darwin, they place him in the role of the scientific rebel who possessed the intellectual virtue of curiosity and a liking for novelty in contrast to the defenders of creationism. But Darwin could not have been liberal with respect to creationism without being conservative with respect to something else, namely, scientific naturalism and the theory of evolution by means of natural selection. Having a committed, coherent perspective of one's own is what allows someone to see flaws and fallacies in other perspectives. Curiosity and a liking for novelty, if undisciplined, are ultimately illiberal in the sense that they do not lead to something new and better; they just lead to the next new thing.

Because no single individual can critically investigate all of the things she or he has come to accept, most of what we accept is a summary and a simplification of other people's investigations. For example, those of us who are not physicists possess only a summary knowledge of particle physics. Even within particle physics, experts in one area have only a summary knowledge of other areas in particle physics. Summaries are necessary because there is too much information for any single person to take account of. We met a version of this idea in chapters 4 and 5, where I argued that summaries of theories in the physical and biological sciences are usually essentialist in nature and that scientific psychologists and psychiatrists often take these essentialist summaries as regulative ideals.

The strength of the conservative strategy is that it can inoculate a thinker from too readily conforming to the latest intellectual fashions. Its major flaw is that it can be a barrier to the advancement of knowledge. Committing to something tends to make us more confident in it than we should be.

As queried at the end of the last chapter, if being in the middle means that we are potentially trapped in a nightmare of consensus without correspondence to reality, what becomes of the metaphysical concept of the objective as that which is mind-independent? This chapter provides an answer to that question from the standpoint of radical empiricism. In short, even if our best, most coherent thinking must adhere to the habits of our chosen communities, the concept of the objective (and of facts) does some good work. The work of this chapter draws on the history and philosophy of science. Issues in psychiatry take center stage in the chapter that follows.

7.2 Conceptualizing the Objective

Let us begin with two of the instrumental nominalist strategies that were introduced in chapter 3 for understanding lofty philosophical abstractions. The first strategy is to specify a contrast concept. The contrast concept to objectivity is subjectivity. This subject-versus-object distinction is usually credited to Descartes, but much of its appeal is parasitic on an earlier distinction between appearance and reality bequeathed to us by Plato. For example, it has been said that the goal of science is to lift a corner of the great veil to see the reality beyond. The great veil, a phrase of Einstein's, is the subjective veil of ideas. Reality is the mind-independent world of external objects.

Many philosophers have adopted the second strategy and decomposed objectivity into overlapping concepts. For example, Heather Douglas (2004) defines eight different senses of objectivity and argues that they are too diverse to be contained by a single concept. Exploring the many ways that philosophers have decomposed the concept of objectivity, however, would take us too far afield. To better focus the discussion that follows, I begin by emphasizing a contrast between psychological objectivity and metaphysical objectivity.

Miriam Solomon (2001) defines objectivity psychologically as a cold, rational process that is not subject to bias. For instance, consider the supervisor who favors certain employees over others. The supervisor exaggerates and even makes up the achievements of favored employees whereas she or he minimizes those of disfavored employees. These evaluations are subjective. A less biased or more fair and "objective" approach, it is said, would give each employee his or her due. Usually this kind of objectivity involves developing policies and methods of evaluation that are applied to everyone in the same way.

Bas van Fraassen (2002) splits this psychological concept of objectivity into two subtypes. The first is distancing or making something into an impersonal object of study. One of van Fraassen's examples of distancing is the clinical attitude of a physician who focuses on the disease as a problem to be solved rather than the patient as a person. This kind of objectivity is what allows surgeons and emergency room doctors to maintain a professional distance from their patients' pain.

Van Fraassen's second kind of objectivity is called neutralization. An epistemological sibling of equality before the law, neutralization involves approaching a topic without evaluative presuppositions. For example, in theory, a feminist and an ardent male chauvinist could approach the study of the mathematical abilities of college-age women the same way.

One should not confuse these objective attitudes with the philosopher's metaphysical concept of "the objective" as what is mind-independent. Someone can become so distant and neutral that he or she distorts reality, or can be biased in such a way that new aspects of reality are revealed. This becomes clear after one has had some experience as a therapist and more readily adopts an empathic stance. People who are very problem focused rather than empathic sometimes miss the reality of what is before them. Likewise, in agreement with many feminists, Solomon notes that cognition driven by emotional investment and commitment can help someone persevere in the face of adversity and thereby contribute to the discovery and justification of new, objective knowledge. Rather than an objective attitude, the more metaphysical concept is what concerns us here.

The Limits of Agreement

It is important to be cautious about taking the veil of ideas metaphor too literally. For a radical empiricist, experience is not a veil of distortion that needs getting beyond. According to such an empiricist, we can justify making distinctions between subject and object and between appearance and reality, but those distinctions are made within experience.

Staying within the boundaries of experience does not limit an empiricist to only a psychological conception of objectivity. For instance, when the Royal Society of London was formed in the seventeenth century, experiments were demonstrations that occurred in the presence of observers (Shapin & Schaffer, 1985). It was also practical to allow demonstrations to occur at different times, and the agreement among temporally (and spatially) distributed observers was codified in the norm of replicability—or the practice of checking each other's work (Hull, 1988). What survives this sometimes competitive checking process is more than a subjective psychological attitude because it has a public aspect. As a matter of fact, during the heyday of logical positivism, the publicly available outcome of agreement between observers—or the intersubjective—became the empiricists' proxy for the objective (Feigl, 1958).

As the history of science demonstrates, the problem with reducing the objective to the "in-principle-intersubjectively confirmable" is that agreement can be wrong, particularly when agreement also involves making inferences about what is observed. Astronomers in the thirteenth century accepted epicycles and crystalline spheres, chemists in the eighteenth century accepted phlogiston, physicists in the early nineteenth century accepted that light could not be a particle because it was proven to be a wave, and biologists in the mid-nineteenth century accepted the transmission of traits between generations by transfer of blood. They were all mistaken. Epicycles and crystalline spheres never existed, phlogiston was always a theoretical fiction, light has always had wave-like and particle-like properties, and the blood has never been the vehicle for the transmission of hereditary information to the next generation.

One thing that can be learned from the study of history is that some things are true independent of what communities believe. It is therefore reasonable to infer that there are some things that, although they contradict the current scientific consensus, will be accepted as true by a future community of scientists and that they are as true right now as the Copernican theory was true in 4000 BCE. One can accept this historically informed inference without imagining that we have gotten beyond the veil of ideas.

This historically informed notion of the objective as “what is true independent of what a community believes” points to two norms to which a radical empiricist can adhere and thereby make use of the concept of the objective. These overlapping norms are:

  • We should classify the world as it is, not how we want it to be.
  • We should try to not fool others and not fool ourselves.

Consistent with the values flagged in the two preceding "should statements," Helen Longino (1990) observes that objectivity involves a willingness to let our beliefs be determined by the facts rather than by our wishes about how things ought to be. Another way to articulate this epistemic value is to say that we should accept that evidence can compel belief.

Along similar lines Putnam (1990) has claimed that there are no facts without obligations. This may seem like a strange claim, but the concept of a fact is closely tied to the concept of an obligation. We can see this relationship whenever one person calls something a fact as a way of telling another person that he or she cannot rationally choose to reject it. Facts are things that we are obligated to accept.

In what follows I argue that the empiricists' attempt to conform to these two norms provides them with a workable conception of the objective as something that is mind-independent but also a conception of the objective that resides within, not beyond, experience.

7.3 Compelled to Believe

Beginning in the 1970s the advocates of the cladistic approach to taxonomy famously touted its objectivity. They claimed that in traditional biological classification, taxonomists relied on their expert opinions to decide which traits should be considered definitional. This practice, they said, introduced an arbitrary observer-dependent element into taxonomy. For example, Linnaeus chose to classify plants by comparing and contrasting reproductive organs rather than leaf shape or root style. In the Linnaean system all flowers with eight stamens were placed in one group, those with nine in another.

In the generations that followed Linnaeus, taxonomists continued to study similarities and differences between organisms and, like Linnaeus, made judgments about which contrasts were most important in seeing how things fit together. The problem was that rather than there being shared rules of comparison, people had different intuitions about what characteristics best indicate how things fit together. On the issue of fit, some taxonomists preferred precise distinctions and the elucidation of many groups (splitters), and others preferred fewer distinctions and larger groups (lumpers). One's preferences rather than facts seemed more important in choosing how "granular" groups should be.

During the cladistic revolution in taxonomy, the cladists sought to eliminate the role of intuition and preference. According to them, classification should be based only on shared derived characteristics. A shared derived characteristic is possessed by two or more species that are descendants of a common ancestor in which that characteristic first appeared. For example, birds and alligators are more closely related to each other than either is to a turtle because they share a four-chambered heart, which they both derived from a common ancestor. A four-chambered heart is a characteristic unique to that lineage. According to the cladists such characteristics offer a single standard of comparison that reflects actual evolutionary branching. The subjectivity of taxonomy is thereby attenuated.

Note that I write attenuated, not eliminated. The cladists' claim that only branching patterns should matter in developing a taxonomy is itself a decision to adhere to a new convention, not a discovery of fact. According to Ernst Mayr (1988), such branching patterns are blind to the degree of divergence that different populations have from a common ancestor, which is the kind of information that is important for understanding evolution.

Once cladistic conventions are accepted, however, scientists are compelled to reject some familiar groupings no matter what they may prefer to be the case. As all students of biology learn, the metaphysically aggressive cladists boldly claimed that only the "real" groups contain an ancestor and all its descendants. A famous consequence of this classificatory convention is that the class reptile is considered an "artificial" not a "real" group because the collection of reptiles does not contain all the descendants of the whole group's most recent common ancestor (such as birds). A similar fate befell fish, dinosaurs, and the great apes (Yoon, 2009; Zachar, 2008).
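To make this convention concrete, what follows is a minimal sketch in Python of the monophyly test the cladists rely on: a grouping counts as "real" only if it contains every leaf descendant of the group's most recent common ancestor. The toy phylogeny and taxon names below are illustrative assumptions, not drawn from any actual data set.

    # Toy phylogeny encoded as a child -> parent map (illustrative only).
    TREE = {
        "turtle": "amniote",
        "lizard": "diapsid",
        "diapsid": "amniote",
        "archosaur": "diapsid",
        "crocodile": "archosaur",
        "bird": "archosaur",
    }

    def ancestors(taxon):
        """Yield a taxon and every ancestor up to the root."""
        while taxon is not None:
            yield taxon
            taxon = TREE.get(taxon)

    def mrca(taxa):
        """Most recent common ancestor of a set of leaf taxa."""
        lineages = [list(ancestors(t)) for t in taxa]
        shared = set(lineages[0]).intersection(*map(set, lineages[1:]))
        # The first shared node walking up any one lineage is the MRCA.
        return next(node for node in lineages[0] if node in shared)

    def leaf_descendants(node):
        """All leaves descended from node (a leaf never appears as a parent)."""
        parents = set(TREE.values())
        return {leaf for leaf in TREE
                if leaf not in parents and node in ancestors(leaf)}

    def is_monophyletic(taxa):
        """Cladistically 'real': the group equals its MRCA's leaf descendants."""
        return leaf_descendants(mrca(taxa)) == set(taxa)

    print(is_monophyletic({"crocodile", "bird"}))              # True
    print(is_monophyletic({"turtle", "lizard", "crocodile"}))  # False

On this toy tree, the crocodile-bird grouping passes the test, while the traditional reptile grouping fails for exactly the reason given above: it excludes a descendant (the bird) of the group's most recent common ancestor.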

Wanting to versus Being Compelled to Believe

If one puts aside the metaphysical abstractions of the cladists regarding “real” groups, one of their most persuasive claims was that the scientific ordering of life forms can be improved on (Hull, 1988). No matter how much biologists may prefer to keep the old familiar categories such as reptile and fish, the cladists argued that taxonomists should give them up in favor of more evidence-based and historically grounded relationships.

To take the hint, one way to clarify the notion of “being compelled by evidence” is to examine instances in which a person accepted something that he or she did not want to believe. For example, the English poet John Donne—who eventually became an Anglican priest—believed that the Copernican theory was likely true and that the truth of it was a tragedy (Kuhn, 1957). For Donne the evidence favoring the Copernican theory was an inconvenient truth that he felt compelled to accept.

In 1844 Darwin wrote to Hooker about his work on evolution, claiming that admitting the mutability of species was like committing a murder (Desmond & Moore, 1991). In the early stages of his formulation of evolution by means of natural selection, Darwin was likely hoping that he had foolishly gotten himself onto a dead-end path and that his inconvenient theory would turn out to be mistaken. Before he made his decisive break with Christianity subsequent to his daughter's death, Darwin felt compelled to accept this dangerous idea.

It is often pointed out that before heliocentrism was officially banned by the Catholic Church, Cardinal Robert Bellarmine asserted that if there were proof that the Earth orbits the Sun, then those scriptural passages that suggest otherwise would have to be carefully reinterpreted (Blackwell, 2002; Shea, 1986). Assuming he was not referring only to the kind of purely logical proof used in geometry, this was an admission that evidence can override preferences. Given that the name of Bellarmine is historically associated with forcefully compelling others to believe (or die), one has to wonder if he would have allowed the weight of evidence to override dogma, but we can at least hope so.

Scientists may point to cases like those of Donne and Darwin and associate "compelled to believe" with the scientific attitude, but that itself is a romanticized view. An obvious example of a romanticized view of science can be found in early psychoanalysis. Early in the twentieth century psychoanalysts claimed that the evidence for the Oedipus complex was so compelling that only those who did not want to see the world the way that it really is could continue to deny Oedipal reality. The psychoanalysts saw themselves as modern heroes who had the fortitude to face unpleasant truths and challenge the conventional authorities. They seemed to believe that the act itself of challenging authority supported their claim about the scientific status of Oedipal dynamics. Even among psychoanalysts, this claim is no longer taken seriously. Currently, the universal and literal truth of the Oedipus complex is not accepted and looks to have always been highly implausible and noncompelling.

The notion of being compelled to believe can be so idealized that people see themselves as being compelled when in fact they are not. Many, for instance, think that they are compelled to believe in God. Some of those who make this claim do not question and never doubted the inherited views of their religious community—be they Roman Catholic, Seventh-day Adventist, Orthodox Jewish, or Sunni Muslim. In these cases, because they want to believe, it becomes difficult to say whether they are also compelled. The problem is that what one wants can affect what one takes to be good reasons. Nor does being compelled apply to cases where people change their beliefs because it is easier to conform to group pressure.

More relevant to the notion of "compelled to believe" is the convert who did not believe or does not want to believe but feels compelled to do so as a result of some kind of experience or insight. Compelled to believe in this sense can be used to support the adoption of agnosticism and atheism as well. Consider the person who once believed in God or a religious tradition but came to reject that faith. I refer here not to someone who "lapsed" in favor of other interests or someone who "lost" faith as an angry reaction to mistreatment and thinks of herself as "in recovery." The relevant contrast with the religious convert is the person who was quite satisfied with the tradition and does not regret it but became unable to accept it no matter how inconvenient being in a state of apostasy might be to her.

So the abstraction "compelled to believe" does not demarcate scientific from nonscientific assent. Nor is it an exclusively rational process because obligation is involved and feeling compelled is part of the experience. That does not mean, however, that the concept does not do any good work in the philosophy of science. The idea of scientific progress still includes the notion that the acceptance of scientific truth claims is not a relativistic preference or a personal choice; it is "compelled" in some way. In what follows I attempt to particularize the notion of "compelled to believe" a bit more and, in doing so, introduce a minimal empiricist concept of the objective. The basic idea is that we rely on conventions, rules, and standards for deciding what we are compelled to believe.

7.4 What Resists Our Wishes?

The concept of the objective from a contemporary empiricist standpoint is easy to understand and can be introduced using something that is learned by all undergraduate science majors—the logic of experimental methodology. We begin by randomly assigning people with a depressive disorder to three groups. The first group receives an active treatment. The second group receives a placebo in which the patients believe they are receiving an active treatment but are not (a hope and expectation condition). The third, called the control group, is left alone. At the conclusion of the study, if the treatment group has recovered and the placebo and no-treatment control groups have not, we can conclude that the recovery is due to the treatment. Within the framework of these experimental conventions, we are compelled to accept that the treatment condition is superior to the placebo and no-treatment control conditions.
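To bring this logic down to cases, here is a minimal simulation in Python of the three-group design just described. The recovery probabilities, group size, and function names are illustrative assumptions, not data from any actual study.

    # A minimal sketch of a three-arm trial with random assignment
    # (made-up probabilities; nothing here is real clinical data).
    import random

    random.seed(0)

    # Hypothetical chance of recovery under each condition.
    RECOVERY_RATE = {"treatment": 0.60, "placebo": 0.35, "control": 0.30}

    def run_trial(n=300):
        """Randomly assign n participants to three arms; return recovery rates."""
        arms = {arm: [] for arm in RECOVERY_RATE}
        for _ in range(n):
            arm = random.choice(list(arms))      # random assignment
            arms[arm].append(random.random() < RECOVERY_RATE[arm])
        return {arm: sum(outcomes) / max(len(outcomes), 1)
                for arm, outcomes in arms.items()}

    print(run_trial())
    # Typical output: the treatment arm's recovery rate sits well above
    # both comparison arms, whatever the experimenters wished beforehand.

The numbers are beside the point; the structure is what matters. Because assignment and outcome are not under the researcher's control, a clear separation between the treatment arm and both comparison arms is what, under these conventions, compels acceptance of the treatment's superiority.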

What I just described is an idealized situation—the results of actual experiments are rarely, if ever, so unambiguous. Especially in psychiatry, when the result refers to various degrees of recovery rather than "alive" versus "dead," additional factors such as treatment side effects and long-term consequences should not be ignored in evaluating the comparisons. What is important is that an experiment sets up a competition between different conditions, and the point of the test is to help us decide, in a rule-bound way, if one of the conditions has competitive superiority with respect to the goals of treatment.

An important feature of the experimental framework is that the outcome is designed not to be under the control of the researcher. Additional design features, such as making the experimenters blind to which groups the participants are assigned to, are a way to ensure that the experimenters do not have their thumbs on the scale, so to speak. Scientists want their hypotheses to be supported or in some cases want others' hypotheses to not be supported, and "empirical" tests must be able to resist such preferences. Much of what scientific communities do is to develop descriptive theories and causal models of how their chosen domain fits together, but crucial to this process is finding frameworks in which expectations are put to the test in such a way that they can meet resistance. Correlational and qualitative studies can also be "empirical" in this way.

Preferences and wishes are conceptualizations of how we would like the world to be. Experiences (of how the world is) that elude our preferences are experiences of the objective. What resisted our wishes in the past but no longer offers resistance also falls under this experiential notion of the objective. For instance, the occurrence of shingles (herpes zoster) in adults is a reemergence of a virus (varicella zoster) that had caused chickenpox in childhood and has been lying dormant in sensory nerve cell bodies. It would be convenient to eradicate this virus so that shingles would never appear, but this is not currently possible. The world resists our wishes. If we learned how to overcome this resistance and eradicate the virus from the body, such knowledge would thereafter bear the imprint of the experience of the objective.1

Let us return to James (1909/1975) to see what he had to say on this issue.

That reality is “independent” means that there is something in every experience that escapes our arbitrary control. If it be a sensible experience, it coerces our attention; if a sequence, we cannot invert it; if we compare two terms we can come to only one result. There is a push, an urgency, within our very experience, against which we are on the whole powerless, and which drives us in a direction that is the destiny of our belief. (p. 211)

In the last sentence of this quote James adopts an end-of-inquiry frame of reference in talking about destiny. We do not have to follow him in this. The main point of the James quote for our purposes is that the objective as what resists our wishes is a distinction that occurs within experience.

As noted previously, John Donne did not want the Sun to be at the center of the solar system. Galileo did. Galileo's wanting the Copernican model to be true, however, did not compromise what we now see to be its objectivity. As Miriam Solomon noted, commitments like those of Galileo can contribute to the discovery of what is objective by helping proponents persevere in the face of adversity. The resistance-to-preference notion, therefore, is not being proposed as an essential feature of the objective. In the history of thought, particularly after Galileo, many people have made claims about the world that resisted people's preferences. Many of them have also believed that the very act of saying something that others did not want to be true was itself a sign of validity. It was not.

The metaphysical concept of the objective, however, is a useful tool for understanding experiences of resistance to preference. The concept of the objective is partly inspired by and reappears with the recurrence of such experiences in one or more members of a community, but it is not constituted by them. Whenever people start talking seriously about the objectivity of such things as the Copernican model, the Apollo moon walks, or global warming, the notion that someone's preferences are being resisted is not far away.

The resistance to what we prefer is not The Objective in an elaborate metaphysical sense. Metaphysical elaborations go beyond their experiential bases, but nevertheless, taking account of those experiences is useful for bringing the lofty concepts down to earth. Something important occurs when the world is not the way we want it to be, but that is a very minimal, even deflated, notion of the objective—one that does not require getting outside of experience.

7.5 The Concept of a Fact and Its Contrasts

Closely associated with the concept of the objective is that of an empirical fact. As with the objective, an important part of the concept of a fact is the presence of a public or intersubjective aspect. What Holmes said to Watson the morning after they dispatched Colonel Sebastian Moran was never a fact, but what Conan Doyle ate and drank on the day he finished The Adventure of the Empty House was a fact once, although it is likely no longer even a potential fact because it is not publicly ascertainable. That information has been lost.

Hacking (1999) has noted that philosophical concepts such as "reality," "truth," "objectivity," and "facticity" tend to be defined in a circular manner. For example: facts are states of affairs whose "reality" we are obligated to accept; facts are what "true sentences" refer to; and facts are what is objectively true. Such circular definitions make the concept of a fact somewhat obscure. As might be expected of a philosophical abstraction, there are many particular facts, and it is unlikely that all these facts share a single nature called facticity that is equally present in each instance. Falling under the general concept of fact are physical facts such as the height of the Empire State Building (443.2 meters), historical facts such as the birth date of Darwin (February 12, 1809), and sociocultural facts such as the number of countries presently on the European continent (fifty).

One popular contrast concept for “fact” is “theory,” but this distinction is not absolute.2 For example, scientific facts may be theory-laden as some philosophers claim, but scientific theories are also fact-laden (Barnes, Bloor, & Henry, 1996; Mermin, 2001). In modern biology evolution has such extensive empirical support that it is considered a fact, whereas the precise role of natural selection alongside other evolutionary mechanisms is considered more theoretical. Other “theories” that can be considered factual include the Bohr model of the atom and the theory of continental drift. In contrast, astrology and the Oedipus complex are so unsupported by fact that they are not considered scientific theories.

Another popular contrast for “fact” is “value.” Although not an absolute distinction, it is a good one to make primarily because some people's values are so strong that they see what they want to see and “distort reality.” Holocaust deniers are a good example of such distorters. No matter how much documented visual evidence is presented to them, they continue to see the world the way they want it to be.

Shapin and Schaffer (1985) argue that after the violent disputes between the Protestants and the Catholics in seventeenth-century England, one of the advances offered by the scientific revolution was the belief that people can hold different values, but they should agree on matters of properly demonstrated fact. Joining a scientific community involves learning what conventions, rules, and standards are to be relied on for what counts as a demonstrated fact. In radical empiricist terms, to say that something is objective is to say that it bears the imprint of resisting wishes and preferences. To call something a fact is to make a claim about what we are obligated to accept.

According to the sociologists of scientific knowledge, when a community agrees to not be skeptical about a regularity and to take it for granted, that regularity is considered to be a fact. Social constructionists might say that something becomes a fact, but their language is much too ambiguous here.

When scientists develop a new way of observing, be it a telescope or a psychological test, the users of those instruments learn to see "facts" that were not obvious before, but they do not make them up. One can also say that the conditions under which objectivity appears may be constructed, but the objectivity is not constructed. In some cases it is neither sought nor wanted.

7.6 The Asserted Versus the Actual

The minimalist notion of the objective that I have offered is quite different from what many people think of as the objective. The same is true for the related concept of a fact. For example, Alan Sokal (2008) criticizes the sociologists of scientific knowledge for failing to distinguish between an assertion of fact and an actual fact. By actual fact he means "a situation in the external world that exists irrespective of the knowledge that we have (or do not have) of it—in particular, irrespective of any consensus or interpretation" (p. 219). If taken at face value, this looks like a pretty good definition. However, defining the concept of a fact in terms of the concept of objective existence is a metaphysical obscurity. It should not be construed as philosophical clarity.

These are delicate issues. To justify the distinction between asserted fact and actual fact, Sokal uses the Ptolemaic theory that the Sun revolves around the Earth. According to Sokal, philosophical relativists assert that astronomers once took the Sun's motion around the Earth to be a fact, but after the Copernican theory was accepted, the facts changed. Sokal argues that the facts did not change, only what was asserted to be a fact changed. He also notes that in declaring that the Earth's motion around the Sun is a fact, we are saying that the heliocentric model describes things the way they are irrespective of what anyone wants to believe about it. A radical empiricist readily agrees with Sokal here.

One way in which Sokal and the radical empiricists differ is that Sokal takes these claims and moves them outside of history and outside of experience. Empiricists will agree that the Ptolemaic thinkers were mistaken and that the factual status of the Copernican theory obligates acceptance, but they also hold that such claims are all made from within experience. Consider the following distinctions:

  • Asserted fact versus actual fact
  • Truth claim versus valid truth
  • Avowedly objective versus really objective

These are all good distinctions to make, but the point of the contrasts is that asserted facts, truth claims, and the avowedly objective should be subject to doubt and even considered mistaken.3 What about actual fact, valid truth, and really objective? The terms "actual," "valid," and "really" are being used to make claims about what lies outside of history and beyond experience. In Arthur Fine's (1986) terms they are acts of desk-thumping and foot-stomping. According to empiricists, metaphysical elaborations such as "real fact" are not needed once there are accepted conventions, rules, and standards for making the distinction between asserted fact and fact. Likewise, there is no need to shout out "valid truth" once the distinction between asserted truth and truth can be made.

Holocaust deniers and young-earth creationists will use the abstract concept of asserted versus actual fact and truth claim versus valid truth just as readily as the scientist. In this sense the use of transcendent metaphysical language does not distinguish these different communities. What does distinguish them is what they take to be evidence, what they take to be standards of justification, and the tradition of past successes and failures that feed into the claims they are making. Rather than stepping outside of history into the transcendent realm of universal truth and objectivity, examining the tradition of past success and failures involves looking backward and seeing if and where progress has been made.

7.7 Conclusions

We are now in a position to address the worry in chapter 6 about being trapped within a nightmare of consensus. There is too much information for any one of us to master or to test out for ourselves, and most of what we know relies on what we have already learned and what we accept from others. Many philosophers of science refer to this as the social dimension of knowledge and contend that a social element permeates all that we know. Arguably, the articulation of the social context of knowledge and of objectivity is one of the important philosophical advances of the twentieth century (Fleck, 1935/1979; Kuhn, 1957; Solomon, 2001).

According to Longino (1990), the ability of scientists to track objectivities is supported by the adoption of social conventions such as these:

  • There must be recognized avenues for criticism
  • There must be public, agreed-on standards of scientific adequacy
  • The community must be genuinely open to criticism
  • Intellectual authority must be partitioned throughout the community

Rather than only being a barrier to the advancement of knowledge, being a member of a community can make such an advancement possible, but the community must be one that is not too self-assured. That a scientific community should be open to critically examining its own stock of ready formed beliefs is emphasized in the following quote by Sellars (1956): “For empirical knowledge, like its sophisticated extension, science, is rational, not because it has a foundation but because it is a self-correcting enterprise which can put any claim in jeopardy, though not all at once” (p. 300).

Crucially, one can accept the social embeddedness of knowledge along these lines but also leave some room for the importance of individuals. Part of the encounter with reality involves acknowledging the particularities that elude our concepts. Following Philip Kitcher (1993), therefore, one can make a useful distinction between social interactions and asocial interactions. "An encounter with reality" can also be a particular, asocial experience.

The work of Darwin was asocial in this way, but it was asocial partly because of Darwin's social situation. In 1844 a book titled Vestiges of the Natural History of Creation was published anonymously by the Scotsman Robert Chambers. James Secord (2000) has referred to this book as a “Victorian sensation.” What was sensational about the Vestiges was that it described cosmological theories about the evolution of the solar system and the development of the Earth but also described the natural development of life from simpler to more complex forms—including human life. Prior to this book's publication, theories of evolution were associated with cranks, quacks, and political radicals. What the Vestiges did was to firmly implant in the public mind an idea of a natural history that differs significantly from the Genesis account.

The Vestiges did for Great Britain's intellectual discourse what the Salon culture had done for France sixty years earlier, even more so because the Vestiges was consumed by the whole reading public, irrespective of social station. In the same year that the Vestiges appeared, Darwin completed his first extended essay on the theory of natural selection. He had already decided not to publish anything about natural selection until he had more evidence, and Chambers' book was such a scandal that his decision was solidified (Gregory, 2008). He kept his new theory to himself and largely worked on it alone. It was asocial work. The factors that led to On the Origin of Species becoming as important as it did, however, were also social.

Presumably some individuals, those we might even call “objective,” are more likely than others to attend to the experiences that elude their concepts and resist their preferences. B. F. Skinner (1956) once claimed that when something interesting or unexpected happens, stopping what you are doing and studying it is a trait of a good scientist. Communities that regularly make progress are able to assimilate and accommodate such individuals, but the community is the final common pathway regarding what is taken for granted.

Notes:

(1) For example, the virus can now be suppressed by a vaccine.

(2) Fact versus fiction (the counterfactual) is another distinction.

(3) To say that calling something a fact is an assertion about what we are obligated to accept is one thing. To say something is an asserted fact has a completely different meaning.