The Consciousness Paradox: Consciousness, Concepts, and Higher-Order Thoughts

Rocco J. Gennaro

Print publication date: 2011

Print ISBN-13: 9780262016605

Published to MIT Press Scholarship Online: August 2013

DOI: 10.7551/mitpress/9780262016605.001.0001


In Defense of the HOT Thesis

Chapter: (p.11) 2 In Defense of the HOT Thesis
Source: The Consciousness Paradox
Author(s): Rocco J. Gennaro
Publisher: The MIT Press
DOI: 10.7551/mitpress/9780262016605.003.0002

Abstract and Keywords

This chapter begins a defense of the HOT Thesis, the claim that a version of the HOT theory is true and thus that a version of reductive representationalism is true. Some current theories of consciousness attempt to reduce it to mental representations of some kind, which philosophers often call intentional states: states with representational content, that is, states that are “about” or “directed at” something. Before the defense can proceed, the chapter explains several varieties of representationalism and makes a case for a reductionist approach to consciousness. Searle’s well-known Connection Principle is then critically examined, and that examination is used, in part, to argue that intentionality is prior to consciousness. The chapter concludes with an exploration of the nature of mental content in light of the HOT theory.

Keywords:   reductive representationalism, HOT Thesis, theories of consciousness, mental representations, intentional states, representational content, reductionist approach, Searle, Connection Principle

In this chapter, I begin a defense of the HOT Thesis, namely, that a version of the HOT theory is true and thus a version of reductive representationalism is true. This first involves explaining several flavors of representationalism (sec. 2.1), as well as making a case for a reductionist approach to consciousness (sec. 2.2). In section 2.3, I argue that intentionality is prior to consciousness partly via a critical examination of Searle’s well-known Connection Principle. In section 2.4, I offer an initial defense of HOT theory. Finally, in section 2.5, I explore further the nature of mental content in light of HOT theory.

2.1 Varieties of Representationalism

Some current theories of consciousness attempt to reduce it to mental representations of some kind. The notion of a representation is, of course, extremely general and can be applied to photographs, signs, and various natural objects, such as the rings inside a tree. Much of what goes on in the brain, however, might also be understood in a representational way, for example, as mental events representing outer objects partly because they are caused by those objects. Philosophers often call such states intentional states: states that have representational content, that is, mental states that are “about” or “directed at” something, such as a thought about a house or a perception of a tree.

The view that we can explain conscious mental states in terms of representational or intentional states is called representationalism (or intentionalism). Although not automatically reductionist in spirit, most versions of representationalism do indeed attempt such a reduction. Most representationalists, such as higher-order (HO) theorists, think that there is then room for a “second-step” reduction to be filled in later by neuroscience. One motivation for representationalism is that a naturalistic account of intentionality (p.12) can arguably be more easily attained, such as via causal theories whereby mental states are understood as representing outer objects by virtue of some reliable causal connection. The idea, then, is that if consciousness can be explained in representational terms and representation can be understood in purely physical terms, then there is the promise of a naturalistic theory of consciousness. A representationalist will typically hold that the qualitative properties of experience, or qualia, can be explained in terms of the experiences’ representational properties. The claim is that conscious mental states have no mental properties other than their representational properties. Two conscious states with all the same representational properties will not differ phenomenally. For example, when I look at the blue sky, what it is like for me to have a conscious experience of the sky is simply identical with my experience’s representation of the blue sky.

I cannot fully survey here the dizzying array of representationalist positions (Chalmers 2004; Lycan 2005). I believe that the most plausible form of representationalism is what has been called strong representationalism. It is basically the view that having representations of a certain kind suffices for having qualia and thus for conscious mental states. It is sometimes contrasted with weak representationalism, which is the view that conscious experience always has representational content of some kind.

It is also important at the outset to distinguish the content of a mental state from the state or vehicle that has the content. This is the difference between what is represented, or what the state is about, and what is doing the representing. Two other pairs of distinctions involve how best to characterize, first, the mental contents in question and, second, the kinds of properties represented.

  1. (1) Wide representationalism holds that “both phenomenal properties and the representational properties they are equivalent to are taken to depend on a subject’s environment” (Chalmers 2004, 165). This is the view of most representationalists, including Dretske (1995), Tye (1995), and Lycan (1996). It has its roots in the literature on propositional attitudes, such as beliefs and thoughts, which has been taken to show that two physically identical subjects with different environments will have different mental contents (Putnam 1975). For example, a belief about water on Earth will be about H2O, whereas it will be about XYZ on “Twin Earth.” The main idea is that the content (or meaning) of one’s mental states depends on one’s environment. In contrast, narrow representationalism is the view that phenomenal properties, and the representational properties they are equivalent to, depend on a subject’s internal state, so that molecular duplicates will necessarily share mental contents. Narrow representationalists think that (p.13) molecular duplicates share something significant, even if there are other differences when the relevant mental states are individuated widely.

  2. (2) Within narrow and wide representationalism, one might also disagree about what kinds of properties are represented. For example, one natural way to think of mental content involves objects and properties in the world. Following Russell, such contents have been called Russellian contents (Chalmers 2004). The concepts involved in a belief, for example, have extensions, namely, objects and properties that are picked out by the concepts. If I believe that Venus is the second-closest planet to the Sun, then my belief is directed at Venus. On the other hand, following Frege, one might suppose that there are also Fregean contents. Mental contents are composed of concepts, which not only have extensions but also have modes of presentation, or what might best be described as “a way of thinking about the referent.” This mirrors Frege’s well-known distinction between reference and sense. So, according to this view, the belief about Venus also has the mode of presentation “second-closest planet to the Sun,” which is the way that I am conceiving of Venus in that case. Fregean content can differ while the Russellian content remains fixed. I may alternatively believe that Venus is the Morning Star, which involves a different mode of presentation. I return to these distinctions in section 2.5.

For now, it is worth briefly introducing three common flavors of representationalism, each of which I discuss at greater length in later chapters. The central question that should be answered by any theory of consciousness is: What makes a mental state a conscious mental state? That is, what differentiates unconscious mental states from conscious mental states?

2.1.1 First-Order Representationalism (FOR)

First-order representational theories of consciousness attempt to explain conscious experience in terms of world-directed (or first-order) intentional states. Two frequently cited FO theories are those of Dretske (1995) and Tye (1995, 2000), though there are many others as well (Byrne 2001; Thau 2002; Droege 2003). Like other FO theorists, Tye holds that the representational content of my conscious experience (that is, what my experience is directed at) is identical with the phenomenal properties of experience. Aside from reductionistic motivations, Tye and others often invoke the notion of the transparency of experience to support their view (Harman 1990). This argument derives from Moore (1903) and is based on the phenomenological first-person observation that when one turns one’s attention away from, say, the blue sky and onto one’s experience itself, one is still only aware of the blueness of the sky. The experience itself is (p.14) not blue; rather, one “sees right through” the experience to its representational properties, and thus there is nothing else to one’s experience over and above such properties.

As we will see in chapter 3, FO theorists believe that much the same goes for all kinds of conscious states, including pains and emotions.

2.1.2 Higher-Order Representationalism (HOR)

Another tradition has attempted to understand consciousness in terms of higher-order awareness. For example, some cite John Locke (1689/1975), who once said that “consciousness is the perception of what passes in a man’s own mind.” This is a bit misleading because, unlike HO theorists, Locke did not believe in unconscious thoughts.1 In general, the idea is that what makes a mental state conscious is that it is the object of some kind of higher-order representation (HOR). A mental state M becomes conscious when there is a HOR of M. A HOR is a metapsychological or metacognitive state, that is, a mental state directed at another mental state. So, for example, my desire to write a good book becomes conscious when I am (noninferentially) “aware of” the desire. Intuitively, it seems that conscious states, as opposed to unconscious ones, are mental states that I am aware of in some sense. Any theory that attempts to explain consciousness in terms of higher-order states is known as a higher-order (HO) theory of consciousness. HO theories thus attempt to explain consciousness in mentalistic terms, that is, by reference to notions such as “thoughts” and “awareness.” We might say that conscious mental states arise when two unconscious mental states are related in a certain specific way, namely, when one of them (the HOR) is directed at the other (M).

There are various kinds of HO theory, with the most common division between higher-order thought (HOT) theories and higher-order perception (HOP) theories. HOT theorists, such as David Rosenthal (1997, 2005), think it is better to understand the HOR as a thought of some kind. HOTs are treated as cognitive states involving conceptual components. HOP theorists urge that the HOR is instead a perceptual or experiential state (Lycan 1996) that does not require the kind of conceptual content invoked by HOT theorists. Although HOT and HOP theorists agree on the need for a HO theory of consciousness, they also often argue for the superiority of their respective positions (Lycan 2004; Rosenthal 2004).

2.1.3 Hybrid Representational Views

A related group of representational theories holds that the HOR in question should be understood as intrinsic to (or part of) an overall complex conscious (p.15) state. This stands in contrast to Rosenthal’s standard HOT theory, where the HO state is extrinsic to (that is, entirely distinct from) its target mental state. The assumption about the extrinsic nature of the HOR has increasingly come under attack, and thus various hybrid representational theories can be found in the literature. Another motivation for this movement is renewed interest in a view somewhat closer to the one held by Franz Brentano (1874/1973) and various followers often associated with the phenomenological tradition.2 To varying degrees, these hybrid views have in common the notion that conscious mental states represent themselves in some sense.

As was noted in the previous chapter, I have argued that when one has a first-order conscious state, the HOT is better viewed as intrinsic to the target state, so that we have a complex conscious state with parts (Gennaro 1996, 2006a). This is what I have called the wide intrinsicality view (WIV). Very briefly, we might say that conscious mental states should be understood (as Kant might have today) as combinations of passively received perceptual input and higher-order conceptual activity directed at that input. Higher-order concepts in metapsychological thoughts are presupposed in having first-order conscious states. I say much more about the WIV in chapter 4.

Another hybrid approach is advocated by Uriah Kriegel and is the subject of an entire anthology debating its merits (Kriegel and Williford 2006). Kriegel has used several different names for his “neo-Brentanian theory,” such as the “same-order monitoring theory” and the “self-representational theory of consciousness.” To be sure, the notion of a mental state representing itself or a mental state with one part representing another part needs further development. Nonetheless these authors agree that conscious mental states are, in some important sense, reflexive or self-directed. I criticize Kriegel’s view in chapter 5.

Robert Van Gulick (2000, 2004, 2006) has also explored the alternative that the HO state is part of an overall conscious state. He calls such states “higher-order global states” (HOGS) whereby a lower-order unconscious state is “recruited” into a larger state, which becomes conscious partly due to the implicit self-awareness that one is in the lower-order state. Van Gulick has also suggested that conscious states can be understood materialistically as global brain states.

2.2 Defending Reductive Representationalism

2.2.1 Reduction and Explanation

Although it is possible to be a nonreductive representationalist (Chalmers 2004), most representational theories of consciousness are reductionist. (p.16) The classic notion at work is that consciousness, or individual conscious mental states, can be explained in terms of something else or in some other terms. It is worth mentioning that one prominent and influential model of reduction treats it as a form of explanation (Kemeny and Oppenheim 1956). Ney (2008) explains that “reductionists are those who take one theory or phenomenon to be reducible to some other theory or phenomenon. For example, a…reductionist about biological entities like cells might take such entities to be reducible to collections of physico-chemical entities like atoms and molecules.” Explanation is certainly the ultimate goal of a reductionist theory of consciousness; that is, we want to explain what makes a mental state conscious.

Although Kemeny and Oppenheim had eliminativist leanings, one need not go that far in applying their model to consciousness. We can and should acknowledge that there really are conscious mental states, but also aspire to show that they can be explained in terms of a “base theory” devoid of consciousness-laden terms. Similarly, although their model of reduction employs the notion of reducing one theory to another, we can extend the idea to explaining entities, events, or phenomena such as conscious mental states. The familiar and successful example of explaining life in biological or cellular terms reminds us that such a reduction is not only possible but desirable.

Another reason to favor a reductionist approach is simply that nonreductive theories seem primarily motivated by the perceived lack of a plausible reductionist alternative. That is, it often seems to me that nonreductive accounts are mainly default positions stemming from the (correct or incorrect) conclusion that a given reductionist approach has failed. In some ways, antireductionism results from giving up on a reductionist approach. However, it would still seem odd to treat nonreductionism as an equally plausible explanation if there were also a viable reductionist account. And, of course, I view HOT theory as offering just such an account. It is hard to imagine that someone would adhere to a nonreductive approach just for its own sake. Are there, for example, any nonreductionists about life anymore?

With regard to explaining consciousness, however, we must distinguish between those who attempt such a reduction directly in physicalistic, such as neurophysiological, terms and those who do so using mentalistic terms, such as unconscious mental states or other cognitive notions. As I mentioned earlier, representationalists favor the latter strategy. I agree with Carruthers that those who currently attempt to reduce consciousness more directly in neural or physical terms “leap over too many explanatory levels (p.17) at once.” (2005, 6). This is a point missed by Hardcastle (2004), for example, who mistakenly supposes that HOT theorists are chiefly motivated by the alleged nonreductionist divide between mind and brain or by some inherently mysterious explanatory gap (Levine 1983). Hardcastle also fails to appreciate that HOT theorists are very much open to a later second-step reduction to the neurophysiological, a point made by Rosenthal on several occasions.

Another general reason for a mentalistic approach is to blunt the force behind the so-called multiple realizability of conscious states. The idea here is that it seems perfectly possible for there to be other conscious beings, such as aliens or radically different animals, who can have those same kinds of mental states but be extremely different from us physiologically. It seems that commitment to a “type-type” identity theory, the view that mental state types (or properties) are identical with neural properties, leads to the undesirable result that only organisms with brains like ours can have conscious states (Fodor 1974). Thus most materialists wish to leave room for the possibility that mental properties can be “instantiated” in different kinds of organisms. Type-type identity theory is the very strong thesis that mental properties, such as “having a desire to drink some water” or “being in pain,” are literally identical with a brain property of some kind. Such identities were originally meant to be understood as being on a par with, for example, the scientific identity between “being water” and “being composed of H2O” (Place 1956; Smart 1959), but this failed to acknowledge the multiple realizability of mental states. So I take it that one advantage of HOT theory is that it is not committed to any direct reduction of consciousness to neural activity. Nonetheless HOT theorists are typically still materialists who desire to show how HOT theory might be realized in our brains.

2.2.2 Gaps, Zombies, and Phenomenal Concepts

Some philosophers have argued that there is a potentially permanent explanatory gap between our understanding of consciousness and the physical world (Levine 1983, 2001) and that we do not, or even cannot, understand how consciousness arises from brain activity (Chalmers 1995). If they are correct, then there could not be an ultimately successful reductionist account of consciousness.

McGinn (1991), for example, goes so far as to argue that we are not cognitively equipped to understand how consciousness is produced by the brain. We are “cognitively closed” with respect to the mind–body problem much as a rat or dog is cognitively incapable of solving, or even understanding, calculus problems. McGinn concedes that some brain property (p.18) produces conscious experience, but we cannot understand how it does so, and we cannot come to know what that brain property is. Our concept-forming mechanisms will not allow us to grasp the physical and causal basis of consciousness. McGinn does not entirely rest his argument on past failed attempts at explaining consciousness in physical terms. Instead he presents a distinct argument for his pessimistic conclusion. McGinn observes that we do not have a mental faculty that can access both consciousness and the brain. We access consciousness through introspection, but our access to the brain comes through outer spatial senses. Thus we have no way to access both the brain and consciousness together, and therefore any explanatory link between them is forever beyond our reach.

Finally, an appeal to the possibility of zombies is also sometimes taken both as a problem for materialism and as a more positive argument for some form of dualism, such as property dualism. The philosophical notion of a “zombie” refers to conceivable creatures that are physically indistinguishable from us but lack consciousness entirely (Chalmers 1996). It certainly seems logically possible for such creatures to exist: “The conceivability of zombies seems…obvious to me.…While this possibility is probably empirically impossible, it certainly seems that a coherent situation is described; I can discern no contradiction in the description” (Chalmers 1996, 96). Philosophers often contrast what is logically possible (in the sense of “that which is not self-contradictory”) with what is empirically possible given the actual laws of nature. Thus it is logically possible for me to jump fifty feet in the air, but not empirically possible. The objection, then, typically proceeds from such a possibility to the conclusion that materialism is false because it would seem to rule out that possibility. It has been fairly widely accepted (since Kripke 1972) that all identity statements are necessarily true (that is, true in all “possible worlds”), and the same should therefore hold for mind–brain identity claims. Since the possibility of zombies shows that mind–brain identity claims are not necessarily true, we should conclude that materialism is false.

Some philosophers explicitly draw antimaterialist and antireductionist conclusions from these considerations (Chalmers 1996), while others do not view them as a threat to the metaphysics of materialism (McGinn 1991; Levine 2001). Either way, however, I think there is a plethora of plausible replies to the foregoing lines of argument that would take me too far afield from my main topic.3

I do, however, wish to pause to address one influential reply that involves a claim about a special class of concepts called phenomenal concepts (Loar 1990, 1997). Phenomenal concepts are recognitional concepts. To have (p.19) the phenomenal concept of blueness is to be able to recognize experiences of blueness while having them. The recognitional concept of blueness refers directly to its referent (the physical property of blueness), so there is no other property involved in the reference fixing. Phenomenal concepts are indexical or demonstrative concepts applied to phenomenal states via introspection (Lycan 1996). Carruthers, for example, describes purely recognitional concepts as those “we either have, or can form…that lack any conceptual connections with other concepts of ours, whether physical, functional, or intentional. I can, as it were, just recognize a given type of experience as this each time it occurs, where my concept this lacks any conceptual connections with any other concepts of mine—even the concept experience” (2005, 67).

According to Loar, Carruthers, and others, these concepts mislead us into thinking that any alleged explanatory gap is deeper and more troublesome than it really is. Ironically, it is perhaps McGinn’s own observation about our two distinct concept-forming mechanisms that is used to blunt the force of the problems just described. Given our possession of phenomenal concepts, Loar and others reply that any alleged explanatory gap or lack of identity between the mental and physical can be explained away. If we possess purely recognitional concepts of the form “This type of experience,” we will always be able to have that thought while, at the same time, conceiving of the absence of any corresponding physical or intentional property. On the one side, we are using scientific third-person concepts, and on the other, we are employing phenomenal concepts. We are, perhaps, simply not in a position to understand completely the connection between the two, but the mere possibility of, say, zombies is explained away in a manner that is harmless to materialism. It may be that there is a good reason why such zombie scenarios seem possible, namely, that we do not (at least not yet) see what the necessary connection is between neural events and conscious mental events.4

For my own part, I am not quite convinced that there are phenomenal concepts, at least in the way they are often defined. First, it is unclear that HO theorists need to invoke them to provide a reductionist account of consciousness in mentalistic terms. The so-called phenomenal concept strategy is primarily used by those who wish to reduce consciousness to something expressed in overtly physical terms. As we have seen, this is not the strategy of a HO theorist.

Second, it is not clear to me that there are any concepts that have no “conceptual connections with other concepts, whether physical, functional, or intentional,” as Carruthers puts it. It seems to me that even such (p.20) alleged recognitional or indexical concepts have at least some relation to other concepts possessed by the subject even if they are not concepts framed in physicalistic terms. Rosenthal shares my skepticism: “Even when we recognize something without knowing what type of thing it is, we always can say something about it” (2005, 207). At minimum, there would seem to be many comparative concepts involved in any such description, such as when one sees a darker or lighter shade of a color than has been seen up to that point.

Third, I suppose that one could think of HOTs as indexical or demonstrative thoughts and thus akin to phenomenal concepts in this respect. The idea would be to think of HOTs as having the form “I am in this mental state” or “This is the mental state I am in,” since “I” and “this” are demonstratives and indexicals.5 But I fail to see the advantage of this approach over standard HOTs of the form “I am in mental state M.” Perhaps “I am in this mental state” is less conceptually sophisticated, which might help with respect to the Animals and Infants Theses, but there are still the concepts “I” and “mental state” as constituents of those thoughts. Moreover, I take the fact that there are concepts in the HOTs to be an advantage of HOT theory over, say, HOP theory, for reasons we will see in later chapters.

Perhaps most important for those who do advocate reductionism in purely physical terms, however, is simply recognizing that different concepts can pick out the same property or object in the world. Out in the world there is only the one “stuff,” which we can conceptualize either as “water” or as “H2O.” Recall again the Fregean distinction between meaning (or “sense”) and reference. Two concepts can have different meanings but refer to the same property or object, much like “Venus” and “the Morning Star.” Materialists, then, explain that it is essential to distinguish between mental properties and our concepts of those properties. By analogy, there are phenomenal concepts that employ a phenomenal property to refer to some conscious mental states, such as a sensation of red. In contrast, we can also use concepts couched in physical or neurophysiological terms to refer to that same mental state from the third-person point of view. There is thus only one conscious mental state conceptualized in two different ways: either by employing first-person experiential phenomenal concepts or by employing third-person neurophysiological concepts. It may then just be a “brute fact” about the world that there are such identities, and the appearance of arbitrariness between brain properties and mental properties is just that—an apparent problem leading many to wonder about the alleged explanatory gap. Qualia could then, after all, be identical to physical properties. Moreover, this response provides a diagnosis for why there even (p.21) seems to be such a gap, namely, that we use very different concepts to pick out the same property. With respect to the more general issue of reduction, however, I think that Carruthers (2005, chap. 2) and Block and Stalnaker (1999) rightly criticize the notion that a priori conditionals between the physical and mental are required for a successful reduction, at least for most standard models of explanation (Chalmers and Jackson 2001). I return to this matter in chapter 4.

In any case, I think it is best to adopt what we might call methodological reductionism, whereby we attempt, as a matter of strategy or method, to reduce consciousness to intentionality (or something cognitive) unless it is clearly impossible. It is not time to give up. How can success for such a strategy be ruled out a priori or so soon? It seems premature to declare that any kind of successful reduction is forever hopeless. Of course, there are philosophers who believe more specifically that intentionality itself entails or involves consciousness, which would then make such a reduction impossible. It is to this issue that I now turn.

2.3 Consciousness and Intentionality

The relationship between intentionality and consciousness is itself a major ongoing area of dispute, with some arguing that genuine intentionality actually presupposes consciousness in some way (Searle 1992; Siewart 1998; Horgan and Tienson 2002; Pitt 2004; Georgalis 2006). One way to frame the issue is in terms of the question “Does mentality entail consciousness?” (Gennaro 1995). Notice that an affirmative answer results in a very strong claim; that is, having intentional states (such as beliefs, thoughts, and desires) entails having conscious states. I argue that this is much too strong.

2.3.1 Searle’s Connection Principle

It will be useful first to critically examine Searle’s well-known and controversial Connection Principle (1992, 132), which is offered in support of the entailment claim. It says:

(CP) Every unconscious intentional state is at least potentially conscious.

Searle similarly tells us that the “notion of an unconscious mental state implies accessibility to consciousness” (152). Much of Searle’s argument for CP rests on the notion that every intentional state has “aspectual shape,” which can ultimately be accounted for only via consciousness. The idea is that genuine intentional content must ultimately “seem” a certain way to a creature and so presumably involves a conscious first-person point of view. (p.22) This is largely because Searle thinks that this is the only way to account for the intensionality (with an s) of intentional states. For example, if a person P has the (unconscious) belief that there is water in the pool, P must be able to conceive of that substance under the aspect of “water” (as opposed to, say, H2O). But since only conscious intentionality is intrinsically aspectual, the idea of an unconscious intentional state is parasitic on the conscious variety.

It is indeed widely accepted that intensionality is a mark of intentional states. The idea is that substituting co-referring terms in a statement does not necessarily preserve truth value. A four-year-old child (who knows nothing about chemistry) can know or believe that there is water in the pool, but it would be false to say that she knows or believes that there is H2O in the pool. Searle’s claim, however, is that for there to be unconscious aspectual shape, it must be possible for the organism to have intrinsic aspectual shape. And intrinsic aspectual shape can only arise with reference to a conscious point of view. So what distinguishes an unconscious mental state from other neural happenings is that it is potentially conscious.

Nonetheless, numerous decisive objections to CP have been raised over the years.6 I review some here.

First, the notion of “potential” at work in CP obviously cannot be that of logical or metaphysical possibility. That would surely be too strong. Thus nomological or psychological possibility seems much more reasonable. But then, if we take CP literally, Searle faces the problem that it mistakenly rules out a host of abnormal psychological phenomena, such as deeply repressed states or any unconscious state that could not in fact become conscious owing to brain lesions and the like (Rosenthal 1990).

Second, there seems to be no way for CP to acknowledge intentional states that occur via some forms of perceptual processing. For example, there would seem to be two visual pathways in the brain (Milner and Goodale 1995). Visual processing along the ventral stream pathway is conscious. But visual processing also occurs along the dorsal stream visual pathway, which generates representations not accessible to consciousness. The dorsal stream functions more like an unconscious (and very fast) visual motor system that causes the relevant behavior due to systematic tracking relations with the environment. One might deny that dorsal-stream representations are genuinely intentional, but this would be an extremely odd line to take.

Third, CP seems to entail what Shani calls a “denial of gradualism,” whereby converging lines of empirical evidence show that “the evolution of subjectivity is a gradual process manifesting various levels of ascending (p.23) complexity, each serving as a platform for the emergence of…subjective existence” (2007, 59; see also Shani 2008). As is evidenced by the previous objection, perhaps there are lower animals (such as lizards and rodents) that only have the dorsal-stream visual processing. This seems likely on at least some level of evolutionary development. I fail to see any reason, however, to hold that such animals cannot have any genuinely contentful intentional states (including perceptual states) unless those states could also be conscious. At the least, it seems possible for such an organism to exist. We can and should allow for degrees of intentionality and understanding of the environment.

Fourth, another way to approach the matter is by answering the following question: Can significant explanatory power be achieved by making intentional attributions without attributions of consciousness? It seems to me that the answer is clearly yes, as the animals’ case in the previous paragraph shows. We would, I suggest, still rightly attribute all unconscious intentional states to such animals. Would or should we withdraw intentional attributions to an animal if we later come to agree that it is not conscious? I don’t think so. Such attributions are useful in explaining and predicting animal behavior, but it does not follow that they have merely “as-if” intentionality. In some cases, we may not know if they are conscious. The same, I suggest, would hold for advanced robots. This is not necessarily to embrace some kind of antirealist Dennettean “intentional stance” position (Dennett 1987). For one thing, we might still agree that those systems have genuine internal mental representations.

Finally, the foregoing considerations show us how to challenge more directly Searle’s central premise that there cannot be intrinsic unconscious aspectual shape. Searle thinks that genuine cases of aspectual shape and intensionality cannot be revealed from mere third-person evidence (behavioral or otherwise). For example, he would presumably hold that no third-person evidence could ever justify an attribution of a belief about water as opposed to a belief about H2O. But surely a counterexample is possible. For example, if an unconscious robot displays enough sophisticated behavior that it systematically locates and recognizes a bottle labeled “water” as opposed to bottles labeled “H2O” (among many other water-related behaviors), then we may be warranted in attributing to it the former belief (that is, the belief about where the bottle of water is). Even Searle recognizes that one can have, say, a desire for water and not have a desire for H2O, though water and H2O are the same. His mistake, however, is to suppose that nothing short of a first-person subjective point of view can justify the attribution of one state but not the other (Van Gulick 1995a,b).

(p.24) To be fair, however, Searle’s line of argument does raise a genuine challenge for all naturalistic (or reductionist) theories of mental content, namely, just how to specify or determine intentional contents without a first-person or subjective point of view. One problem raised by Searle is that third-person evidence always leaves the aspectual shape underdetermined to some extent (Searle 1992, 158, 163–164). Or, as Quine (1960) might put it, there would be indeterminacy of intentional content without the first-person evidence.

Several replies are in order here. (1) If the above robot-bottle story makes any sense at all, it is not clear that all such intentional content must be undetermined or underdetermined. Under certain conditions, it at least seems possible to attribute all unconscious intentional states to a system. (2) In some ways, then, Searle simply begs the question against naturalistic theories of content. He is right to demand that his opponent offer a workable theory along these lines, but to rule out success up front again seems premature. Moreover, some of us are not entirely uncomfortable with a theory of content that allows for some degree of indeterminacy if it has other theoretical advantages. (3) Searle seems to think that determinacy can be gained in a straightforward way once we include the first-person point of view. But is this so obvious? The real force behind Quine’s position, I take it, is that even the first-person point of view does not always fix what we mean by a term or concept. It is not always obvious just what I mean by “water” or “rabbit.” Introspective evidence, while important and often reliable, is not infallible and does not always lead to determinacy of content. Does such evidence really tell me whether or not I mean “undetached rabbit parts” when I think about a “rabbit”?

Another important question can be put as follows: what makes a state a mental state (as opposed to, say, a mere information-carrying state)? This question can surely be answered without invoking consciousness at all. One option is to hold that the creature in question must have complex-enough behavior such that simple mechanistic explanations are not sufficient to explain its behavior. More positively, we might demand that creatures or systems display a significant degree of inferential integration (or “promiscuity”) among their intentional states (Stich 1978). The contents of, say, beliefs and desires are interconnected in various ways; thus, beliefs and desires acquire their content within a web or network of beliefs. So, for example, the more “informationally encapsulated” a state is (Fodor 1983), such as in early visual processing, the less likely it is to count as a mental state.

These considerations can also be used in response to the slippery-slope argument that any attempt to explain intentionality that detaches it from (p.25) consciousness leads to the absurd conclusion that intentionality would then be everywhere (Searle 1992, 1995; Strawson 2004). Stomachs would have mental lives, and water really tries (that is, “desires”) to get to the bottom of the hill. Once again, these absurd implications can be blocked by recognizing that stomachs and rivers do not meet the criterion above, namely, that there is no significant degree of inferential connections among their states. Moreover, attributing intentionality to stomachs and rivers does not add any explanatory value to a purely mechanistic (or informational) account. In conclusion, then, I think that CP is false.

Of course, the general claim that “mentality entails consciousness” remains ambiguous. There are numerous interpretations depending on which kinds of mental states are at issue, as well as whether or not we are concerned with state or creature consciousness.7 I think most interpretations are false, but let us briefly consider the following two:

  1. (1) A creature or system cannot have all unconscious beliefs and desires (or “goals”).

  2. (2) A creature or system cannot have all unconscious pains, frustrations, or sufferings.

As I have argued, I think that (1) is false, but (2) might very well be true. For (1), the system or creature might be utterly unconscious but have such intentional states, whereas in (2) a creature would arguably have to be conscious to have any genuine pains or sufferings. Perhaps the difference lies in the fact that some intentional states, such as beliefs, are best understood as dispositions to behave in various ways. On the other hand, (2) does seem true to me. It at least seems much more reasonable to claim that even if there are individual unconscious pains and (perhaps) frustrations, we would likely not attribute such states to a creature if we believed that it was not conscious at all. It seems odd to talk about the frustrations, sufferings, or pains of an utterly unconscious creature or robot. The reason for this is perhaps that our very concept of “suffering” or “pain” is more closely tied to consciousness. Unlike Searle, however, the connection here is not one of state consciousness but rather one of overall creature consciousness. That is, for example, I hold not that each individual pain must be potentially conscious but that attributions of unconscious pains make sense only if we also think that the creature in question is conscious.

2.3.2 Phenomenal Intentionality

Reductive representationalists hold that intentionality is separable from consciousness, a view that Horgan and Tienson (2002) reject and call (p.26) separatism. They argue for what is called phenomenal intentionality or “cognitive phenomenology.” One rationale for separatism is to make a reductionist explanation of consciousness possible. But if intentionality is deeply intertwined with consciousness, then a reductionist explanation would be difficult or perhaps even impossible to obtain. And some argue that beliefs, desires, and other intentional states themselves have phenomenology.

Horgan and Tienson distinguish the Intentionality of Phenomenology (IP) from the Phenomenology of Intentionality (PI). They state PI as follows:

(PI) “Mental states of the sort commonly cited as paradigmatically intentional, when conscious, have phenomenal character that is inseparable from their intentional content” (2002, 520; italics mine).

In addition they advocate the claim that “there is a kind of intentionality, pervasive in human mental life, that is constitutively determined by phenomenology alone” (520; italics mine).

Although Horgan and Tienson’s purpose is not explicitly to reject reductive representationalism, the impression given is that PI is a threat to reductionism or naturalism. However, a careful reading of the foregoing quotations reveals that PI is compatible with reductionism and consistent with a negative answer to the question “Does mentality entail consciousness?” The main issue, as I see it, is their starting point, namely, the first-person human point of view. They primarily have in mind paradigmatic human cases of intentional states, which they argue involve phenomenology. So, for example, there is something it is like for us to think that rabbits have tails, believe that ten plus ten equals twenty, or desire Indian food. The consciousness in question is presumably not merely accompanying associated images of rabbits or food (Lormand 1996) but rather intrinsic to the intentional states themselves. But it still does not follow that intentionality per se entails consciousness or phenomenology, as we have already seen in the previous subsection. There may be some intentional states that could not become conscious or even an organism (or robot) with all unconscious intentional states.8

Moreover, Horgan and Tienson often seem more concerned with the viability of narrow content than with the separability of intentionality and consciousness. But as far as I can see, believing in narrow content is also not inconsistent with reductionism (Carruthers 2000, 2005). Like Carruthers, I also hold that there is narrow content. This combination of views may not be typical among representationalists, but it is hardly inconsistent.

We should also distinguish, as Horgan and Tienson do, the phenomenology of attitude type (desires, thoughts, beliefs, wonderings, etc.) from the (p.27) phenomenology of content (the same attitude but with different content). I raise three points here:

  1. (1) I am inclined to agree that there is phenomenal intentionality for most intentional attitude types. It does indeed seem right to hold that there is something it is like to think that rabbits have tails, believe that ten plus ten equals twenty, or have a desire for some Indian food. But, again, this is no threat to reductionism, because a representationalist can simply agree that those kinds of mental states need to be added to the list of conscious mental states for which we need an explanation. For example, a HOT theorist might accept that one’s thought or hope becomes conscious when a suitable (unconscious) HOT is directed at it. There is little reason to resist the idea that my (conscious) desire to write a good book or my (conscious) thought that I am on sabbatical has a phenomenological aspect. But this does not imply that each individual intentional state is actually or potentially conscious.

  2. (2) It seems to me, however, that there is something importantly different about beliefs and knowledge, on the one hand, and desires, wonderings, and thoughts, on the other. Beliefs and knowledge seem to be purely dispositional states, in contrast to, say, occurrent episodes of thinking. In the former case, I think what we really have in mind are cases of consciously introspecting our beliefs or knowledge so that the objects of conscious thoughts are conscious. Is there something it is like to believe, as opposed to think about, the cat in the tree? I don’t think so. Thus it is not even clear that there are first-order conscious beliefs or knowledge at all (Gennaro 1996, 36–43).

  3. (3) It is also doubtful that there is a different phenomenology for every change in content. For example, let us agree that there is a phenomenological difference between thinking about a one-thousand-sided figure and thinking about a four-sided figure. But it still seems wrong to hold that there is a phenomenological difference between thinking about a 999-sided figure and a 998-sided figure. Is there a phenomenological difference between wondering whether a distant star is 800 light-years away or 850 light-years away? Just how fine grained can contents be such that there is a phenomenological difference? One can easily generate an infinite number of different contents for each single attitude type, but it seems unlikely that there is a phenomenological difference for each pair.

Finally, it is worth remembering that in HOT theory (or something close to it), consciousness entails intentionality, but not vice versa. However, an appropriate representation of a representation does entail consciousness and is constitutive of it. I now turn to a preliminary defense of HOT theory.

(p.28) 2.4 HOT Theory: An Initial Defense

In this section, I offer a preliminary defense of HOT theory. I ask the reader for some patience as a more thorough defense and additional details of my own theory will become clearer throughout the book.

2.4.1 The Transitivity Principle

It is natural to start with the highly intuitive claim that has come to be known as the Transitivity Principle (TP). One motivation for HOT theory is the desire to use this principle to explain what differentiates conscious and unconscious mental states:

  (TP) A conscious state is a state whose subject is, in some way, aware of being in it (Rosenthal 2000a, 2005).9

Thus, when one has a conscious state, one is aware of being in that state. For example, if I am having a conscious desire or pain, I am aware of having that desire or pain. HOT theory says that the HOT is of the form “I am in M now,” where M references a mental state. Conversely, the idea that I could be having a conscious state while totally unaware of being in that state seems very odd (if not an outright contradiction). A mental state of which the subject is completely unaware is clearly an unconscious state. For example, I would not be aware of having a subliminal perception, and thus it is an unconscious perception. I view the TP primarily as an a priori or conceptual truth about the nature of conscious states. It is interesting to note that many non-HOT theorists agree with the TP, especially those who endorse some form of self-representationalism according to which conscious mental states are also directed back at themselves in some sense.10

One can also find a similar claim in Lycan’s (2001a) argument where premise (1) just is the TP. Moreover, he treats it as a “definition,” which suggests that it is a conceptual truth. The entire argument runs as follows:

  (1) A conscious state is a mental state whose subject is aware of being in it.

  (2) The “of” in (1) is the “of” of intentionality; what one is aware of is an intentional object of the awareness.

  (3) Intentionality is representational; a state has a thing as its intentional object only if it represents that thing.

Therefore,

  (4) Awareness of a mental state is a representation of that state. (From 2, 3)

Therefore,

  (5) A conscious state is a state that is itself represented by another of the subject’s mental states. (From 1, 4)

(p.29) I should say that Lycan’s argument does not necessarily support HOT theory as opposed to his favored HOP theory, but I will argue against HOP theory in the next chapter. Moreover, the argument does not, strictly speaking, rule out a self-representational account because (5) does not necessarily follow from (1) and (4). For example, a self-representationalist will say that the representing state need not be distinct from the represented state (Gerken 2008). To be fair to Lycan, however, much of the work on self-representationalism referenced in this book occurred after his 2001a piece was published. In addition, Lycan clearly intended to be arguing for a reductive representational account, which is typically not the self-representational view. Thus Lycan’s argument might be too simple, but it can be supplemented by additional argumentation. HOT theorists often employ an “argument by elimination” strategy against various other theories of consciousness (Carruthers 2000; Rosenthal 2004).

One might object that many HO theorists hold that the TP is an empirical (as opposed to an a priori) claim. Indeed, Rosenthal himself says, “The theory doesn’t appeal to, nor is it intended to reflect, any conceptual or metaphysically necessary truths” (2005, 9). But he also refers to the TP as a “truism” (8), which seems to suggest that it is a conceptual, or at least “folk psychological,” truth of some kind. Rosenthal also often asserts the “intuitively obvious” truth of TP and seems to use a priori reasoning in various places. Bill Lycan has also told me, in e-mail correspondence, that he wonders if HO theories are “nearly trivially true.” In any case, if I differ from other HO theorists on the extent to which HO theory is a conceptual truth or is known a priori, then so be it.

There is also an importantly related issue here. If “an empirical claim” means “in principle empirically falsifiable” or “consistent with and sometimes supported by empirical and scientific evidence,” then I certainly agree that HO theory is empirical. A conceptual or necessary truth might also be empirical in the sense that it can sometimes also be supported or falsified by empirical evidence. We might claim to know that some proposition is true a priori but then come across empirical findings that falsify it. Indeed, this happens often in philosophy of mind when facts about abnormal psychological phenomena call into question what seem to be obvious conceptual truths, such as when the existence of Anton’s syndrome (blindness denial) forces us to doubt the view that we cannot be mistaken about our ability to see. Another case would be falsifying what Descartes surely took to be conceptually true, namely, a kind of “self-intimation” thesis that denies the very possibility of unconscious mental states and says that if one has a mental state, then one knows that one is in it. In such cases, we typically later conclude that these propositions were not really known in the first place.

(p.30) 2.4.2 Other Aspects of HOT Theory

Another central motivation for HOT theory is that it purports to help explain how the acquisition and application of concepts can transform our phenomenological experience. Rosenthal invokes this idea with the help of several well-known examples (2005, 187–188). For example, acquiring various concepts from a wine-tasting course will lead to different experiences from those enjoyed before the course. I acquire more fine-grained wine-related concepts, such as “dry” and “heavy,” which in turn can figure into my HOTs and thus alter my conscious experiences. As is widely held, I will literally have different qualia due to the change in my conceptual repertoire. As we learn more concepts, we have more fine-grained experiences and thus experience more qualitative complexities. Conversely, those with a more limited conceptual repertoire, such as infants and animals, will have a more coarse-grained set of experiences. Much the same goes for other sensory modalities, such as the way that I experience a painting after learning more about artwork and color. These considerations do not, of course, by themselves prove that newly acquired concepts are constitutive parts of the resulting conscious states, as opposed merely to having a causal impact on those states. Nonetheless, I will argue in subsequent chapters that it is more plausible to suppose that concepts are indeed constitutive parts of conscious states because it is better to construe (unconscious) HOTs as intimately bound up with the lower-order states.

Let us also consider a common initial objection to HOR theories, namely, that they are circular and lead to an infinite regress. For example, it might seem that HOT theory results in circularity by defining consciousness in terms of HOTs. It might also seem that an infinite regress results because a conscious mental state must be accompanied by a HOT, which must in turn be accompanied by another HOT, ad infinitum. However, the standard reply is that when a conscious mental state is a first-order world-directed state, the HOT is not itself conscious; otherwise circularity and an infinite regress would follow. When the HOT is itself conscious, there is a yet-higher-order (or third-order) thought directed at the second-order state. In this case, we have introspection, which involves a conscious HOT directed at an inner mental state. When one introspects, one’s attention is directed back into one’s mind. For example, what makes my desire to write a good book a conscious first-order desire is that an unconscious HOT is directed at the desire. In this case, my conscious focus is directed at the book and my computer screen, so I am not consciously aware of having the HOT from the first-person point of view. When I introspect that desire, however, I then have a conscious HOT (accompanied by a yet higher, third-order, HOT) directed at the desire itself (Rosenthal 1986, 1997). Figure 2.1 is one way to illustrate HOT theory. (p.31)


Figure 2.1 The structure of conscious mental states according to the HOT theory of consciousness.

(p.32) Another related and compelling rationale for HOT theory and the TP is as follows (based on Rosenthal 2004, 24): A non-HOT theorist might still agree with HOT theory as an account of introspection or reflection, namely, that it involves a conscious thought about a mental state (Block 1995). This seems to be a fairly common sense definition of introspection that includes the notion that introspection involves conceptual activity. It also seems reasonable for anyone to hold that when a mental state is unconscious, there is no HOT at all. But then it stands to reason that there should be something “in between” those two cases, that is, when one has a first-order conscious state. So what is in between no HOT at all and a conscious HOT? The answer, of course, is an unconscious HOT, which is precisely what HOT theory says. Moreover, this explains what happens when there is a transition from a first-order conscious state to an introspective state: an unconscious HOT becomes conscious.11
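The three cases just distinguished can also be pictured with a small toy model. This is my own illustration, not the author’s formulation, and every name in it is hypothetical: a state with no HOT at all is unconscious; a state targeted by an unconscious HOT is a first-order conscious state; and a state whose HOT is itself targeted by a third-order thought, and so is itself conscious, is introspected.

# A toy sketch of the layered structure described in the text.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MentalState:
    description: str
    target: Optional["MentalState"] = None  # None for world-directed states

def is_conscious(state: MentalState, mind: list[MentalState]) -> bool:
    """A state is conscious when some other state in the mind is directed at it."""
    return any(s.target is state for s in mind)

def is_introspected(state: MentalState, mind: list[MentalState]) -> bool:
    """A state is introspected when a HOT directed at it is itself conscious."""
    return any(s.target is state and is_conscious(s, mind) for s in mind)

# Case 1: an unconscious desire -- no HOT at all.
desire = MentalState("desire to write a good book")
mind = [desire]
assert not is_conscious(desire, mind)

# Case 2: a first-order conscious desire -- an unconscious HOT targets it.
hot = MentalState("I am in this desire state", target=desire)
mind.append(hot)
assert is_conscious(desire, mind) and not is_conscious(hot, mind)

# Case 3: introspection -- a third-order thought makes the HOT itself conscious.
third = MentalState("I am having a thought about my desire", target=hot)
mind.append(third)
assert is_conscious(hot, mind) and is_introspected(desire, mind)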

HO theorists further agree that the HO state must become aware of the LO state noninferentially. We might even suppose that the HO state must be caused noninferentially by the LO state in order to make it conscious. The point of this condition is mainly to rule out alleged counterexamples to HO theory, such as cases where I become aware of my unconscious desire to kill my boss because I have consciously inferred it from a session with a psychiatrist, or where my envy becomes conscious only after I make inferences based on my own behavior. The characteristic feel of such a conscious desire or envy may be absent in these cases, but since awareness of them arose via conscious inference, the HO theorist handles them by adding this noninferential condition.

Finally, it is worth mentioning that there is no reason in principle to rule out the possibility of experimental data supporting HOT theory and, in particular, the continuous presence of unconscious HOTs. Despite her scathing but somewhat misdirected criticism of HOT theory, Hardcastle (2004, 290–294) suggests that the ubiquitous presence of unconscious HOTs could find empirical support via a modified priming task. There is no reason why some of the methods used to indicate the presence of unconscious first-order mental states could not, if suitably modified, also be used to indicate the presence of unconscious HOTs. For example, one well-known method is subliminal priming, which refers to the effects on subsequent behavior of stimuli that are not consciously detected (Marcel 1983). Unconscious mental processes can influence our conscious mental states.

For example, Jacoby, Lindsay, and Toth (1992) briefly presented completed words before presenting a target word stem, such as presenting RESPOND followed by ___OND. Subjects were then told not to use the (p.33) completed word to complete the stem. Subjects would nonetheless be primed unconsciously to give the flashed word even though they had been instructed to disregard it. In such an opposition condition, subjects would take longer to answer questions for which they had just been primed with an answer that they could not use. But when they were told to use the completed word, priming would work to their advantage, and their reaction times should be shorter. By comparing response times between these two conditions, as well as their respective error rates, we get some idea of the influence that unconscious states can have on subjects’ conscious answers.
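
The comparison just described can be made concrete with a short sketch in Python. The numbers below are invented for illustration and are not Jacoby, Lindsay, and Toth’s data:

    from statistics import mean

    # Each trial records (reaction_time_ms, error) under one instruction condition.
    # Opposition: subjects must NOT use the briefly presented word; inclusion: they may.
    opposition_trials = [(820, True), (760, False), (845, True), (790, False)]
    inclusion_trials = [(610, False), (655, False), (600, False), (640, True)]

    def summarize(trials):
        reaction_times = [rt for rt, _ in trials]
        errors = [err for _, err in trials]
        return mean(reaction_times), sum(errors) / len(errors)

    opp_rt, opp_err = summarize(opposition_trials)
    inc_rt, inc_err = summarize(inclusion_trials)

    # Slower, more error-prone responses in the opposition condition indicate that
    # the unconsciously primed word intruded on answers subjects could not use.
    print(f"Opposition: mean RT {opp_rt:.0f} ms, error rate {opp_err:.2f}")
    print(f"Inclusion:  mean RT {inc_rt:.0f} ms, error rate {inc_err:.2f}")
    print(f"Estimated priming effect: {opp_rt - inc_rt:.0f} ms slower under opposition")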

Hardcastle suggests that we “can and should use a similar methodology to determine whether we have unconscious HOTs…co-active with any conscious states.…We need a priming task that would test whether we can recognize that we were aware of a series of target conscious events faster or with fewer errors than other aspects of the same events. If we can, then that would be some evidence that we are unconsciously aware that we are aware” (2004, 292). She gives an example of one possible experiment. We flash a series of simple scenes (such as a cat on a mat or a dog with a bone) for a half second or so, long enough to reach consciousness. Each scene is then replaced by the same masking stimulus, which prevents subjects from studying the stimulus. We can then ask about their conscious experience (did you see a bone?) or about the scene (was the dog next to the bone?). With appropriate controls in place, if we have unconscious HOTs “accompanying all conscious experiences, then HOTs should prime our behavior with regard to reacting to the fact that we are conscious” (292), and subjects should answer the former questions (about conscious experience) with fewer errors than the latter (about the scene). To my knowledge, however, these kinds of experiments have not been done to date. Aside from this specific suggestion, there should be some way to design experiments that could provide evidence for or against HOT theory.
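
If such an experiment were run, the key comparison could be scored roughly as follows. This Python sketch uses invented trial records and stands in for no actual study:

    from collections import defaultdict

    # Each record: (question_type, answered_correctly). "experience" questions ask
    # about the subject's conscious experience (did you see a bone?); "scene"
    # questions ask about the stimulus itself (was the dog next to the bone?).
    trials = [
        ("experience", True), ("experience", True), ("experience", False),
        ("scene", True), ("scene", False), ("scene", False),
    ]

    correct, total = defaultdict(int), defaultdict(int)
    for question_type, answered_correctly in trials:
        total[question_type] += 1
        correct[question_type] += answered_correctly

    accuracy = {q: correct[q] / total[q] for q in total}
    print(accuracy)

    # On Hardcastle's suggestion, reliably higher accuracy (with suitable controls)
    # on "experience" questions than on "scene" questions would be some evidence
    # that unconscious HOTs accompany conscious states.
    if accuracy["experience"] > accuracy["scene"]:
        print("Pattern consistent with ever-present unconscious HOTs")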

2.5 More on Mental Content

Now that we have established a prima facie case for HOT theory, let us return to mental content. In the end, I think that a HOT theorist could be relatively neutral with respect to theories of mental content. It is not clear that a HOT theorist must be wedded to any particular theory of content. Nonetheless it is fair to ask any proponent of HOT theory just where one’s sympathies lie, and some of the details might also affect how one handles various objections. Three areas need to be addressed:

(p.34) (1) The first has to do with exactly which theory of content is preferred. That is, just how do mental representations and their constituent concepts acquire their content (or “meaning”)? What determines their content? Many such theories are on offer. Perhaps the most common division among naturalistic views is between causal-informational (Stampe 1977; Dretske 1981, 1988; Fodor 1981, 1990) and functional theories (Block 1986; Harman 1973). Causal-informational theories hold that the content of a mental representation is grounded in the information it carries about what does, or would, cause it to occur. Mental states acquire their content by standing in appropriate causal relations to objects and properties in the world. The basic idea is that, say, thoughts about dogs are about dogs, and mean “dog,” because dogs cause the thoughts that our minds use to keep track of dogs. Functional theories hold that the content of a mental representation is grounded in its causal or inferential relations to other mental representations. My preference is with causal theories, though they are also sometimes supplemented in various ways, such as by teleological or biological considerations.12

Causal theories, however, do face some well-known difficulties. For example, a very crude causal theory cannot be sufficient for specifically mental content, not to mention conscious content. For one thing, causal relations abound where no mentality exists at all, such as with tree rings and thermostats. Perhaps most important is the disjunction problem, which shows that a simple causal story cannot properly isolate the correct causal relation. A horse might normally cause the mental tokening of the concept “horse,” but why not “saddle” instead? We thus encounter the related possibility and problem of misrepresentation, which any theory of representation should recognize. Perhaps cows (say, not seen in proper lighting) sometimes cause mental representations of “horse.” How is this explained? Does “horse,” then, represent either cows or horses? Getting the extension of a mental representation right is paramount for any theory of content. It should be noted that we are mainly concerned with empirical objects and properties.

I will not pretend to have a novel solution to these ongoing disputes. Clever attempts to solve these problems from the likes of Dretske and Fodor have left many dissatisfied. For example, Dretske posits a learning period during which mental content is fixed. Once the learning period ends, it is then possible for the mental representation to be misapplied to (and thus to misrepresent) the corresponding object or property. Although this overall strategy may be right in some regard, it has been met with significant criticism (Slater 1994; Prinz 2002). For example, it is well known that children overgeneralize their concepts during the learning process itself.

(p.35) Fodor (1987, 1990) puts forth an asymmetric dependence theory based on the observation that informational relations depend on representational relations, but not vice versa. An important asymmetry is at work here. For example, if mental representations (or tokens of a mental state type) are reliably caused by horses and cows-on-dark-nights, then they also carry information about all those objects. If, however, the mental representation “horse” is tokened in response to a cow on a dark night, this tokening depends on the more fundamental relation between horses and horse representations. In other words, if it were not the case that horses caused “horse” concepts or mental representations, then cows would not token “horse” either. Thus the content-determining causes are more fundamental in an important sense.

For my money, the best attempts to handle these problems can be found in the related work of Rupert (1999) and Prinz (2002). They build on Dretske’s notion of a learning period but appeal to the actual history of causal interactions between a mental representation and what it represents. Rupert offers a modified causal view, at least for natural kind terms, called the best test or causal-developmental theory, according to which there is an actual-history requirement on how a mental representation acquires its content. The basic idea is that content is determined by a substantive developmental process, shaped by the subject’s interactions with the environment. A mental representation R “has as its extension the members of natural kind K if and only if members of K are more efficient in their causing of [R] in S than are the members of any other natural kind” (Rupert 1999, 323; italics mine). The notion of “efficiency” is cashed out in terms of numerical comparisons between the past relative frequencies (PRFs) of certain causal interactions (cf. Usher 2001).

So in response to the disjunction problem, the idea is that although every cat is a mammal, the PRF of cats relative to the concept “cat” is much higher than that of mammals relative to the same concept. Only PRFs resulting from a substantial number of interactions matter. With respect to the earlier example, the concept “horse” will not represent cows because that concept will be caused much more frequently by horses. What determines content is the success rate (that is, the percentage of interactions) with which an object or property causes tokenings of the mental representation R, not simply whichever stimulus most commonly causes R. Similar considerations explain misrepresentation after a concept is acquired.
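
Rupert’s comparison can be illustrated with a brief sketch; the counts below are invented, and the simple ratio is only a crude stand-in for his more careful statistical proposal:

    # Hypothetical tallies of past causal interactions between members of a kind
    # and tokenings of the mental representation R (here, the concept "cat").
    interactions = {
        "cat": {"encounters": 500, "caused_R": 450},
        "mammal": {"encounters": 2000, "caused_R": 600},
        "dog": {"encounters": 400, "caused_R": 20},
    }

    def past_relative_frequency(record):
        # Tokenings of R per encounter with members of the kind.
        return record["caused_R"] / record["encounters"]

    for kind, record in interactions.items():
        print(f"{kind}: PRF = {past_relative_frequency(record):.2f}")

    # The kind whose members most efficiently cause R fixes R's extension, even if
    # another kind (here, mammals) has caused more tokenings in absolute terms.
    best_kind = max(interactions, key=lambda k: past_relative_frequency(interactions[k]))
    print(f"Extension of R on this toy comparison: {best_kind}")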

In a somewhat related manner, Prinz (2002, 250) urges that the “intentional content of a concept is the class of things to which the object(s) that caused the original creation of that concept belong.” Again, what matters (p.36) is the actual causal history of a concept. More specifically, mental content is “identified with those things that actually caused the first tokenings of a concept (what I call the ‘incipient causes’), not what would have caused them” (250). So both nomological covariance and incipient causes are necessary to determine intentional content. “Incipient causes are a special subset of actual causes” (251). Prinz (2002, 251) summarizes as follows:

X is the intentional content of concept C if (a) Xs nomologically covary with tokens of C and (b) an X was the incipient cause of C.

Prinz explains that clause (b) can solve the disjunction problem. Horses, not cows, are the basis on which the concept “horse” is formed. Not just any causes that happen to occur in the actual history of a concept can fall under the concept’s extension.
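
A minimal sketch can show how the two clauses interact; the data structures here are hypothetical illustrations of my own, not Prinz’s formalism:

    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        covarying_kinds: set = field(default_factory=set)   # clause (a): nomic covariance
        incipient_causes: set = field(default_factory=set)  # clause (b): causes of first tokenings

    def is_intentional_content(kind: str, concept: Concept) -> bool:
        return kind in concept.covarying_kinds and kind in concept.incipient_causes

    # Toy version of the disjunction problem: cows on dark nights may covary with
    # "horse" tokenings, but only horses were among the concept's incipient causes.
    horse = Concept(covarying_kinds={"horse", "cow-on-a-dark-night"},
                    incipient_causes={"horse"})
    print(is_intentional_content("horse", horse))                # True
    print(is_intentional_content("cow-on-a-dark-night", horse))  # False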

Turning back to the HOT theory of consciousness, I believe that it provides important ammunition against the charge that extant representational theories of mental content fail to account specifically for conscious representation, or what has been called “personal level representation” (Georgalis 2006; Kriegel, forthcoming). The complaint is that personal-level representation is a three-place relation (x represents y to S) as opposed to the two-place relation (x represents y) that dominates the literature. And it may well be true that representational theories of content by themselves cannot handle personal-level representation, or that they simply ignore it. According to these theories, the process of content acquisition does indeed seem to occur at the unconscious or subpersonal level. But this should not be a surprise, especially if one is inclined to favor a reductionist approach. In my view, this is all the more reason to hold that a further metarepresentational level is needed for conscious states, one that would include personal-level representation and creature consciousness. This is exactly what HOT theory requires. If we have a plausible causal theory of mental content, but only for unconscious first-order states, then we can see why a HOT is also needed to explain conscious states and content.13

(2) Recall the earlier distinction between Russellian and Fregean contents. Unlike most reductive representationalists, I propose that we should make room for both kinds of content in characterizing a conscious mental state. I see little reason to adopt one at the expense of the other. The contents of conscious states include both Russellian and Fregean elements. Representationalists typically have in mind Russellian contents, but they are not normally thinking in terms of the HOT theory. An advantage of HOT theory is that it can explain how first-order conscious states can embody both kinds of content while retaining its reductionistic credentials. (p.37) So the content of, say, a first-order conscious perception is Russellian, but with the help of the relevant HOT, it is also Fregean. We might thus call the content of the resulting complex conscious state Fregellian. That is, the HOT will typically tell us the way that the objects (or properties) referenced in first-order states are presented to the subject. We might say that the mode of presentation is normally determined by the HOT’s content, that is, by the way that the lower-order state is experienced by the subject. Thus, on this Fregellian construal, the content of a conscious state can be teased apart in a way that accommodates both Fregean and Russellian elements. I qualify and further address the exact nature of the relationship between a HOT and its target in later chapters. Nonetheless this view is still reductive because what accounts for the Fregean content in a conscious state is itself unconscious. This move is not available to first-order representationalists because there is only one level of mental content.14

(3) Given the foregoing construal of Fregellian content, it is also natural to allow for narrow content, at the least, in addition to wide referential content. Thus I suggest that we should opt for “moderate internalism” (or a “two-factor” theory) as opposed to what is called “extreme internalism” (Segal 2000). The extreme internalist holds that there is only narrow content, whereas the moderate internalist allows for both wide and narrow content. Recall from section 2.1 that we can understand narrow content in terms of whatever it is that molecular duplicates share from the first-person point of view, even if the relevant mental states are also individuated widely. Many who favor narrow content recognize that both narrow and wide contents are legitimate, depending on the context. While it is true that most reductive representationalists are extreme externalists who reject the viability of all narrow content, I believe that this is a mistake. Although it is not always easy to specify the nature of narrow content for concepts and intentional contents, there are compelling reasons to allow for it.15

I will not survey all the arguments for and against narrow content (see Brown 2008). My primary focus is on consciousness, not theories of content. Let me briefly offer two reasons to favor narrow content:

(a) Many of us believe that in Putnam’s Twin Earth scenario there is still something mental that is shared between me and my twin with respect to water thoughts, although our intentional contents might differ when individuated widely. Similarly, suppose that two individuals (P and Q) are having subjectively indistinguishable experiences of an angry tiger, though Q is having a hallucination. One way to capture what they have in common is to resort to narrow content. Indeed, their brains are presumably in very similar states, despite the external differences.

(p.38) (b) This last point highlights another motivation for narrow content, namely, that it is needed for causal and psychological explanation. For example, P and Q might behave in very similar ways, such as running screaming to safety. Narrowly individuated contents can parsimoniously explain the behavior of both P and Q. Indeed, it is presumably the narrow contents that cause the behavior, though there is not a tiger at all in the case of Q. As Carruthers puts it: “There is every reason to think that psychological laws (or nomic tendencies) should be framed in terms of contents which are individuated narrowly” (2000, 107). Although wide content has its purposes, narrow content is also needed for psychological explanation. It is important to recognize that narrow content can still be accommodated within a reductionist program, although many of its proponents in fact reject reductionism.16

In conclusion, then, we have made significant progress in establishing the HOT Thesis. Reductive representationalism is a viable strategy to explain consciousness, and HOT theory is a plausible candidate for the task. Intentionality and genuine mental content do not automatically entail consciousness. But much more needs to be done to rule out similar theories of consciousness. I now turn to a critique of several close relatives of HOT theory.

Notes:

(1.) Higher-order representationalism was also arguably anticipated by Leibniz (Gennaro 1999) and Kant (Gennaro 1996). This idea has been revived over the past few decades by a number of philosophers, including Armstrong 1968, 1981; Lycan 1996, 2001a; and Rosenthal 1986, 1997, 2005.

(2.) See Husserl 1913/1931; Sartre 1956; Smith 1986, 2004.

(3.) The literature contains numerous excellent summary articles of these kinds of arguments, e.g., Gennaro 2005a; Kriegel 2007a; Levine 2007; Rowlands 2007; Kirk 2009; Stoljar 2009. See also Block and Stalnaker 1999; Hill and McLaughlin 1998; Perry 2001; and Kirk 2005. Some authors, for example, argue that some things seem possible but really aren’t. Much of the debate centers on various alleged similarities or dissimilarities between the mind–brain and water–H2O cases (or other scientific identities). Indeed, the issue of the exact relationship between “conceivability” and “possibility” is the subject of an important anthology (Gendler and Hawthorne 2002). See also Shear (1997) for specific responses to the hard problem and Chalmers’s counterreplies.

In response to McGinn, for example, one might first wonder why we cannot combine the two perspectives in certain experimental contexts. Both first-person and third-person scientific data about the brain and consciousness can be acquired and used to solve the hard problem. Even if a single person cannot grasp consciousness from both perspectives at the same time, why can’t a plausible physicalistic theory emerge from such a combined approach? Second, it may be that McGinn expects too much, namely, grasping some “causal link” between the brain and consciousness. After all, if conscious mental states are ultimately identical to brain states, then there may just be a “brute fact” that really does not need any further explaining. McGinn’s argument may even presuppose some form of dualism to the extent that brain states are said to “cause” or “give rise to” consciousness, as opposed to using the language of identity.

Much the same goes for Frank Jackson’s well-known (1982) “knowledge argument” against materialism. Jackson asks us to imagine a future where a person, Mary, is kept in a black-and-white room from birth, during which time she becomes a brilliant neuroscientist and an expert on color perception. Mary never sees red, for (p.307) example, but she learns all the physical facts and everything neurophysiological about human color vision. Eventually she is released from the room and sees red for the first time. Jackson argues that it is clear that Mary comes to learn something new, namely, what it is like to experience red. This is a new piece of knowledge, and hence she must have come to know some nonphysical fact (since, by hypothesis, she already knew all the physical facts). Thus not all knowledge about the conscious mind is physical knowledge. One materialist reply is that Mary does not learn a new fact when seeing red for the first time, but rather learns the same fact in a different way. There is only the one physical fact about color vision, but there are two ways to come to know it: either by employing neurophysiological concepts or by actually undergoing the relevant experience and so by employing phenomenal concepts. For a thorough airing of the key issues, see Horgan 1984; Van Gulick 1985, 1993; Ludlow, Nagasawa, and Stoljar 2004; and Alter 2007. It is noteworthy that Jackson (2004) himself no longer takes the argument to refute materialism.

(4.) For more on phenomenal concepts, see Papineau 2002 and Carruthers 2005, chaps. 2 and 5. For much more on the arguments in question, see Carruthers and Veillet 2007; Diaz-Leon 2008; and the essays in Alter and Walter 2007. Tye (2000) was a believer in phenomenal concepts but has recently changed his mind on the issue for reasons I will not articulate here (Tye 2009b).

(5.) One might even develop an alternative HO theory of consciousness, a “quotational” HO theory, which attempts to use such concepts to explain conscious states (Picciuto 2011). For my own part, however, I prefer not to rely so heavily on the existence of phenomenal concepts.

(6.) For more formal versions of Searle’s argument and numerous replies, see, e.g., Van Gulick 1995a, 1995b; Kriegel 2003a; Graham, Horgan, and Tienson 2007; Shani 2007, 2008.

(7.) I distinguish thirteen such interpretations in Gennaro 1995.

(8.) To be fair, both Terry Horgan and Charles Siewert have acknowledged to me in conversation that PI is indeed consistent with reductionism. For further discussion of this issue by another HO theorist, see Lycan 2008.

(9.) I use the phrase “aware of” instead of Rosenthal’s “conscious of” mainly to avoid jargon and potential confusion as well as any appearance of circularity or regress. Rosenthal, of course, has in mind intransitive state consciousness being explained in terms of transitive consciousness.

(10.) Kriegel 2009a. For two actual arguments for the TP, see Janzen 2008, 69–84.

(11.) Thus I obviously disagree with Siewert (1998, 194–202) that something like the TP results from what he calls the “conscious-of trap,” based purely on misleading language or an unjustifiable interpretation. It also seems to me that Siewert sometimes conflates unconscious HOTs with reflection or inner attention (i.e., conscious HOTs).

(p.308) (12.) See also Millikan 1984 and Papineau 1987. Once again, there are some excellent overview articles on causal theories, such as Rupert 2008 and Adams and Aizawa 2010. It may well be that something closer to conceptual role semantics (CRS) is more plausible as an account of mental content for some other kinds of concepts, such as nonexistent objects and logical relations. According to CRS, the meaning of propositional content is determined by the role it plays in a person’s language or cognitive system. The content of a representation is at least partly determined by the inferential connections that it bears to other representations. Overall, however, I take this view to be far less plausible for empirical concepts.

(13.) I revisit this overall theme of mental content and concept acquisition later, especially in chaps. 6 and 7.

(14.) I may be taking some liberties here in my characterization of “Fregean content” since the expression is sometimes used instead to refer to a condition on extensions rather than a psychological mode (or “manner”) of presentation (see Chalmers 2004, 171–173). Nonetheless, what Frege himself meant by “mode of presentation” and how it is related to “sense” is not always clear either. I am most interested in the way that our concepts determine how one experiences outer objects and properties.

(15.) Narrow content has also seen resurgence in recent years among both reductionists and nonreductionists (see Rey 1998; Segal 2000; Horgan and Tienson 2002; Prinz 2002; Chalmers 2003, 2010; and Kriegel 2008). Indeed, as we have seen, one prominent HOT theorist explicitly defends narrow content (Carruthers 2000, 85–86, 105–113). Some of the more technical discussion revolves around so-called two-dimensional semantics, which recognizes two dimensions of the meaning or content of linguistic items. In this approach, expressions and their utterances are associated with two different sorts of semantic values that play different explanatory roles. Typically, one is associated with reference and ordinary truth conditions, while the other is associated with the way that reference and truth conditions depend on the external world. I will not pursue this theme further here.

(16.) It is also well known that allowing for narrow content helps to deflect the force behind so-called inverted spectrum arguments against wide content. Much the same goes for Block’s (1990) well-known Inverted Earth argument, whose main target is wide content. I will not rehearse these arguments here but will say more about them in the next chapter. See Byrne 2010 for an excellent overview.