Building Ontologies With Basic Formal Ontology

Robert Arp, Barry Smith, and Andrew D. Spear

Print publication date: 2015

Print ISBN-13: 9780262527811

Published to MIT Press Scholarship Online: May 2016

DOI: 10.7551/mitpress/9780262527811.001.0001


What Is an Ontology?


Chapter:
1 What Is an Ontology?
Source:
Building Ontologies With Basic Formal Ontology
Author(s):

Robert Arp

Barry Smith

Andrew D. Spear

Publisher:
The MIT Press
DOI: 10.7551/mitpress/9780262527811.003.0001

Abstract and Keywords

Words, pictures, theories, and ideas are representations. We use them primarily to represent entities in reality, though they serve other purposes as well. An ontology is built out of multiple representational elements called ‘terms’. These elements are organized into networks by means of relational links, for example ‘mammal’ linked with ‘animal’ through the hierarchical subtype relation. We define an ontology as a representational artifact and explore the implications of this definition. We outline our preferred realist view of universals, and contrast it with alternative views (nominalism and conceptualism). For the realist, terms in ontologies primarily represent real universals (for example: mammal, cell, molecule), while defined terms and empty terms represent special cases. We distinguish ontologies from representational artifacts of other sorts, including terminologies, and contrast the realist method for ontology development with the concept orientation often favored by terminology developers.

Keywords: representation, universal, particular, hierarchy, defined class, empty terms, relations, realism, nominalism, concept orientation

Introduction

In order to design an ontology, it is important to understand just what an ontology is. Only on this basis can we be clear about both the steps that should be taken in ontology design and the kinds of pitfalls that should be avoided. The goal of this chapter and the next is to provide the basic definitions and distinctions in whose terms the process of ontology design can best be understood. Our definition of “ontology” is the following:

ontology = def. a representational artifact, comprising a taxonomy as proper part, whose representations are intended to designate some combination of universals, defined classes, and certain relations between them1

This definition employs a number of terms that are themselves in need of defining. Understanding these terms and the rationale behind their inclusion in the definition will take us a long way toward understanding what an ontology is.

The first term, “taxonomy,” we can define as follows (where here and in all that follows “universal” and “type” are used as synonyms):

taxonomy = def. a hierarchy consisting of terms denoting types (or universals or classes) linked by subtype relations

The most familiar kinds of taxonomies are those we find in biology (taxonomies of organisms into genera and species, as illustrated in figure 1.1). But taxonomies can be found also in any domain where it is possible to group things together into types or universals based on common features. We will discuss taxonomies in greater detail further on.

By “hierarchy” we mean a graph-theoretic structure (as in figure 1.1) consisting of nodes and edges, with a single top-most node (the “root”) connected to all other nodes through unique branches (thus all nodes beneath the root have exactly one parent node).

Figure 1.1 Fragment of a simple taxonomy of vertebrate animals
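To make this graph-theoretic definition concrete, here is a minimal sketch in Python (our own illustration; the function and the sample edges are invented) that checks whether a set of child-parent links forms a hierarchy in the sense just defined: a single root, exactly one parent for every other node, and a path from every node up to the root.

def is_hierarchy(edges):
    """Check whether (child, parent) links form a hierarchy in the
    sense defined above: one root, one parent per non-root node,
    and a path from every node up to the root."""
    children = {c for c, _ in edges}
    parents = {p for _, p in edges}
    nodes = children | parents
    roots = nodes - children              # nodes that are never a child
    if len(roots) != 1:
        return False                      # must have a single top-most node
    parent_of = {}
    for child, parent in edges:
        if child in parent_of:            # a second parent breaks unique branches
            return False
        parent_of[child] = parent
    root = next(iter(roots))
    for node in nodes - roots:            # every node must reach the root
        seen = set()
        while node != root:
            if node in seen or node not in parent_of:
                return False              # cycle or disconnected node
            seen.add(node)
            node = parent_of[node]
    return True

# The vertebrate fragment of figure 1.1, as (child, parent) pairs:
print(is_hierarchy([("mammalia", "chordata"),
                    ("aves", "chordata"),
                    ("chordata", "vertebrate animal")]))   # True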

By “types” or “universals” we mean the entities in the world referred to by the nodes (appearing here as boxes) in a hierarchy; in the case of figure 1.1, biological phyla, classes, and orders. As we will regularly use the term “entity” in a broad and generic sense, we here provisionally define it as follows:

entity = def. anything that exists, including objects, processes, and qualities

“Entity” thus comprehends also representations, models, images, beliefs, utterances, documents, observations, and so on.

Ontologies Are Representational Artifacts

Ontologies represent (or seek to represent) reality, and they do so in such a way that many different persons can understand the terms they contain and so learn about the entities in reality that these terms represent. Ontologies in the sense that is important to us here are designed to support the development, testing, and application of scientific theories, and so they will to a large degree be about the same sorts of entities as are represented by the general terms in scientific textbooks. Ontologies consist of terms arranged together in a certain way, and terms are an important subtype of representations:

representation = def. an entity (for example, a term, an idea, an image, a label, a description, an essay) that refers to some other entity or entities

When John remembers the Tower Bridge in London, then there is a representation in his mind that is about or refers to an entity other than itself, namely a certain bridge over the River Thames. Similarly, when Sally looks through a microscope at bacteria arrayed on a glass slide, then there are thoughts running through her mind to the effect that “these are E. coli that I am seeing.” These thoughts involve representations that point beyond themselves and make reference to certain entities on the side of reality—in this case bacteria on the slide. It is one of the most basic features of human thought that beliefs, desires, and experiences in general point beyond themselves to certain entities that they are about. Note that a representation (for example, your memory of your grandmother) can be of or about a given entity even though it leaves out many aspects of its target. Note, too, that a representation may be vague or ambiguous, and it may rest on error.

Artifacts

artifact = def. something that is deliberately designed (or, in certain borderline cases, selected) by human beings to address a particular purpose

“Artifact” comes from the Latin ars (“skill”) and factum (“something made”). Artifacts include such things as knives, clothing, paperweights, automobiles, and hard drives. All artifacts are public entities in the sense that they can, at least in principle, be available to and used by multiple individuals in a community.

Representational Artifacts

representational artifact = def. an artifact whose purpose is one of representation

Thus a representational artifact is an artifact that has been designed and made to be about something (some portion of reality), using some public form or format. Representational artifacts include things such as signs, books, diagrams, drawings, maps, and databases.

A key feature of representational artifacts of the sorts important to us here is that they come with rules for their interpretation. Maps do not come merely color coded, they also come with a legend or table that makes it possible to interpret their color coding as representing certain kinds of entities (countries, oceans, mountain ranges, etc.). Such legends have many of the features of ontologies, including the feature of supporting information integration; for example, maps that use a common legend can be more easily compared and combined.

A simple kind of representational artifact would be a drawing made by Sally of Tower Bridge based on her memory of how it looked when she visited London some years earlier. Sally’s memory, and the images in her mind, are cognitive representations. Her drawing, in contrast, is a representational artifact that exists independently of such cognitive representations and transforms them into something that is publicly observable and inspectable. Just as Sally’s memory of Tower Bridge can be better or worse, more or less accurate, so also the representational artifact that she creates on the basis of this memory can be better or worse, and more or less accurate as a representation of the entity to which it is intended to refer.

An ontology is an artifact, since it is something that has been deliberately produced or constructed by human beings to achieve certain purposes, and there is a sense in which—by analogy to Sally’s drawing—it serves to make public mental representations on the part of its human creators. While not all representational artifacts are ontologies, all ontologies are representational artifacts, and thus everything that holds of representational artifacts in general holds also of ontologies.

Representational Units and Composite Representations

Representational units and composite representations are very common types of representations—encompassing practically the whole world of documents, which use written or printed language to represent things in the world. For example, the composite representation “John is drinking a glass of water,” asserted by someone who is watching John, picks out a process in the world. The representational units in this composite representation include “John” and “glass”; these are the smallest referring bits of language contained within the sentence (“J,” “w,” and so on do not refer to or represent anything). Other examples of representational units include icons, names, simple word forms, or the sorts of alphanumeric identifiers we might find in patient records or automobile parts catalogs.

representational unit = def. a representation no proper part of which is a representational unit

composite representation = def. a representation built out of constituent subrepresentations as its parts, in the way in which paragraphs are built out of sentences and sentences out of words

Note that many images are not composite representations in the sense here defined, since they are not built out of smallest representational units in the way in which molecules are built out of atoms. (Pixels are not representational units since they are not representations.) Maps are typically built out of parts that include both representational units (for example, names of towns or hills) and image-like elements (for instance, shading used to represent inclines).

A Note on “Term”

In the following pages we will often make use of the word “term” to refer to the singular nouns and singular noun phrases that form the representational units and composite representations in an ontology. The terms in an ontology are the linguistic expressions used in the ontology to represent the world, and they are drawn as nearly as possible from the standard terminologies used by human experts in the corresponding discipline. (Thus terms are distinct from identifiers of the sorts used in programming languages or from the alphanumeric IDs used in serial numbers or on credit cards.) Examples of terms in our sense include:

aorta

resident of Cincinnati

blood pressure

surgical procedure

smoking behavior

temperature

population

patient

blood glucose level

Terms in this sense can refer to single entities, collections of entities, or types of entities.

The question of what terms an ontology should include is determined (a) by the selected scope of the ontology (which is determined in turn by the purpose the ontology is designed to address), (b) by the available resources for population of the ontology, (c) by the structure of the domain that the ontology is intended to represent, and (d) by consensus among scientific experts about what the relevant entities are in that domain and about what they are to be called.

Ontology, Terminology, Conceptology

In our approach to ontology we assume that it is uncontroversial that ontologies should be understood as a kind of representational artifact, and that the entities represented are entities in reality—such as cells, molecules, organisms, planets, and so forth. Some ontologies contain terms which do not refer to any entities at all because—unknown to the developers—some type of error has been made. But even in those cases the terms in question are included in the ontology with the intention that they should refer. (Something like this was true, in former times, in the case of terms such as “phlogiston” and “ether.”)

The relation between term and referent is to be understood by analogy with the relation of external directedness that is involved, for instance, when we assert that “Oxford” refers to Oxford, or “Ronald Reagan” refers to Ronald Reagan. This is true even where, as in ontologies such as the Mental Functioning Ontology (MFO),2 terms refer to entities—for example, mental processes—that are internal to the mind or brain of human beings. Terms such as “mental process” too, as they appear in ontologies, are intended to refer to portions of reality in just the same sense as do terms referring to physical entities such as molecules or planets.

Confusion arises here in virtue of the fact that, in addition to the relation of reference or aboutness between terms in MFO and their mental targets in reality, there is another sort of relation between language and mind, which we can call the relation from term to concept. This latter relation holds in virtue of the fact that, when people use terms, they may associate these terms with mental representations—sometimes called “concepts”—of various sorts.

Ontology and Terminology: The Case of ISO

The relation from term to concept has played a central role in the discipline known as terminology research, which is in some ways a precursor to contemporary ontology. Terminology research grew as a means of coping with the large technical vocabularies used especially in areas such as commerce, manufacturing, and transport relevant to international trade, and the terminologist is interested in the usage of terms specifically from the point of view of standardization and of translation between technical languages. Terminology research is focused on concepts because, in the eyes of the terminologist, what is transmitted when a term is translated from one language into another is precisely some concept, which the users of the respective languages are held to share in common.

A view along these lines forms the foundation of the terminological work of the International Organization for Standardization (ISO), which pursues the goal of bringing about an “ordering of scientific-technical knowledge at the level of concepts” (emphasis added).3 ISO hopes in this way to support the work of translators, and also to support the collection of data that is expressed in different languages. ISO Standard 1087–1, for example, sees terms as denotations of concepts, defining “concept” as follows:

  • A unit of thought constituted through abstraction on the basis of characteristics common to a set of objects.4

The background to this definition is a view of concept acquisition rooted in the phenomenalist ideas of the Vienna Circle.5 Concepts are acquired, on this view, in virtue of the fact that, as we sense objects in our surroundings, we detect certain similarities—for instance between one horse and another, or between one red thing and another. We then learn to conceive the characteristics responsible for such similarities in abstraction from the objects that possess them.

Concepts are then formed through combination of such characteristics. Characteristics can be combined into concepts in many ways (for instance: {red, spherical}, {diseased, female, nonsmoker}, {with tomato sauce, with mozzarella, with pepperoni}), and for each such combination of characteristics there is, in principle at least, a corresponding concept. Equivalence between terms in different languages is a matter of correspondence between the corresponding bundles of characteristics. The terms are “equivalent,” according to ISO, if and only if they denote one and the same concept.

What ISO leaves out of account—and what is left out of account by the ontologists who have been inspired by ISO—is the question of how we gain access to such concepts, entities that are alleged to exist at some language-independent level. Note, too, that ISO’s own approach to standardization does not consistently follow an approach “on the level of concepts” of this sort. ISO Standard 3166–1, for example, defines a widely used set of codes for identifying countries and related entities. Currently ISO 3166–1 assigns official two-letter codes to 249 countries, dependent territories, and areas of geographical interest. The two-letter code assigned to France, for example, is “FR.” And the code is assigned to France itself—to the country that is otherwise referred to as Frankreich or Ranska. It is not assigned to the concept of France (whatever that might be).

The Concept Orientation

We do not deny that mental representations have a role to play in the world of ontologies. When, for example, human biocurators use an ontology to tag data or literature or museum catalogs, then they will have certain thoughts or images in their minds. And if “concept” is used to refer to their understanding of the meanings of the terms they are using, then they can also be said to have concepts in their minds. Doctors, similarly, can be said to have concepts in their minds when diagnosing patients. Indeed, when a doctor misdiagnoses a patient, then it is tempting to say that there was only the concept in his mind—and that there was nothing on the side of the patient to which this concept would correspond.

For this and other reasons, including the influence of ISO, the view of ontologies as representations of concepts has predominated especially in the field of medical or health informatics.6 More recently, however, this “concept orientation” has been challenged by the “realist orientation” that is defended here.7 The goal of ontology for the realist is not to describe the concepts in people’s heads. Rather, ontology is an instrument of science, and the ontologist, like the scientist, is interested in terms or labels or codes—all of which are seen as linguistic entities—only insofar as they represent entities in reality. The goal of ontology is to describe and adequately represent those structures of reality that correspond to the general terms used by scientists.

Philosophical and Historical Background to Conceptualism

Another source for the view that the terms in ontologies represent our concepts of reality is epistemological, and draws on those strands in the history of contemporary ontology that connect ontology to the artificial intelligence/computer science field of what is called “knowledge representation.”8 Since, it is argued, our knowledge is made up of concepts, representing knowledge—which means in this context roughly: representing logically the beliefs or the ontological commitments of scientists9—must imply representing concepts. This assumption in turn often goes hand in hand with a view to the effect that we cannot know reality directly or know the things in reality as they are in themselves, but rather that we have access to reality only as it is mediated by our own thoughts or concepts.

This is not a new view in the history of philosophical thinking about knowledge. Epistemological representationalism, for example, a view embraced by Kant, is the doctrine to the effect that our perceptions, thoughts, beliefs, and theories are most properly conceived as being about our constructions or projections, and only indirectly (if at all) about mind-independent entities in some external reality. Epistemological idealism, on the other hand, is a more extreme doctrine to the effect that our perceptions and thoughts are not about reality at all, but are entirely about mental objects such as perceptions, appearances, ideas, or concepts, because—for the idealist—that is all there is. In the formulation of the Irish philosopher George Berkeley, for example, “to be is to be perceived.” Analogously, in the field of “knowledge-based systems,” an ontology has been defined as “a theory of what entities could exist in the mind of a knowledgeable agent.”10

Echoing such views, many in the field of knowledge representation have held that ontologies should be understood primarily as representing conceptual items. For example, Tom Gruber, the leader of the ontologist team that gave rise to the iPhone Siri app, influentially defined an ontology as “a formal specification of a shared conceptualization.”11

Realism and Ontology

The view of ontology defended here, in contrast, is one according to which the terms in the ontology represent entities in the world—we might say that the ontology encapsulates the knowledge of the world that is associated with the general terms used by scientists in the corresponding domain.

There is a long and detailed history of debates in philosophy about whether we can have knowledge of an external world, and it is not our intent to rehearse these debates here. However, we can assert with confidence that representationalist and idealist positions are far from constituting the majority view among philosophers, either in the history of philosophy or in the philosophy of today. In regard to the latter, some empirical evidence is provided by the results of the survey presented in table 1.1. Of the 931 philosophy faculty surveyed, only 4.3 percent supported idealism while 81.6 percent favored some form of nonskeptical realism.12

Table 1.1 PhilPapers survey results

External world: idealism, skepticism, or nonskeptical realism?

  Accept or lean toward: nonskeptical realism    760/931 (81.6 percent)
  Accept or lean toward: skepticism               45/931 (4.8 percent)
  Accept or lean toward: idealism                 40/931 (4.3 percent)
  Other                                           86/931 (9.2 percent)

It is indeed true that we cannot perceive reality except by means of the specific sensory and cognitive faculties that we possess. But this in itself provides no reason for thinking that the experiences and concepts that we have do not provide us with information about reality itself. It would provide such a reason only if we had some evidence that our sensory and cognitive faculties were unable to apprehend reality—and this is precisely what is at issue.13

Certainly our cognitive faculties do not deliver the entire truth about reality; but this does not mean that the information that they do deliver should be viewed as nonrepresentative of how reality in fact is. For this, a separate argument is needed. On the position defended here, a version of epistemological realism, the most plausible way of understanding the relation between our cognitive faculties and reality is that our faculties—much like spectacles, microscopes, and telescopes—do indeed provide us with information about reality. They do this a little bit at a time, at different levels of granularity, and with occasional need for correction. One source of this correction is the application of the scientific method, which is itself an ongoing process of data collection and theorizing, using human perceptions supported by scientific experiments, and yielding results which are in their turn still fallible but also to a degree self-correcting in the course of time.

A further argument against the view of ontologies as representing concepts is heuristic in nature. It turns on the fact that acceptance of this view on the part of the developers of ontologies encourages certain kinds of errors, most prominently the sorts of use-mention mistakes that we have already touched upon in the introduction. The Systematized Nomenclature of Medicine (SNOMED),14 a leading international clinical terminology, defined a “disorder” in releases up to 2010 as “a concept in which there is an explicit or implicit pathological process causing a state of disease which tends to exist for a significant length of time under ordinary circumstances.” At the same time it defined “concepts” as “unique units of thought.” From this it follows that a disorder is a unit of thought in which there is a pathological process causing a state of disease, so that to eradicate a disorder would involve eradicating a unit of thought. Recognizing its own confusion in this respect, versions of SNOMED since July 2010 have contained the warning:

Concept: An ambiguous term. Depending on the context, it may refer to the following:

  • A clinical idea to which a unique ConceptId has been assigned.

  • The ConceptId itself, which is the key of the Concepts Table (in this case it is less ambiguous to use the term “concept code”).

  • The real-world referent(s) of the ConceptId, that is, the class of entities in reality which the ConceptId represents.15

Accurately Representing Entities in Reality

What are the implications of our realist view for the understanding of representational artifacts such as ontologies and the terms they contain? Suppose, again, that Sally attempts to create a representational artifact that makes reference to Tower Bridge by drawing a picture. Our view is that it is here not the mental representation in her head, or the memories in her head, that Sally is trying to draw; rather, it is Tower Bridge itself. Should Sally have an opportunity to see the bridge again in the future and to compare it with the drawing that she has made, she may well identify a mistake or an absence of detail in the drawing and decide to correct it in order to create a more accurate representation—and this is so even if her original memory of the bridge contains no such additional information. Additionally, if other people look at the drawing of Tower Bridge and criticize its accuracy, they will engage in this criticism by citing facts, not about a memory or a mental representation, but about the drawing and about the bridge itself. Conceivably, Sally’s memory may be in error, so that the drawing is discovered to be not of Tower Bridge but of, say, Chelsea Bridge. Then she would need, not to correct or enhance her drawing itself, but rather to assign it a new label.

All of this holds true, too, of the representations created by scientists. When constructing such a representation—whether it be a scientific theory presented in a textbook, or the content of a journal article or of a database—the goal is not to represent in a publicly accessible way the mental representations or concepts that exist in the scientists’ minds. Rather, it is to represent the things in reality that these representations are representations of. When one queries the Gene Ontology Annotation Database, for example, in order to find out which HOX gene is responsible for antenna development in Drosophila melanogaster, then one is not interested in the conceptions or mental representations of the authors of the database or of the journal articles that lie behind it; rather, one is interested in the HOX gene itself, and in the process of antenna development in flies.

Respecting the Use-Mention Distinction

We have referred already to the use-mention distinction—the distinction between using a word to make reference to something in reality, and mentioning the same word in order to say something about this word itself. Thus it is one thing to consult (use) the periodic table in order to learn something about the chemical elements; it is quite another thing to talk about (mention) the periodic table as an important innovation in the history of human knowledge. We pointed out that confusion of use and mention is a common type of error in the building of ontologies—an error closely related to the view that terms in ontologies represent or denote concepts in people’s minds.

All that is needed to avoid such errors is careful use of language. Thus one can use the phrase “Tower Bridge” to refer to an object in reality, as in “Tower Bridge is a well-known structure on the River Thames in London.” However, one can also mention the same phrase, as in “‘Tower Bridge’ is used by speakers of English to refer to a structure on the River Thames in London” or “‘Tower Bridge’ is made up of eleven letter tokens of nine letter types from the Latin alphabet.”

Similar considerations apply to the drawing of Tower Bridge discussed earlier. We can use such a drawing in order to explain to someone what Tower Bridge is, and what its characteristic features are. In this case, the drawing is being used as a representation of a certain bridge in London. But we can also mention the drawing, making it and its properties the explicit theme of discourse, for instance in “this drawing is made with paper and pencil,” or “this drawing is 100 years old.” We then make assertions that are about the representation itself, not about that to which it refers.

Use-mention errors are a very common mistake in terminology-focused areas of information technology—something that may have to do with the habit of many computer modelers of employing the very same terms for the elements of the models inside their computers as are used to refer to the real-world objects that these elements stand proxy for. As Daniel Dennett notes, computer and information scientists are often desensitized to use-mention problems because the objects to which their terms refer are entities that are properly at home inside the computer (or inside the realm of mathematical entities).16 In this way refrigerators become identified with (are “modeled” as) refrigerator serial numbers; persons are identified with social security numbers. The following definition of “telephone” was proposed within the Health Level 7 (HL7) community in 2007: “Telephone: a telephone is an observation with a value having datatype ‘Telecom.’”17 As we shall see in chapter 4, a definition of a term in an ontology is a statement of the necessary and sufficient conditions an entity must satisfy to fall under this term. What one gets from HL7 and similar attempts to “model” healthcare reality is an explanation of how the term “telephone” could be used as a part of the representational artifact that is HL7. The use-mention conflation turns on the fact that a telephone is confused with the datatype of a certain representation of a telephone in a certain model. Microsoft HealthVault, similarly, defines a health record item as “a single piece of data that is accessible through the HealthVault Service”;18 it then defines “allergy” as a class that “represents a health record item type that encapsulates an allergy.”19 So, an allergy is defined not as a type of medical condition but rather as a piece of data within Microsoft HealthVault.
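The HealthVault example can be made vivid in code. The following sketch is our own illustration (the class names and record ID are invented, not drawn from HL7 or HealthVault): an allergy as a condition in reality is one kind of thing; a record about that allergy in an information system is another, with properties, such as a record ID, that the condition itself lacks.

from dataclasses import dataclass

@dataclass
class Allergy:
    """An allergy in reality: a condition of a patient (use)."""
    patient: str
    allergen: str

@dataclass
class AllergyRecord:
    """A piece of data about an allergy (mention). It has properties,
    such as a record ID, that the condition itself lacks."""
    record_id: str
    describes: Allergy

condition = Allergy(patient="John", allergen="penicillin")
record = AllergyRecord(record_id="HR-0001", describes=condition)  # invented ID

# Deleting the record does not cure the condition.
print(record.describes.allergen)   # penicillin

Defining “allergy” as a kind of record, as in the HealthVault definition quoted above, collapses exactly this distinction.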

Use-mention confusions need not be fatal in the hands of skilled computer modelers; they are, though, fatal when it comes to building coherent ontologies.

Ontologies Represent Universals, Defined Classes, and the Relations Between Them

At the beginning of this chapter we defined an ontology as “a representational artifact, comprising a taxonomy as proper part, whose representations are intended to designate some combination of universals, defined classes, and certain relations between them.”

So far we have discussed in great detail the first part of this definition: the idea that an ontology is a representational artifact, and we have argued that the best way to understand the goal of building ontologies as representational artifacts is to see ontologies as representing entities in reality rather than concepts or other kinds of mental representations in the minds of human beings. We now turn to the second part of this definition, which specifies what it is in reality that is intended when we speak of “universals, defined classes, and certain relations between them.”

The Goal of Science Is to Represent General Features of Reality

It is a basic assumption of scientific inquiry that nature is at least to some degree structured, ordered, and regular. Scientific experimentation involves in every case observations of particular instances of more general types—this eukaryote cell under that microscope, this portion of H2O in that flask, the cancer in Frank’s body (where “eukaryote cell,” “H2O,” and “cancer” pick out universals). “This” and “that” and “Frank,” here, pick out instances that can be observed in the lab or clinic. The ultimate goal of science is to use observations and manipulations of such particulars20 in order to construct, validate, or falsify general statements and laws; the latter will then in their turn assist in the explanation and prediction of further real-world phenomena at the level of instances.

Ontology is concerned with representing the results of science at the level of general theory (the generalizations and laws of science), not of particular facts. More precisely: it is directed at encoding certain sorts of information about the general features of things in reality, rather than information about particular individuals, times, or places.

Ontological Realism

The question thus arises as to what exactly it is that is general in reality, and what the general terms used by scientists in the formulation of their theories are supposed to be about. The question “what is it that is general in reality?” is roughly the question of what it is that makes scientific generalizations and law-like statements true. What do all the entities that a term such as “eukaryote cell” refers to have in common that makes them together form a type or universal?21 Our preferred answer to this question, which we call ontological realism, says that there is some eukaryote cell universal of which all particular eukaryote cells are instances. On this view, universals are entities in reality that are responsible for the structure, order, and regularity—the similarities—that are to be found there. To talk of universals is to talk of what all members of a natural class or natural kind such as cell, or organism, or lipid, or heart have in common. Thus we capture the fact that the members of a particular kind are in some respect similar by asserting that they instantiate certain corresponding universals. Universals are repeatable, in the sense that they can be instantiated by more than one object and at more than one time, whereas particulars, such as this specific cell, your cat Tibbles, and the first city manager of Wichita, are nonrepeatable: they can exist only in one place at any given time.

In the history of Western philosophy, the form of realism that recognizes universals as existing in their instances in this way has roots in the work of Aristotle. According to Aristotle (on the simplified reading that we presuppose here), universals are mind-independent features of reality that exist only as instantiated in their respective instances. A mind can, by attention and abstraction, grasp universals instantiated by particular things—for example, the two universals redness and ball can be abstracted from the several particular red balls that we see lying around on the floor. These particulars have in common that they are all red and that they are all balls. The universals redness, ball, and spherical shape then exist in these particular instances that we see on the floor. For Aristotle, there must always be particulars (instances) that “ground” the existence of universals in the sense that the universals depend for their existence upon these particulars. On our view, it is such universals that are the primary objects of scientific inquiry, and thus also the primary objects to be represented in a scientific ontology.

We depart from Aristotle in a number of ways, however, many of which have to do with the fact that Aristotle lived before the Darwinian era. One important difference is that we allow universals not only in the realm of natural objects such as enzymes and chromosomes, but also in the realm of material artifacts such as flasks and syringes, and also in the realm of information artifacts such as currency notes and scientific publications.

Metaphysical Nominalism

The major alternative to realism about universals is nominalism. Nominalism is the claim that only particular (nonrepeatable) entities exist: there are no (repeatable) universals, nothing general or common from one object to another in reality at all. When we refer to some kind or category of things like cell, electron, molecule, or spherical shape, we are merely using a name (the term “nominalism” derives from the Latin nomen, “name”) that stands for the plurality of relevant particular entities. There is, on this view, nothing in reality that is responsible for the order and regularity that we seem to observe in nature, or for the fact that things belonging to a kind exhibit similarity with respect to certain features or properties.

For extreme nominalists the general terms that we use pick out collections of entities in reality that are arbitrary in the sense that they reflect merely our chosen groupings of entities and the attachment thereto of general words or concepts. There are no bona fide joints or divisions in reality at the level of kinds; all judgments pertaining to what is general involve our having imposed some order on a reality that does not possess such order in and of itself.

As we have seen, there are some in informatics research who hold a view along these lines according to which an ontology is something like a conceptualization of reality—something that represents (say) a scientist’s view of reality, rather than general features on the side of reality itself. At least some of the proponents of this view embrace it because they believe that we cannot know about reality but only about our own conceptualizations.22 This leaves only concepts as objects of the general knowledge that the sciences aim to achieve (which seems to us to imply that the whole of science would be a branch of psychology, or linguistics).

The debate between realists and nominalists about the status of what is general in reality, too, has a long history, but we shall limit ourselves here to just two of the reasons why we adopt the realist side in this debate.

First, it is not at all clear that nominalists do indeed provide an account of how general terms and concepts can be applied to reality that does in fact avoid the appeal to things like universals. Consider, for example, the explanation of the biological category mammal proposed by the school of what are called “resemblance nominalists.”23 The word “mammal,” they say, is a word that human beings have found it useful to employ in order to group together certain individuals in reality based on perceived or supposed similarities or resemblances among these things. Where an ontological realist will want to insist that there exists a genuine characteristic or feature (a universal) that is common to all of these things, the resemblance nominalist will insist that there exist only the individuals and the relations of similarity among them, and nothing more. But what is to be said about these “relations of similarity” or “resemblances” themselves? Presumably there is some “relation of similarity” R that obtains between all mammals on the one hand, and also some “relation of similarity” R* that obtains between all plants on the other. Clearly R and R* must be different relations, otherwise we would regularly mistake mammals for plants and conversely. So the question for the nominalist is: what is it that makes all instances of observed similarity R the same as one another and also different from all instances of observed similarity R*? While denying the existence of universals such as mammal or plant the nominalist is in danger of simply reintroducing them at the level of the different kinds of similarity relations holding among the corresponding different kinds of things.

A second point against nominalism is that it leaves us with no explanation of the success of science, which enables successful predictions precisely on the basis of general laws. We know of no way to understand this ability except by appeal to the assumption that science does this by concerning itself not with particular instances of things, but rather with repeatable features, forming general patterns or structures, that are instantiated in particular things. Lipid, for example, is a universal that scientists are able to identify not by virtue of what is specific to the fats in John’s body, or the sterols in Professor Jones’s lab, or the fat-soluble vitamins in the bottle at the local pharmacy, but rather by virtue of the universal features or characteristics shared by all of these particular instances.

Universals and Particulars

Particulars, in opposition to universals, are individual denizens of reality restricted to particular times and places. Particulars instantiate universals, but they cannot themselves be instantiated. In virtue of instantiating the same universal, two particulars will be similar in certain corresponding respects. Particulars exist in space and time. It is possible to interact with particulars of many sorts: to see them directly with one’s eyes, to touch and smell them, to photograph them, or to weigh them.

Table 1.2 Borges’s Celestial Emporium of Benevolent Knowledge

In his “The Analytical Language of John Wilkins,” Jorge Luis Borges describes “a certain Chinese Encyclopedia,” the Celestial Emporium of Benevolent Knowledge, in which it is written that animals are divided into

  1. those that belong to the Emperor

  2. embalmed ones

  3. those that are trained

  4. suckling pigs

  5. mermaids

  6. fabulous ones

  7. stray dogs

  8. those included in the present classification

  9. those that tremble as if they were mad

  10. innumerable ones

  11. those drawn with a very fine camelhair brush

  12. others

  13. those that have just broken a flower vase

  14. those that from a long way off look like flies

Source: Jorge Luis Borges, Other Inquisitions: 1937–1952 (Austin: University of Texas Press, 2000), 101.

Universals, in contrast, are accessible only via cognitive processes of a more complex sort.

How do we establish whether a given general term (such as “H2O molecule” or “cell” or “mammal” or “sport utility vehicle” or “former fan of ABBA”) picks out a universal? The answer to this question is not an easy one to formulate (any more than would be the answer to questions such as “How do we establish whether a given statement is true?” or “How do we establish whether a given statement is something that is known to be true, or expresses a law of nature?”). However, just as we can distinguish clear cases of truths (that red is a color) and falsehoods (that the earth is shaped like a cube), so we can distinguish also certain clear cases of general terms that do designate universals (such as names of chemical elements) and certain clear cases of general terms that do not designate universals (such as the majority of the terms listed as designating types of animals in table 1.2).

And while we cannot give any algorithm for determining how such terms are to be identified, a number of rules of thumb for such determination are provided in box 1.1. If a single general term yields a positive answer to all, or almost all, of these questions, then this is a strong positive indication that it refers to a universal.

The decision as to whether a given term does designate a universal may, however, in every case be revised. We may discover, for instance, that a general term refers to multiple distinct diseases (as was the case with “diabetes” and “hepatitis”). Such revisability is not, however, a concern relating specifically to our treatment of universals or of the general terms in ontologies; rather, it is an ineluctable feature of science as a whole.

Empty or Potentially Empty General Terms

An ontology is a representational artifact whose purpose is to represent what is general in reality. An ontology, in other words, is concerned with representing universals. At any given stage in its development science will, for many general terms, give us confidence in the belief that the terms in question designate universals. On the other side, there are some candidate ontology terms where we have similar confidence that they do not denote a universal—for example, “unicorn” or “perpetual motion machine” or “regular smoker and is identical to some prime number.” Such terms do not designate anything at all in the strong sense that there are no particulars to which the terms in question can be correctly applied.

There are, however, examples of general terms for which it is not clear whether or not they designate universals. In the eighteenth century this was briefly the case with the term “phlogiston,” until the term fell out of favor; until recently it was the case with the term “Higgs boson.” Such cases arise particularly in those areas where at any given time the most exciting scientific advances are taking place.


Figure 1.2 An experimental ontology created when use of the term “Higgs boson” was still considered speculative

In general, ontologies will be created to capture the content of established scientific theories—the sort of content that is expressed in textbooks for use by new generations of scientists who want to learn the general theoretical framework that forms the basis of new and controversial hypotheses and methods. Ontologies can in such cases be created experimentally, in order to capture the content of one or more of the alternative hypotheses currently being explored at the fringes of established science. But use of a term in such an ontology remains tentative—in the sense that no ontological commitment is involved—until scientific disputes are resolved and either the term falls out of favor or some referent is securely attached.

Such experimental, or provisional, ontologies (see, for example, figure 1.2) are the equivalent of setting aside terms or codes for future use, for example, when creating a database of serial numbers for items in production. Some serial numbers will be used, in due course, for tracking items actually produced; others might be used for inventory planning or similar purposes, but again in a way that remains tentative—in the sense that no ontological commitment is involved.24

Universal vs. Class

An additional set of problems is created by those general terms often used by science to refer to particulars in reality but in relation to which (in light of our questions in box 1.1) there is no corresponding universal. Examples are “smoker in Leipzig,” “person of the Hindu religion who has bathed in the Ganges,” “Finnish spy,” and so on.

To see how, within the realist framework, such cases are to be dealt with, we start with the distinction between universals on the one hand, and the classes which form their extensions on the other. The universal cell membrane, for example, has as its extension the class of all cell membranes. It is not only universals that have extensions, but also general terms. The extension of the general term “cell membrane” is identical to the extension of the universal cell membrane; and even those general terms for which there is no corresponding universal will, provided they are nonempty, have extensions also.

A class, on our view, is defined as a maximal collection of particulars falling under a given general term. All extensions of (nonempty) general terms are classes. (We leave open the issue as to whether empty general terms have extensions.) Thus the class of mammals, for instance, is the maximal collection of all mammals. The class of H2O molecules is the maximal collection of all H2O molecules. The term “mammal” applies to every member of this class, and every particular to which the term applies is a member of this class. Each universal has a corresponding maximal class as its extension. We might call such classes “natural classes.” The class of all human individuals with less than an inch of hair on their heads picks out a class of individuals in reality, but it is not a natural class, and so there is little reason to think of this class as corresponding to a universal.

But there may be reasons nonetheless to include such a term in an ontology. In performing clinical research, for example, we may have data pertaining to “human beings diagnosed with hypertension,” “human beings born in Vermont,” “human beings whose mother has died,” and so on. Classes corresponding to terms such as these are demarcated on the basis of selection criteria defined by human beings. Thus we will refer to them in what follows as “defined classes.”

There are at least two recognizably distinct families of defined classes:

  1. Classes defined by general terms abbreviating logical combinations of terms denoting universals. These classes can be divided into two groups:

    a. defined by selection: for example, woman with green eyes, protein molecule which has undergone a process of phosphorylation, disinfected scalpel. Such classes are subclasses of extensions of given universals, and in the simplest cases they are defined through logical conjunction; they often involve features pertaining to the histories of the entities in question, and include cases where what transpired historically leaves no physical change in the entities in question;

    b. defined by combination: here the class definition comprehends members which instantiate two or more nonoverlapping universals, for example, current cost item (defined as either cash or account receivable), employee (defined as either waged employee or salaried employee). Such classes are unions of extensions of given universals, and in the simplest cases they are defined through logical disjunction;

  2. Classes defined by general terms abbreviating logical combinations of terms denoting universals with terms denoting particulars, for example, woman currently living on the north coast of Germany, male athlete born after 1980, individual in the Western Hemisphere currently infected by HIV.
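Both families of defined classes can be mirrored directly in code. In the following minimal sketch (the sample data and names are invented for illustration), extensions of universals are modeled as sets of particulars; a class defined by selection is a filtered subset (conjunction), a class defined by combination is a union (disjunction), and a class of family (2) depends on a term denoting a particular place:

# Extensions of universals, modeled as sets of particulars (toy data).
women = {"alice", "berta", "carla"}
green_eyed_things = {"berta", "dmitri"}
waged_employees = {"alice", "dmitri"}
salaried_employees = {"carla"}

# (1a) Defined by selection: conjunction carves a subclass out of
# the extension of a universal.
women_with_green_eyes = women & green_eyed_things        # {"berta"}

# (1b) Defined by combination: disjunction unites the extensions
# of two nonoverlapping universals.
employees = waged_employees | salaried_employees         # union

# (2) Combination with a term denoting a particular: membership turns
# on a specific place and time, not on general features alone.
on_north_coast_of_germany_now = {"berta"}
women_on_north_coast_now = women & on_north_coast_of_germany_now

print(women_with_green_eyes, employees, women_on_north_coast_now)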

Note that in some cases of terms involving logical combinations of terms designating universals, the resulting compound terms will themselves designate universals. The terminological practices of biology, which are our best guide as to what universals biologists are committed to ontologically, tell us that the term defined through the conjunction of the two universals eukaryote and cell itself refers to a more specific universal eukaryote cell. These terminological practices tell us also, however, that even though mammal and electron are universals, there is no universal mammalian electron—the term in question does not track scientifically interesting similarities among the entities in nature. (Thus also it would be a mistake to include this term in an ontology that is designed to support scientific reasoning.) The term “mammalian electron” might, however, be added to an ontology to support a particular application. It would then refer to a defined class falling under subfamily (1a) in our list.

Similarly, terms defined as described in (1b) pick out defined classes rather than universals because of their explicit inclusion of a disjunction as a primary feature of their definitions. A term such as “mammal or bacterium” does not designate a universal because such a term does not pick out a scientifically interesting collection of entities. Disjunctive terms of this sort, while again they may be useful for certain practical purposes, will in almost all cases refer only to what we can think of as gerrymandered classes.

To see why terms defined as described in (2) do not pick out universals, consider the expression “woman currently living on the north coast of Germany.” This refers to a particular collection of particular women in a specific location and at a specific time. Working through the criteria provided in box 1.1: it is possible to point to individuals who are instances (or better: members) of this class (1); however, the characteristics identified in this class are not open-endedly repeatable (2), do not contain only general terms (3) (“currently” and “north coast of Germany” refer to particular times, places, and countries), and surely do not figure prominently in any scientific laws or theories (4). Again, such classes are often of interest to scientists working on particular issues or problems, for example in the context of public health analysis or clinical trials involving specific subject populations. A scientist may, for example, be studying incidence of diabetes in nonsmoking juveniles in downtown Baltimore born in a given year. However, any scientific conclusions drawn on the basis of such trials will be general in nature, and will be formulated by reference to universals in the sense we have outlined in the preceding.

Universals, their extensions, and defined classes are all important for ontological purposes. However, as will be made clearer in the chapters to follow, it is essential that they be carefully distinguished, and that, in ontology construction for purposes of scientific research, primary importance be given to the accurate representation of universals.

Relations in Ontologies

The final element in our definition of an ontology is the reference to the relations holding among universals and defined classes. The general idea of a relation is familiar from common sense. A woman typing on a laptop in a Manhattan coffee shop stands in several relations to several other entities, and each one of those other entities is involved in multiple relations to further entities. She is

  • an instance of organism,

  • the daughter of a stockbroker,

  • such as to exemplify the quality of being seated,

  • supported by a chair,

  • adjacent to the counter,

  • located in Manhattan,

  • married to her spouse.

At the same time,

  • her arm is part of her body,

  • her laptop screen is part of her laptop,

  • her latte is colder than that of her neighbor,

and so on, in ever-widening circles.

Similarly, if a bridge collapses immediately after an explosion directly underneath it, then we can assert that the explosion event stands to the bridge-collapse event in the relation of cause to effect, that the explosion event occurred at a certain time, that the bridge-collapse event occurred at a certain later time, and so on.

In building ontologies, however, we are interested not only in relations of these sorts that hold among instances, but also in relations that hold between the corresponding universals, as for example between types of organisms and their typical anatomical parts, between types of events and their temporal locations, between types of events and the types of objects that participate in them, and so forth. For example, it is universally the case that every instance of mammal includes as part some instance of brain.

Assertions about such relations form a major part of scientific knowledge. It is one thing to know something about the genus feline; it is a much better thing to know also how the genus feline fits into the larger picture of living things in nature—in particular, what its relation is to other genera, to the associated genes, cells, organs, and habitats. Similarly, it is one thing to understand something about the universal hydrogen; but it is another thing to know how hydrogen is related to other elements, to the types of molecules of which it forms a part, to the behaviors of such molecules in given types of reactions, and so on.

The representation of universals in ontologies involves representation also of the relations in which these universals stand to other universals, and this fact differentiates them from terminologies, conceived as representational artifacts containing lists of lexical entries and descriptions thereof, but which do not render formally explicit the relations holding among the entities referred to by these entries.

Basic Relations

We will deal with relations at length in chapter 7, but for the moment it is useful to distinguish three different kinds of binary relations, which will play an important role in the discussions that follow:

  • relations that hold between two universals;

  • relations that hold between a universal and a particular;

  • relations that hold between two particulars.

Universal-Universal Relations

  1. The paradigm example of a relation that holds between universals is the is_a (meaning “is a subtype of”) relation, as in

protein molecule is_a molecule,

explosion event is_a event,

and so on.

The is_a relation holds among universals in virtue of the fact that universals stand in hierarchies of generality (referred to as “taxonomies,” below). For example, the hierarchy extending from the universal tiger (for example, panthera tigris tigris) through the universals panthera, feliformia, mammalia, chordata, and finally to cellular organism, living thing, and object can be understood as structured from least to most general in terms of the is_a relation, as in table 1.3. Thus, more specific (“child”) universals stand in is_a relation to more general (“parent”) universals.25

Table 1.3 Examples of the is_a relation within a taxonomic hierarchy

  • panthera tigris tigris is_a panthera

  • panthera is_a feliformia

  • feliformia is_a mammalia

  • mammalia is_a chordata

  • chordata is_a cellular organism

  • cellular organism is_a object
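Rendered in OWL, each row of table 1.3 becomes a single subclass axiom. The following Turtle sketch is illustrative only; the ex: class names stand in for properly curated ontology terms:

    @prefix ex:   <http://example.org/demo#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # is_a corresponds here to rdfs:subClassOf: each axiom places a more
    # specific ("child") class under a more general ("parent") class.
    ex:PantheraTigrisTigris rdfs:subClassOf ex:Panthera .
    ex:Panthera             rdfs:subClassOf ex:Feliformia .
    ex:Feliformia           rdfs:subClassOf ex:Mammalia .
    ex:Mammalia             rdfs:subClassOf ex:Chordata .
    ex:Chordata             rdfs:subClassOf ex:CellularOrganism .
    ex:CellularOrganism     rdfs:subClassOf ex:Object .

Because rdfs:subClassOf is transitive, a reasoner can infer from these axioms, for example, that every instance of panthera tigris tigris is also an instance of object.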


Universal-Particular Relations

  2. A paradigm example of a relation between a particular and a universal is the instantiates relation, as in Barack Obama instantiates human being, where Barack Obama is the particular flesh and blood entity living in the White House, and human being is the universal. Other examples include

  • these particular stellate cells under the microscope instantiate the universal stellate cell

  • that oak tree on the corner of Main and Elm instantiates the universal oak tree

All particulars stand in the instantiation relation to some universal—in fact, typically to several universals at different levels of generality—but universals themselves do not instantiate anything. Conversely, no particular stands in an is_a relation to any entity. Further examples of relations holding between a particular and a universal include is allergic to (as in John is allergic to penicillin), knows about, is an expert on (as in Mary is an expert on Lepidoptera), and other relations involving mental directedness.
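In OWL terms, instantiation is what links a named individual to a class. A minimal sketch, again with every ex: name a hypothetical placeholder:

    @prefix ex:  <http://example.org/demo#> .
    @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

    # rdf:type plays the role of instantiates: it relates a particular
    # (an OWL individual) to a universal (an OWL class).
    ex:barack_obama rdf:type ex:HumanBeing .
    ex:oak_tree_on_main_and_elm rdf:type ex:OakTree .

    # A particular typically instantiates several universals at
    # different levels of generality:
    ex:barack_obama rdf:type ex:Organism .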

Note that we are following here and in what follows the convention that for relational assertions involving one or more particulars the corresponding relation is picked out in bold; for assertions involving only universals we use italics. (These conventions are explained in more detail in chapter 7.)

Particular-Particular Relations

  3. A paradigm example of a relation holding between particulars is the part_of relation. For example, John’s left leg part_of John, this microtubule part_of that cytoskeleton, this transcription part_of that gene expression.
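At the level of OWL, such particular-to-particular relations become object property assertions between named individuals. A hedged sketch, with all names hypothetical:

    @prefix ex:  <http://example.org/demo#> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    # Declaring part_of transitive is a common modeling choice for
    # parthood, not a requirement.
    ex:part_of a owl:ObjectProperty, owl:TransitiveProperty .

    # part_of holding between two particulars:
    ex:johns_left_leg   ex:part_of ex:john .
    ex:this_microtubule ex:part_of ex:that_cytoskeleton .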

More will be said about relations in chapter 7. For now, what is important is that there are different kinds of relations, some of which hold among universals, and that fully understanding a given scientific domain requires knowing not only what universals exist in that domain, but also what kinds of relations hold among them.

Conclusion

By now, the import of our definition of an ontology—a representational artifact, comprising a taxonomy as proper part, whose representations are intended to designate some combination of universals, defined classes, and certain relations between them—should be clear.

Ontologies are representational artifacts in the sense of being publicly available representations of scientific information about reality. In their role as representations of science, their primary purpose is to represent general features of reality, what we have called universals, and the relations that exist between them. In addition, because defined classes are often useful in science, these too will often be represented in ontologies along with the relations obtaining among them. As we shall see in chapter 8, much current ontology work deals with ontologies as artifacts formulated using the Web Ontology Language (OWL), which allows universals and defined classes to be treated identically as “classes” in the technical sense embraced by OWL. This does not imply, however, that the special role of universals emphasized in the preceding is of no consequence for OWL ontologies. For—as we shall be arguing throughout this work—to build an ontology that is able to serve the purposes of scientific research, it is vital that the ontology be built in such a way as to represent as accurately as possible the universals in the corresponding domains of reality. This holds even when we build ontologies incorporating terms representing defined classes. For in such cases, too, the terms representing the universals used in the definitions of these classes must be included, either in the relevant ontology, or in some neighboring ontology with which it is interoperable. Only in this way can we provide both the authors and the users of the ontology with a coherent view of what its terms refer to. In chapter 2, we will discuss in this light the different kinds of ontologies, and introduce the notion of a taxonomy and the crucial role that taxonomies play in the structuring of ontologies.
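To illustrate the contrast just drawn: in OWL, a class intended to designate a universal is typically primitive, asserted with necessary conditions only, whereas a defined class carries necessary and sufficient conditions stated in terms of universals (and, here, one particular). The following is a sketch only, with every ex: name hypothetical:

    @prefix ex:   <http://example.org/demo#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # Primitive class: necessary conditions only.
    ex:Mammal rdfs:subClassOf ex:Organism .

    # Defined class: necessary and sufficient conditions. The definition
    # uses terms for universals (Organism, the inhabits relation) and for
    # a particular (manhattan); these must be available in this ontology
    # or in an interoperable neighboring ontology.
    ex:ManhattanInhabitant owl:equivalentClass [
        a owl:Class ;
        owl:intersectionOf ( ex:Organism
                             [ a owl:Restriction ;
                               owl:onProperty ex:inhabits ;
                               owl:hasValue ex:manhattan ] )
    ] .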

Further Reading on Issues of Epistemological and Ontological Realism


Armstrong, David. Universals: An Opinionated Introduction. Boulder, CO: Westview Press, 1989.

Johansson, Ingvar. Ontological Investigations: An Enquiry into the Categories of Nature, Man, and Society. New York: Routledge, 1989.

Lowe, E. J. A Survey of Metaphysics. Oxford: Oxford University Press, 2002.

Lowe, E. J. The Four-Category Ontology: A Metaphysical Foundation for Natural Science. Oxford: Oxford University Press, 2006.

Smith, Barry. “Beyond Concepts: Ontology as Reality Representation.” In Formal Ontology in Information Systems: Proceedings of the Third International Conference (FOIS 2004), ed. Achille C. Varzi and Laure Vieu, 31–42. Amsterdam: IOS Press, 2004.

Notes:

(1.) Taken from Barry Smith, Waclaw Kusnierczyk, Daniel Schober, and Werner Ceusters, “Towards a Reference Terminology for Ontology Research and Development in the Biomedical Domain,” in Proceedings of the 2nd International Workshop on Formal Biomedical Knowledge Representation (KR-MED 2006), vol. 222, ed. Olivier Bodenreider (Baltimore, MD: KR-MED Publications, 2006), 57–66, http://www.informatik.uni-trier.de/~ley/db/conf/krmed/krmed2006.html, accessed December 17, 2014.

(2.) See http://bioportal.bioontology.org/ontologies/MF, accessed August 4, 2014.

(5.) Barry Smith, Werner Ceusters, and Rita Temmerman, “Wüsteria,” Studies in Health Technology and Informatics 116 (2005): 647–652.

(6.) Christopher G. Chute, “Medical Concept Representation,” in Medical Informatics: Integrated Series in Information Systems, vol. 8, ed. H. Chen, S. S. Fuller, C. Friedman, and W. Hersh (New York: Springer, 2005), 163–182. James J. Cimino, “In Defense of the Desiderata,” Journal of Biomedical Informatics 39, no. 3 (2006): 299–306.

(7.) Stefan Schulz et al., “From Concept Representations to Ontologies: A Paradigm Shift in Health Informatics?” Healthcare Informatics Research 19, no. 4 (2013): 235–242.

(8.) See Ronald J. Brachman and Hector J. Levesque, eds., Readings in Knowledge Representation (San Francisco: Morgan Kaufmann Publishers Inc., 1985).

(9.) And also common-sense beliefs, as in J. R. Hobbs and R. C. Moore, eds., Formal Theories of the Common-Sense World (Norwood, NJ: Ablex, 1985).

(10.) G. Van Heijst, A. T. Schreiber, and B. J. Wielinga, “Using Explicit Ontologies in KBS Development,” International Journal of Human–Computer Studies 45 (1996): 183.

(11.) See his “A Translation Approach to Portable Ontologies,” Knowledge Acquisition 5, no. 2 (1992): 199–220. Note that for Gruber himself—though not for many of those who follow in his wake—conceptualizations are to be conceived not as creatures of the mind, but rather as artifacts analogous to software programs.

(12.) For the survey results and discussion, see David Bourget and David Chalmers, eds., “The PhilPapers Surveys,” PhilPapers, n.d., http://philpapers.org/surveys/, accessed August 15, 2014.

(13.) James Franklin, “Stove’s Discovery of the Worst Argument in the World,” Philosophy 77 (2002): 615–624.

(14.) “SNOMED CT,” http://www.ihtsdo.org/snomed-ct/, accessed August 4, 2014.

(15.) International Health Terminology Standards Development Organisation, SNOMED CT® Technical Reference Guide—July 2010 International Release (Washington, DC: College of American Pathologists, 2010).

(16.) “In computer science the expressions up for semantic evaluation do in fact refer very often to things inside the computer—to subroutines that can be called, to memory addresses, to data structures, etc.” Daniel Dennett, Brainchildren: Essays on Designing Minds (Cambridge, MA: MIT Press, 1998), 281.

(17.) See http://lists.hl7.org/read/messages?id=111079, accessed August 15, 2014. This message is no longer accessible at the HL7 site, but is archived here: http://hl7-watch.blogspot.com/2007/09/piece-of-good-news-has-been-posted-on.html.

(18.) “HealthRecordItem Class,” http://msdn.microsoft.com/en-us/library/microsoft.health.healthrecorditem.aspx, accessed August 4, 2014.

(20.) We use “instance” and “particular” as synonyms, the former term being used where we wish to draw out the relation of instantiation between a universal and its particular instances.

(21.) For further background, see Barry Smith and Werner Ceusters, “Ontological Realism: A Methodology for Coordinated Evolution of Scientific Ontologies,” Applied Ontology 5, nos. 3–4 (2010): 139–188.

(23.) A somewhat less radical version of this view, called “resemblance nominalism,” holds that some things are in some sense objectively similar to other things and thereby form circles of similars, with which our general concepts or general words are associated. See, for example, G. Rodriguez-Pereyra, Resemblance Nominalism: A Solution to the Problem of Universals (Oxford: Clarendon Press, 2002).

(24.) See Barry Smith and Werner Ceusters, “Strategies for Referent Tracking in Electronic Health Records,” Journal of Biomedical Informatics 39, no. 3 (June 2006): 362–378.

(25.) For purposes of illustration we here treat biological species as though they are themselves universals in the sense described in the text. This is one view of the matter accepted by many biologists. However in contemporary philosophy of biology species are often viewed as complex particular entities consisting of whole current and historical populations. For our purposes here it is not necessary to take a stand on the matter. For a thorough overview of the issues, see Marc Ereshefsky, “Species,” The Stanford Encyclopedia of Philosophy (Spring 2010 edition), ed. Edward N. Zalta, http://plato.stanford.edu/archives/spr2010/entries/species/, accessed August 5, 2014.