Actual Causality

Joseph Y. Halpern

Print publication date: 2016

Print ISBN-13: 9780262035026

Published to MIT Press Scholarship Online: May 2017

DOI: 10.7551/mitpress/9780262035026.001.0001


Applying the Definitions

Chapter:
Chapter 8 Applying the Definitions
Source:
Actual Causality
Author(s):

Joseph Y. Halpern

Publisher:
The MIT Press
DOI: 10.7551/mitpress/9780262035026.003.0008

Abstract and Keywords

The book concludes with a summary of the key points made and a discussion of three application areas from computer science: causality in databases, causality in program verification, and the use of causality in accountability (e.g., determining the cause of a major data breach).

Keywords:   Accountability, Causality in databases, Causality in program verification

Throughout history, people have studied pure science from a desire to understand the universe rather than practical applications for commercial gain. But their discoveries later turned out to have great practical benefits.

Stephen Hawking

In Chapter 1, I said that my goal was to get definitions of notions like causality, responsibility, and blame that matched our natural language usage of these words and were also useful. Now that we are rapidly approaching the end of the book, it seems reasonable to ask where we stand in this project.

First some good news: I hope that I have convinced most of you that the basic approach of using structural equations and defining causality in terms of counterfactuals can deal with many examples, especially once we bring normality into the picture. Moreover, the framework also allows us to define natural notions of responsibility, blame, and explanation. These notions do seem to capture many of the intuitions that people have in a natural way.

Next, some not-so-good news: One thing that has become clear to me in the course of writing this book is that things haven’t stabilized as much as I had hoped they would. Just in the process of writing the book, I developed a new definition of causality (the modified HP definition) and a new approach to dealing with normality (discussed in Section 3.5), and modified the definitions of responsibility, blame, and explanation (see the notes at the end of Chapters 6 and 7). I think that the latest definitions hold up quite well, particularly the modified HP definition combined with the alternative notion of normality. But given that the modified HP definition is my third attempt at defining causality, and that other researchers continue to introduce new definitions, I certainly cannot be too confident that this is really the last word. Indeed, as I have indicated at various points in the book, there are some subtleties that the current definition does not seem to be capturing quite so well. To my mind, what is most needed is a good definition of agent-relative normality, one that takes into account the issues discussed in Example 6.3.4 (where A and B can flip switches to determine whether C is shocked) and meshes well with the definition of causality; doubtless others will have different concerns.

Moreover, when we try to verify experimentally the extent to which the definitions that we give actually measure how people ascribe causality and responsibility, the data become messy. Although the considerations discussed in Chapter 6 (pivotality, normality, and blame assignments) seem to do quite a good job of predicting how people will ascribe causal responsibility at a qualitative level, all these factors (and perhaps others) affect people’s causality and responsibility judgments, so it seems that it will be hard to design a clean theory that completely characterizes exactly how people ascribe causality, responsibility, and blame at a more quantitative level.

So where does this leave us? I do not believe that there is one “true” definition of causality. We use the word in many different, but related, ways. It is unreasonable to expect one definition to capture them all. Moreover, there are a number of closely related notions—causality, blame, responsibility, intention—that clearly are often confounded. Although we can try to disentangle them at a theoretical level, people clearly do not always do so.

That said, I believe that it is important and useful to have precise formal definitions. To take an obvious example, legal judgments depend on causal judgments. A jury will award a large settlement to a patient if it believes that the patient’s doctor was responsible for an inappropriate outcome. Although we might disagree about whether, and to what extent, the doctor was responsible, the disagreement should be due to the relevant facts in the case, not a disagreement about what causality and responsibility mean. Even if people confound notions like causality and responsibility, it is useful to get definitions that distinguish them (and perhaps other notions as well), so we can be clear about what we are discussing.

Although the definition(s) may not be able to handle all the subtleties, that is not necessary for them to be useful. I have discussed a number of different approaches here, and the jury is still out on which is best. The fact that the approaches all give the same answers in quite a few cases makes me even more optimistic about the general project.

I conclude the book with a brief discussion of three applications of causality related to computer science. These have the advantage that we have a relatively clear idea of what counts as a “good” definition. The modeling problem is also somewhat simpler; it is relatively clear exactly what the exogenous and endogenous variables should be, and what their ranges are, so some of the modeling issues encountered in Chapter 4 no longer arise.

8.1 Accountability

The dominant approach to dealing with information security has been preventative: we try to prevent someone from accessing confidential data or connecting to a private network. More recently, there has been a focus on accountability: when a problem occurs, it should be possible after the fact to determine the cause of the problem (and then punish the perpetrators appropriately). Of course, dealing with accountability requires that we have good notions of causality and responsibility.

The typical assumption in the accountability setting, at least for computer science applications, is that we have logs that completely describe exactly what happened; there is no uncertainty about what happened. Moreover, in the background, there is a programming language. An adversary writes a program in some pre-specified programming language that interacts with the system. We typically understand the effect of changing various lines of code. The variables in the program can be taken to be the endogenous variables in the causal model. The programs used and the initial values of program variables are determined by the context. Changing some lines of code becomes an intervention. The semantics of the program in the language determines the causal model and the effect of interventions.
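
To make the picture concrete, here is a minimal sketch in Python (entirely illustrative; the program, its variables, and the patch are hypothetical) of how a program induces a causal model: the context supplies the inputs, the program text supplies the structural equations, and editing a line of code is an intervention.

```python
# A minimal sketch of viewing a two-line program as a causal model:
# the context fixes the inputs, the program's semantics give the
# structural equations, and replacing a line is an intervention.

def run(u, patch=None):
    """Execute the 'program' in context u; patch optionally replaces line 2."""
    x = u["x"]                            # exogenous: input supplied by the context
    y = x + 1                             # line 1: structural equation for Y
    z = patch(x, y) if patch else y * 2   # line 2: equation for Z (the line we intervene on)
    return {"X": x, "Y": y, "Z": z}

baseline = run({"x": 3})                        # actual execution: Z = 8
patched  = run({"x": 3}, patch=lambda x, y: y)  # intervene on line 2: Z = 4

# Z's value depends counterfactually on line 2, so in this context
# line 2 as written is a cause of Z = 8 in the HP sense.
print(baseline["Z"], patched["Z"])
```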

The HP definitions of causality and responsibility fit this application quite well. Of course, if some logs are missing, we can then move to a setting where there is an epistemic state, with a probability on contexts; the contexts of interest are the ones that would generate the portion of the log that is seen. If we are interested in a setting where there are several agents working together to produce some outcome, blame becomes even more relevant. Agents may have (possibly incorrect) beliefs about the programs that other agents are using. Issues of which epistemic state to use (the one that the agent actually had or the one that the agent should have had) become relevant. But the general framework of structural equations and the definitions provided here should work well.

8.2 Causality in Databases

A database is viewed as consisting of a large collection of tuples. A typical tuple may, for example, give an employee’s name, manager, gender, address, social security number, salary, and job title. A standard query in a database may have the form: tell me all employees who are programmer analysts, earn a salary of less than $100,000, and work for a manager earning a salary greater than $150,000. Causality in databases aims to answer the question: Given a query and a particular output of the query, which tuples in the database caused that output? This is of interest because we may be, for example, interested in explaining unexpected answers to a query. Tim Burton is a director well known for fantasy movies involving dark Gothic themes, such as “Edward Scissorhands” and “Beetlejuice”. A user wishing to learn more about his work might query the IMDB movie database (www.imdb.com) about the genres of movies that someone named Burton directed. He might be surprised (and perhaps a little suspicious of the answer) when he learns that one genre is “Music and Musical”. He would then want to know which tuple (i.e., essentially, which movie) caused that output. He could then check whether there is an error in the database, or perhaps other directors named Burton who directed musicals. It turns out that Tim Burton did direct a musical (“Sweeney Todd”); there are also two other directors named Burton (David Burton and Humphrey Burton) who directed other musicals.

Another application of the causality definition is for diagnosing network failures. Consider a computer network consisting of a number of servers, with physical links between them. Suppose that the database contains tuples of the form (Link(x, y), Active(x), Active(y)). Given a server x, Active(x) is either 0 or 1, depending on whether x is active; similarly, Link(x, y) is either 0 or 1, depending on whether the link between x and y is up. Servers x and y are connected if there is a path of links between them, all of which are up, such that all servers on the path are active. Suppose that at some point the network administrator observes that servers x0 and y0 are no longer connected. This means that the query Connected(x0, y0) will return “false”. The causality query “Why did Connected(x0, y0) return ‘false’?” will return all the tuples (Link(x, y), Active(x), Active(y)) that are on some path from x0 to y0 such that either server x is not active, server y is not active, or the link between them is down. These are the potential causes of Connected(x0, y0) returning “false”.
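
A small sketch of this diagnosis query, under the simplifying assumption of a three-server network with hypothetical link and activity data, might look as follows.

```python
# A small sketch of the network-diagnosis query. Tuples record links and
# server status; Connected(x0, y0) holds iff some path of up links through
# active servers joins x0 and y0.
from collections import deque

links = {("a", "b"): 1, ("b", "c"): 0, ("a", "c"): 0}   # Link(x, y) values
active = {"a": 1, "b": 1, "c": 1}                       # Active(x) values

def connected(x0, y0):
    seen, frontier = {x0}, deque([x0])
    while frontier:
        x = frontier.popleft()
        if x == y0:
            return True
        for (u, v), up in links.items():
            for nxt in ((v,) if u == x else (u,) if v == x else ()):
                if up and active[x] and active[nxt] and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False

# Potential causes of Connected("a", "c") = false: tuples on some a-c path
# whose link is down or whose endpoint is inactive (here, both links into c).
print(connected("a", "c"))   # False
```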

I now briefly sketch how this is captured formally. As I said, a database D is taken to be a collection of tuples. For each tuple t, there is a binary variable Xt, which is 0 if the tuple t is not in the database of interest and 1 if it is. We are interested in causal models where the context determines the database. That is, the context determines which variables Xt are 1 and which are 0; there are no interesting structural equations. There is assumed to be a query language LQ that is richer than the language L(S) introduced in Section 2.2.1. Each formula φ ∈ LQ has a free variable x that ranges over tuples. There is a relation ⊨ such that, for each tuple t and formula φ ∈ LQ, either D ⊨ φ(t) or D ⊨ ¬φ(t) (but not both). If D ⊨ φ(t), then t is said to satisfy the query φ in database D. In response to a query φ, a database D returns all the tuples t such that D ⊨ φ(t).

We can easily build a causal language based on LQ in the spirit of L(S). That is, instead of starting with Boolean combinations of formulas of the form X = x, we start with formulas of the form φ(t), where φ ∈ LQ and t is a tuple, in addition to formulas of the form Xt = i for i ∈ {0, 1}. If M is a causal model for databases, then since each causal setting (M, u) determines a database, (M, u) ⊨ φ(t) exactly if D ⊨ φ(t), where D is the database determined by (M, u), and (M, u) ⊨ Xt = 1 exactly if t ∈ D. We can then define [X ← x]φ just as in Section 2.2.1.

Of course, for causality queries to be truly useful, it must be possible to compute them efficiently. To do this, a slight variant of the updated HP definition has been considered. Say that tuple t is a cause of the answer tʹ for φ‎ if

D1. (M, u) ⊨ (Xt = 1) ∧ φ(tʹ) (so t ∈ D and D ⊨ φ(tʹ), where D is the database determined by (M, u)); and

D2. there exists a (possibly empty) set T of tuples contained in D such that t ∉ T, D − T ⊨ φ(tʹ), and D − (T ∪ {t}) ⊨ ¬φ(tʹ).

If the tuple t is a cause of the answer tʹ for φ in (M, u), then Xt = 1 is a cause of φ(tʹ) in (M, u). D1 is just AC1. Translating to the kinds of formulas that we have been using all along, and taking XT to consist of all the variables Xtʹʹ such that tʹʹ ∈ T, the second clause says: (M, u) ⊨ [XT ← 0]φ(tʹ) and (M, u) ⊨ [XT ← 0, Xt ← 0]¬φ(tʹ). Thus, it is saying that (XT, 0, 0) is a witness to the fact that Xt = 1 is a cause of φ(tʹ) according to the original HP definition; AC2(bo) holds. Note that AC3 is satisfied vacuously.

This does not necessarily make Xt = 1 a cause of φ(tʹ) according to the updated HP definition. There may be a subset Tʹ of T such that (M, u) ⊨ [XTʹ ← 0]¬φ(tʹ), in which case AC2(bu) would fail. Things are better if we restrict to monotone queries, that is, queries ψ such that for all tʹ, if D ⊨ ψ(tʹ) and D ⊆ Dʹ, then Dʹ ⊨ ψ(tʹ) (in words, if ψ(tʹ) holds for a small database, then it is guaranteed to hold for larger databases, with more tuples). Monotone queries arise often in practice, so proving results for monotone queries is of practical interest. If φ is a monotone query, then AC2(bu) holds as well: if (M, u) ⊨ [XT ← 0]φ(tʹ), then for all subsets Tʹ of T, we must have (M, u) ⊨ [XTʹ ← 0]φ(tʹ) (because (M, u) ⊨ [XTʹ ← 0]φ(tʹ) iff D − Tʹ ⊨ φ(tʹ), and if Tʹ ⊆ T, then D − T ⊆ D − Tʹ, so we can appeal to monotonicity). It is also easy to see that for a monotone query, this definition also guarantees that Xt = 1 is part of a cause of φ(tʹ) according to the modified HP definition.
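
The following brute-force sketch illustrates D1 and D2 directly (it is exponential in the size of the database, so it is purely for intuition; the point of the results mentioned later in this section is that conjunctive queries admit polynomial-time algorithms). Here φ(tʹ) is simplified to a Boolean predicate on databases, with the answer tʹ fixed, and the database and query are hypothetical.

```python
# Brute-force check of D1/D2: t is in D, phi holds of D, and for some
# contingency set T ⊆ D − {t}, phi survives removing T but fails once
# t is removed as well.
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_cause(t, phi, D):
    if t not in D or not phi(D):                  # D1
        return False
    for T in subsets(D - {t}):                    # D2: search for a contingency set
        if phi(D - set(T)) and not phi(D - set(T) - {t}):
            return True
    return False

# Hypothetical usage: phi stands for "some tuple tagged 'Musical' is in D"
# (a monotone query).
D = {("Sweeney Todd", "Musical"), ("Beetlejuice", "Fantasy")}
phi = lambda db: any(genre == "Musical" for _, genre in db)
print(is_cause(("Sweeney Todd", "Musical"), phi, D))   # True
```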

The second clause has some of the flavor of responsibility; we are finding a set such that removing this set from D results in t being critical for φ(tʹ). Indeed, we can define a notion of responsibility in databases by saying that tuple t has degree of responsibility 1/(k + 1) for the answer tʹ for φ if k is the size of the smallest set T for which the definition above holds.
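
Continuing the sketch above (reusing its subsets helper and example query), degree of responsibility can be computed by minimizing over the contingency sets T that witness D2.

```python
# Degree of responsibility is 1/(k + 1), where k is the size of the
# smallest contingency set T witnessing D2; 0 if t is not a cause.
def responsibility(t, phi, D):
    if t not in D or not phi(D):
        return 0.0
    best = None
    for T in subsets(D - {t}):            # subsets() as defined earlier
        if phi(D - set(T)) and not phi(D - set(T) - {t}):
            best = len(T) if best is None else min(best, len(T))
    return 0.0 if best is None else 1.0 / (best + 1)

print(responsibility(("Sweeney Todd", "Musical"), phi, D))   # 1.0: T = {} suffices
```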

Notice that I said “if t is a cause of the answer tʹ for φ in (M, u), then Xt = 1 is a cause of φ(tʹ) in (M, u)”; I did not say “if and only if”. The converse is not necessarily true. The only witnesses (W, w, 0) that this definition allows are ones where w = 0. That is, in determining whether the counterfactual clause holds, we can consider only witnesses that involve removing tuples from the database, not witnesses that might add tuples to the database. This can be captured in terms of a normality ordering that makes it abnormal to add tuples to the database but not to remove them. However, the motivation for this restriction is not normality as discussed in Chapter 3, but computational. For causality queries to be of interest, it must be possible to compute the answers quickly, even in huge databases, with millions of tuples. This restriction makes the computation problem much simpler. Indeed, it is known that for conjunctive queries (i.e., queries that can be written as conjunctions of basic queries about components of tuples in databases; I omit the formal definitions here), computing whether t is an actual cause can be done in time polynomial in the size of the database, which makes it quite feasible. Importantly, conjunctive queries (which are guaranteed to be monotone) arise frequently in practice.

8.3 Program Verification

In model checking, the goal is to check whether a program satisfies a certain specification. When a model checker says that a program does not satisfy its specification, it typically provides in addition a counterexample that demonstrates the failure of the specification. Such a counterexample can be quite useful in helping the programmer understand what went wrong and in correcting the program. In many cases, however, understanding the counterexample can be quite challenging.

The problem is that counterexamples can be rather complicated objects. Specifications typically talk about program behavior over time, saying things like “eventually φ will happen”, “ψ will never happen”, and “property φʹ will hold at least until ψʹ does”. A counterexample is a path of the program; that is, roughly speaking, a possibly infinite sequence of tuples (x1, … , xn), where each tuple describes the values of the variables X1, … , Xn. Trying to understand from the path why the program did not satisfy its specification may not be so simple. As I now show, having a formal notion of causality can help in this regard.

Formally, we associate with each program a Kripke structure, a graph with nodes and directed edges. Each node is labeled with a possible state of the program. If X1, … , Xn are the variables in the program, then a program state just describes the values of these variables. For simplicity in this discussion, I assume that X1, … , Xn are all binary variables, although it is straightforward to extend the approach to non-binary variables (as long as they have finite range). The edges in the Kripke structure describe possible transitions of the program. That is, there is an edge from node v1 to node v2 if the program can make a transition from the program state labeling v1 to that labeling v2. The programs we are interested in are typically concurrent programs, which are best thought of as a collection of possibly communicating programs, each of which is run by a different agent in the system. (We can think of the order of moves as being determined exogenously.) This means that there may be several possible transitions from a given node. For example, at a node where X and Y both have value 0, one agent’s program may flip the value of X and another agent’s program may flip the value of Y. If these are the only variables, then there would be a transition from the node labeled (0, 0) to nodes labeled (1, 0) and (0, 1).
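
As a tiny illustration, here is the two-variable Kripke structure just described, encoded in Python (the encoding is mine, not from the text): each node is a program state (x, y), and each transition flips exactly one variable.

```python
# Nodes are program states (x, y); one agent can flip X, another can flip Y,
# so each node has two outgoing transitions.
nodes = {(0, 0), (1, 0), (0, 1), (1, 1)}
edges = {(x, y): {(1 - x, y), (x, 1 - y)}   # flip X, or flip Y
         for (x, y) in nodes}

print(edges[(0, 0)])   # {(1, 0), (0, 1)}: the two transitions in the text
```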

A path π = (s0, s1, s2, …) in M is a sequence of states such that each state sj is associated with a node in M and for all i, there is a directed edge from the node associated with si to the node associated with si+1. Although the use of the word “state” here is deliberately intended to suggest “program state”, technically they are different. Although two distinct states si and sj on a path might be associated with the same node (and thus the same program state), it is useful to think of them as being different so that we can talk about different occurrences of the same program state on a path. That said, I often blur the distinction between state and program state.

Kripke structures have frequently been used as models of modal logics, logics that extend propositional or first-order logic by including modal operators such as operators for belief or knowledge. For example, in a modal logic of knowledge, we can make statements like “Alice knows that Bob knows that p is true”. Temporal logic, which includes operators like “eventually”, “always”, and “until”, has been found to be a useful tool for specifying properties of programs. For example, the temporal logic formula □X says that X is always true (i.e., □X is true of a path π if X = 1 at every state in π). All the specifications mentioned above (“eventually φ will happen”, “ψ will never happen”, and “property φʹ will hold at least until ψʹ does”) can be described in temporal logic and evaluated on the paths of a Kripke structure. That is, there is a relation ⊨ such that, given a Kripke structure M, a path π, and a temporal logic formula φ, either (M, π) ⊨ φ or (M, π) ⊨ ¬φ, but not both. The details of temporal logic and the ⊨ relation are not necessary for the following discussion. All that we need is that (M, (s0, s1, s2, …)) ⊨ □φ, where φ is a propositional formula, if φ is true at each state si.
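
For what follows, the only semantic clause we need is the one for □, which a short sketch can pin down (with the caveat that real paths may be infinite; the sketch evaluates finite paths only, and the state representation is hypothetical).

```python
# (M, π) ⊨ □φ, for propositional φ, iff φ holds at every state of π.
def always(holds, path):
    """holds(s) evaluates the propositional formula φ at state s."""
    return all(holds(s) for s in path)

print(always(lambda s: s["X"] == 1, [{"X": 1}, {"X": 1}]))   # True
print(always(lambda s: s["X"] == 1, [{"X": 1}, {"X": 0}]))   # False
```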

A model checker may show that a program characterized by a Kripke structure M fails to satisfy a particular specification φ‎ by returning (a compact description of) a possibly infinite path π‎ in M that satisfies ¬φ‎. The path π‎ provides a counterexample to M satisfying φ‎. As I said, although having such a counterexample is helpful, a programmer may want a deeper understanding of why the program failed to satisfy the specification. This is where causality comes in.

Beer, Ben-David, Chockler, Orni, and Trefler give a definition of causality in Kripke structures in the spirit of the HP definition. Specifically, they define what it means for the fact that X = x in the state si on a path π to be a cause of the temporal logic formula φ not holding in path π. (As an aside, rather than considering the value of the variable X in state s, for each state s and variable X in the program, we could have a variable Xs, and just ask whether Xs = x is a cause of φ. Conceptually, it seems more useful to think of the same variable as having different values in different states, rather than having many different variables Xs.) As with the notion of causality in databases, the language for describing what can be caused is richer than the language L(S), but the basic intuitions behind the definition of causality are unchanged.

To formalize things, I need a few more definitions. Since we are dealing with binary variables, in a temporal logic formula, I write X rather than X = 1 and ¬X rather than X = 0. For example, I write X ∧ ¬Y rather than X = 1 ∧ Y = 0. Define the polarity of an occurrence of a variable X in a formula φ to be positive if X occurs in the scope of an even number of negations and negative if X occurs in the scope of an odd number of negations. Thus, for example, the first occurrence of X in a temporal formula such as ¬□(¬X ∧ Y) ∨ ¬X has positive polarity, the only occurrence of Y has negative polarity, and the second occurrence of X has negative polarity. (The temporal operators are ignored when computing the polarity of a variable.)

A pair (s, X) consisting of a state s and a variable X is potentially helpful for φ‎ if either there exists at least one occurrence of X in φ‎ that has positive polarity and X = 0 in s or there exists at least one occurrence of X in φ‎ that has negative polarity and X = 1 in s. It is not hard to show that if (s, X) is not potentially helpful for φ‎, then changing the value of X in s cannot change the truth value of φ‎ from false to true in π‎. (This claim is stated more formally below.)
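
Both definitions are easy to operationalize. The following Python sketch (the encoding of formulas as nested tuples is my own) computes polarities and the potentially-helpful check, and reproduces the polarity example above.

```python
# Polarity and "potentially helpful", for formulas built from variables,
# ¬, ∧, ∨, and □ (temporal operators don't affect polarity).
# Formulas are nested tuples, e.g. ('not', ('var', 'X')).

def polarities(phi, neg=False, acc=None):
    """Map each variable to the set of polarities of its occurrences."""
    acc = {} if acc is None else acc
    op = phi[0]
    if op == "var":
        acc.setdefault(phi[1], set()).add("neg" if neg else "pos")
    elif op == "not":
        polarities(phi[1], not neg, acc)
    elif op == "box":                      # temporal operator: ignored
        polarities(phi[1], neg, acc)
    else:                                  # 'and' / 'or'
        polarities(phi[1], neg, acc)
        polarities(phi[2], neg, acc)
    return acc

def potentially_helpful(s, X, phi):
    """(s, X) can help make phi true: a positive occurrence with X = 0
    in s, or a negative occurrence with X = 1 in s."""
    pols = polarities(phi).get(X, set())
    return ("pos" in pols and s[X] == 0) or ("neg" in pols and s[X] == 1)

# The text's example: ¬□(¬X ∧ Y) ∨ ¬X
phi = ("or",
       ("not", ("box", ("and", ("not", ("var", "X")), ("var", "Y")))),
       ("not", ("var", "X")))
print(polarities(phi))                          # X: both polarities, Y: negative
print(potentially_helpful({"X": 0}, "X", phi))  # True: positive occurrence, X = 0
```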

Given distinct pairs (s1, X1), …, (sk, Xk) and a path π, let π[(s1, X1), …, (sk, Xk)] be the path that is just like π except that the value of Xi in state si is flipped, for i = 1, … , k. (We can think of π[(s1, X1), …, (sk, Xk)] as the result of intervening on π so as to flip the value of Xi in state si, for i = 1, … , k.) Thus, the previous observation says that if (s, X) is not potentially helpful for φ and (M, π) ⊨ ¬φ, then (M, π[(s, X)]) ⊨ ¬φ.

Suppose that π‎ is a counterexample to φ‎. A finite prefix ρ‎ of π‎ is said to be a counterexample to φ‎ if (M,π‎ʹ) ⊨ ¬φ‎ for all paths π‎ʹ that extend ρ‎. Note that even if π‎ is a counterexample to φ‎, there may not be any finite prefix of π‎ that is a counterexample to φ‎. For example, if φ‎ is the formula ¬□(X = 0), which says that X is not always 0, so eventually X = 1, a counterexample π‎ is a path such that X = 0 at every state on the path. However, for every finite prefix ρ‎ of π‎, there is an extension π‎ʹ of ρ‎ that includes a state where X = 1, so ρ‎ is not a counterexample.

A prefix ρ‎ of π‎ is said to be a minimal counterexample to φ‎ if it is a counterexample to φ‎ and there is no shorter prefix that is a counterexample to φ‎. If no finite prefix of π‎ is a counterexample to φ‎, then π‎ itself is a minimal counterexample to φ‎.

We are finally ready for the definition of causality.

Definition 8.3.1

If π‎ is a counterexample to φ‎, then (s, X) (intuitively, the value of X in state s) is a cause of the first failure of φ‎ on π‎ if

Co1. there is a prefix ρ‎ of π‎ (which could be π‎ itself) such that ρ‎ is a minimal counterexample to φ‎ and s is a state on ρ‎; and

Co2. there exist distinct pairs (s1, X1), …, (sk, Xk) potentially helpful for φ such that (M, ρ[(s1, X1), …, (sk, Xk)]) ⊨ ¬φ and (M, ρ[(s, X), (s1, X1), …, (sk, Xk)]) ⊨ φ.

Condition Co2 in Definition 8.3.1 is essentially a combination of AC2(a) and AC2(bo), if we assume that the values of all variables are determined exogenously. Clearly the analogue of AC2(a) holds: changing the value of X in state s flips φ from false to true, under the contingency that the value of Xi is flipped in state si, for i = 1, … , k. Moreover, AC2(bo) holds, since leaving X at its value in state s while flipping X1, … , Xk keeps φ false.

To get an analogue of AC2(bu), we need a further requirement:

Co3. (M, ρ[(sj1, Xj1), …, (sjm, Xjm)]) ⊨ ¬φ if {Xj1, …, Xjm} ⊆ {X1, …, Xk}.

Finally, to get an analogue of AC2(am), we need yet another condition:

Co4. (M, ρ[(s, X), (sj1, Xj1), …, (sjm, Xjm)]) ⊨ ¬φ if {Xj1, …, Xjm} ⊊ {X1, …, Xk}.

I use strict subset in Co4 since Co2 requires that (M, ρ[(s, X), (s1, X1), …, (sk, Xk)]) ⊨ φ. If Co2–Co4 hold, then the value of X in state s and the values of Xi in the states si together form a cause of the failure of φ according to the modified HP definition, and the value of X is part of a cause of the failure of φ.

Condition Co1 in Definition 8.3.1 has no analogue in the HP definition of actual causality (in part, because there is no notion of time in the basic HP framework). It was added here because knowing about the first failure of φ seems to be of particular interest to programmers. Also, focusing on one failure reduces the set of causes, which makes it easier for programmers to understand an explanation. Of course, the definition would make perfect sense without this clause.

Example 8.3.2

Suppose that (the program state labeling) node v satisfies X = 1 and node vʹ satisfies X = 0. Assume that v and vʹ are completely connected in the Kripke structure: there is an edge from v to itself, from v to vʹ, from vʹ to itself, and from vʹ to v. Consider the path π = (s0, s1, s2, …), where all states on the path are associated with v except s2, which is associated with vʹ. Clearly (M, π) ⊨ ¬□X. Indeed, if ρ = (s0, s1, s2), then □X already fails in ρ. Note that (s2, X) is potentially helpful for □X (since X has positive polarity in □X and X = 0 in s2). It is easy to check that (s2, X) is a cause of the first failure of □X in π.

Now consider the formula ¬□X. The path πʹ = (sʹ0, sʹ1, sʹ2, …), where each state sʹj is associated with v, is a counterexample to ¬□X, but no finite prefix of πʹ is a counterexample; it is always possible to extend a finite prefix so as to make ¬□X true (by making X = 0 at some state in the extension). The value of X in each of the states on πʹ is a cause of ¬□X failing.
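
To see the definition in action, here is a brute-force check of the first scenario in Example 8.3.2, specialized to φ = □X on finite prefixes (the encoding of states and paths is hypothetical, states are identified by their position on the path, and the search over contingencies is exponential, so this is for intuition only).

```python
# Brute-force sketch of Definition 8.3.1 for the special case φ = □X.
from itertools import chain, combinations

def subsets(pairs):
    return chain.from_iterable(combinations(pairs, r)
                               for r in range(len(pairs) + 1))

def flip(rho, pairs):
    """rho[(s1, X1), ..., (sk, Xk)]: flip Xi in state si."""
    rho = [dict(s) for s in rho]
    for i, X in pairs:
        rho[i][X] = 1 - rho[i][X]
    return rho

def always_x_fails(rho):
    """A finite prefix is a counterexample to □X iff it has an X = 0 state."""
    return any(s["X"] == 0 for s in rho)

def causes_of_first_failure(pi):
    j = next(i for i, s in enumerate(pi) if s["X"] == 0)
    rho = pi[:j + 1]                      # Co1: minimal counterexample prefix
    helpful = [(i, "X") for i, s in enumerate(rho) if s["X"] == 0]
    causes = []
    for (i, X) in helpful:
        others = [p for p in helpful if p != (i, X)]
        for T in subsets(others):         # Co2: search for a contingency
            if always_x_fails(flip(rho, T)) and \
               not always_x_fails(flip(rho, list(T) + [(i, X)])):
                causes.append((i, X))
                break
    return causes

pi = [{"X": 1}, {"X": 1}, {"X": 0}, {"X": 1}]   # s2 is the only X = 0 state
print(causes_of_first_failure(pi))               # [(2, 'X')], as in the text
```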

Beer, Ben-David, Chockler, Orni, and Trefler built a tool that implemented these ideas; they report that programmers found it quite helpful.

Just as in the database case, the complexity of computing causality becomes significant in this application. In general, computing causality in counterexamples is NP-complete. However, given a finite path ρ and a specification φ that fails in all paths extending ρ, there is an algorithm, running in time polynomial in the length of ρ and of φ (viewed as a string of symbols), that produces a superset of the causes of φ failing in ρ. Moreover, in many cases of practical interest, the set produced by the algorithm is precisely the set of causes. This algorithm is actually the one implemented in the tool.

Causality also turns out to be useful even if a program does satisfy its specification. In that case, a model checker typically provides no further information. The implicit assumption was that if the model checker says that a program satisfies its specification, then programmers are happy to leave well enough alone; they feel no need to further analyze the program. However, there has been growing awareness that further analysis may be necessary even if a model checker reports that a program satisfies its specification. The concern is that the reason that the program satisfies its specification is due to an error in the specification itself (the specification may not cover an important case). One way to minimize this concern is to try to understand what causes the program to satisfy its specification. And one useful way of formalizing this turns out to be to examine the degree of responsibility that the value of variable X in state s (i.e., a pair (s, X), in the notation of the earlier discussion) has for the program satisfying the specification. If there are states s such that (s, X) has a low degree of responsibility for a specification φ for all variables X, this suggests that s is redundant. Similarly, if there is a variable X such that (s, X) has a low degree of responsibility for φ for all states s, this suggests that X is redundant. In either case, we have a hint that there may be a problem with the specification: The programmer felt that state s (or variable X) was important, but it is not carrying its weight in terms of satisfying the specification. Understanding why this is the case may uncover a problem.

Again, while the general problem of computing degree of responsibility in this context is hard, there are tractable special cases of interest. For this approach to be considered worthwhile, these tractable cases need to be identified and shown to be relevant in practice.

8.4 Last Words

These examples have a number of features that are worth stressing.

  • The language used is determined by the problem in a straightforward way, as is the causal model. That means that many of the subtle modeling issues discussed in Chapter 4 do not apply. Moreover, adding variables so as to convert a cause according to the updated HP definition to a cause according to the original HP definition, as in Section 4.3, seems less appropriate. There is also a clear notion of normality when it comes to accountability (following the prescribed program) and model checking (satisfying the specification). In the database context, normality plays a smaller role, although, as I noted earlier, the restriction on the witnesses considered can be viewed as saying that it is abnormal to add tuples to the database. More generally, it may make sense to view removing one tuple from the database as less normal than removing a different tuple.

  • In Chapter 5, I raised computational concerns on the grounds that if causality is hard to compute and structural-equations models are hard to represent, then it seems implausible that this is how people evaluate causality. The examples that people work with are typically quite small. In the applications discussed in the chapter, the causal models may have thousands of variables (or even millions, in the case of databases). Now computational concerns are a major issue. Programs provide a compact way to represent many structural equations, so they are useful not only conceptually, but to deal with concerns regarding compact representations. And, as I have already observed, finding special cases that are of interest and easy to compute becomes more important.

  • It is certainly important in all these examples that responses to queries regarding causality agree with people’s intuitions. Fortunately, for all the examples considered in the literature, they do. The philosophy literature on causality has focused on (arguably, obsessed over) finding counterexamples to various proposed definitions; a definition is judged on how well it handles various examples. The focus of the applications papers is quite different. Finding some examples where the definitions do not give intuitive answers in these applications is not necessarily a major problem; the definitions can still be quite useful. In the database and program verification applications, utility is the major criterion for judging the approach. Moreover, even if a database system provides unintuitive answers, there is always the possibility of posing further queries to help clarify what is going on. Of course, in the case of accountability, if legal judgments are going to be based on what the system says, then we must be careful. But in many cases, having a transparent, well-understood definition that gives predictable answers that mainly match intuitions is far better than the current approach, where we use poorly defined notions of causality and get inconsistent treatment in courts.

The applications in this chapter illustrate just how useful a good account of causality can be. Although the HP definition needed to be tweaked somewhat in these applications (both because of the richer languages involved and to decrease the complexity of determining causality), these examples, as well as the many other examples in the book, make me confident that the HP definition, augmented by a normality ordering if necessary, provides a powerful basis for a useful account of causality. Clearly more work can and should be done on refining and extending the definitions of causality, responsibility, and explanation. But I suspect that looking at the applications will suggest further research questions quite different from the traditional questions that researchers have focused on thus far. I fully expect that the coming years will give us further applications and a deeper understanding of the issues involved.

Notes

Lombrozo [2010] provides further evidence regarding the plethora of factors that affect causality ascriptions.

Feigenbaum, Jaggard, and Wright [2011] point out the role of causality in accountability ascriptions and suggest using the HP definition. Datta et al. [2015] explore the role of causality in accountability in greater detail. Although their approach to causality involves counterfactuals, there are some significant differences from the HP definition. Among other things, they use sufficient causality rather than what I have been calling “actual causality”.

Much of the material on the use of causality in databases is taken from the work of Meliou, Gatterbauer, Halpern, Koch, Moore, and Suciu [2010]. The complexity results for queries mentioned in the discussion (as well as other related results) are proved by Meliou, Gatterbauer, Moore, and Suciu [2010].

As I said in Section 8.3, Beer et al. [2012] examined the use of causality in explaining counterexamples to the specification of programs. The causality computation was implemented in a counterexample explanation tool for the IBM hardware model checker RuleBase [Beer et al. 2012]. Programmers found the tool tremendously helpful. In fact, when it was temporarily disabled between releases, users called and complained about its absence [Chockler 2015].

Chockler, Halpern, and Kupferman [2008] considered the use of causality and responsibility to check whether specifications were appropriate, or if the programmer was trying to satisfy a specification different from the one that the model checker was checking. They provided a number of tractable cases of the problem.

Chockler, Grumberg, and Yadgar [2008] consider another application of responsibility (in the spirit of the HP definition) to model checking: refinement. Because programs can be so large and complicated, model checking can take a long time. One way to reduce the runtime is to start with a coarse representation of the program, where one event represents a family of events. Model checking a coarse representation is much faster than model checking a refined representation, but it may not produce a definitive answer. The representation is then refined until a definitive answer is obtained. Chockler, Grumberg, and Yadgar [2008] suggested that the way that the refinement process works could be guided by responsibility considerations. Roughly speaking, the idea is to refine events that are most likely to be responsible for the outcome. Again, they implemented their ideas in a model-checking tool and showed that it performed very well in practice.

Groce [2005] also used a notion of causality based on counterfactuals to explain errors in programs, although his approach was not based on the HP definitions.