We are all taught that simply
showing that two variables are correlated is not enough to demonstrate that a
change in one results in a change in another. What is the thought process we
must go through that allows us to make this leap in logic? Where in the
definition of conditional probability is there a provision that allows us to
account for an intervention rather than the purely observable? What must we
experience before we are able to say that "action A" results in "outcome B"? Epidemiologists must be doing this every day when we hear
about a new report showing that some food or activity, previously enjoyed by
millions, is now viewed as harmful and should be avoided. When a physicist
writes that force is equal to mass times acceleration or, symbolically, f =
ma, how can we tell from this equation that it is the force that causes the
acceleration and not the mass that causes the force?
Causality develops a
formal mathematical language for making such inferences, and that
language is profound. Its depth lies not in dense equations but
in a philosophical search for a whole different level of
understanding, similar to what one experiences reading L. J. Savage's (1954) Foundations
of Statistics. This volume is located at the confluence of the fields of
statistics, economics, engineering, epidemiology, philosophy, social science,
and artificial intelligence. All of these areas have had their own special needs
but also have a common use for many of the ideas presented here.
The statistician who writes E(Y | X) will usually think of conditional
expectation in a different sense than the economist, who thinks of X as an
intervention such as a change in taxes or interest rates by government fiat.
To make the distinction, Causality introduces the do(·) operator, as in
do(X), to signify an intervention, and E(Y | do(X)) to indicate the
consequences of the action taken. The do(·) calculus has its own set of
axioms for probability and expectation.
Already we can see that this book is going places few of us have been.
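The distinction can be made concrete with a small simulation (my own sketch, not from the book): in a model where a confounder Z drives both X and Y, the observational quantity P(Y = 1 | X = 1) differs from the interventional quantity P(Y = 1 | do(X = 1)), because conditioning merely selects the units where X happened to be 1, while do(X = 1) forces X regardless of Z.

```python
import random

random.seed(0)

def sample(do_x=None):
    """One draw from a toy causal model: Z -> X, Z -> Y, X -> Y."""
    # Confounder Z influences both X and Y.
    z = random.random() < 0.5
    # X depends on Z unless we intervene and set it directly.
    x = do_x if do_x is not None else (random.random() < (0.8 if z else 0.2))
    # Y depends on both X and Z.
    y = random.random() < 0.1 + 0.3 * x + 0.5 * z
    return x, y

N = 100_000

# Observational: condition on the units where X turned out to be 1.
obs = [y for x, y in (sample() for _ in range(N)) if x]
p_obs = sum(obs) / len(obs)

# Interventional: force X = 1 for every unit, breaking the Z -> X link.
intv = [y for x, y in (sample(do_x=True) for _ in range(N))]
p_do = sum(intv) / len(intv)

print(f"P(Y=1 | X=1)     ~ {p_obs:.3f}")  # inflated by the confounder (true value 0.80)
print(f"P(Y=1 | do(X=1)) ~ {p_do:.3f}")   # the causal effect (true value 0.65)
```

Here conditioning overstates the effect of X because units with X = 1 are disproportionately those with Z = 1, which raises Y on its own; the intervention removes that selection.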
As with any fresh insights and developments, these are not without their controversy. Consider Section 6.2, entitled "Why There Is No Statistical Test for Confounding, Why Many Think There Is, and Why They Are Almost Right." Here we read about others' approaches to the concept of confounding, and several footnotes point out their inadequacies. Throughout the book, more generally, these footnotes often appear as historical commentary, praising the good ideas and skewering the bad ones that have been offered over the years. An excellent historical overview, beginning in the Garden of Eden and proceeding through Galton, Pearson, and Fisher, appears as a reprinted lecture at the end of the book. Even these revered names from the origins of statistics are not spared a certain degree of criticism.
The questions the book
raises, and our ability to address them adequately, should be cause for some
concern in our profession. Statisticians
will need to acknowledge a number of major inadequacies in the discipline.
We have left these questions unanswered for too long.
Here is an important book that will be discussed for many years to come.
TECHNOMETRICS, MAY 2001, VOL. 43, NO. 2