Let me just say that it is very gratifying to see a philosopher give the problem of causality some serious attention. Moreover, you discuss the concept as it is used in the contemporary social sciences. I have been bothered by the fact that all too many social scientists try to avoid saying "cause" when that is clearly what they mean to say. Thank you!
I have not finished your book, but I cannot resist making one point to you. In 5.4, you discuss the meaning of structural coefficients, but you spend a good deal of time discussing the meaning of epsilon, or e. It seems to me that e has a very straightforward meaning in SEM: if the true equation for y involves several determinants, say y = bx + cz + u, but we write the model with x alone, then e = cz + u is simply the combined contribution of the factors omitted from the equation.
We are in perfect agreement on the error terms. If we choose to represent the equation for y with just one independent variable, say y = bx + e, then e stands for the combined effect of all factors omitted from the equation, and nothing in the model guarantees that e is orthogonal to x.
This explanation is not "mysterious" to you and me, but it is mysterious to many researchers who confuse this interpretation with the error terms in regression analysis, which are orthogonal to x by definition (see Section 5.1.2 and footnote 25, p. 244). It is also mysterious to a casual reader of the current SEM literature, where the distinction is not made clear, and where the orthogonality condition is imposed either as a matter of mathematical convenience or, worse yet, "for identification purposes" (rather than on substantive grounds). This practice of making "identifying assumptions," which seems to have emerged from, and been nurtured in, the econometrics literature, has always been a mystery to me. How can one hope to get something useful from a model that is distorted for purposes of identification, and thus may conflict with one's perception of reality? On page 163 I argue that the interpretation of e as a summary of omitted factors should help an investigator sharpen his/her perception of reality, and critically evaluate whether it makes sense to assume that e and x are orthogonal.
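The distinction drawn here can be made concrete with a small simulation. This is my own numerical sketch, not from the correspondence: the "true" model is taken to be y = bx + cz + u, with the omitted factor z correlated with x. The structural error e = cz + u of the single-variable representation y = bx + e then correlates with x, while the residuals of a regression of y on x are orthogonal to x by construction, and the fitted slope is biased away from the structural b.

```python
import numpy as np

# Hypothetical true model (illustrative values, not from the letter):
# y = b*x + c*z + u, where z is an omitted factor correlated with x.
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)   # omitted factor, correlated with x
u = rng.normal(size=n)
b, c = 2.0, 1.5
y = b * x + c * z + u

# Structural error of the single-variable representation y = b*x + e.
# It equals c*z + u, and since z tracks x it is NOT orthogonal to x.
e_struct = y - b * x
print(np.corrcoef(x, e_struct)[0, 1])   # clearly nonzero

# OLS regression of y on x: residuals are orthogonal to x by definition,
# and the fitted slope absorbs the omitted factor (b + c*0.8 = 3.2 here).
b_ols = np.dot(x, y) / np.dot(x, x)
resid = y - b_ols * x
print(np.corrcoef(x, resid)[0, 1])      # essentially zero, by construction
print(b_ols)                            # near 3.2, not the structural 2.0
```

The point of the sketch is exactly the one made above: asserting that e is orthogonal to x is a substantive claim about the omitted factors, not a free mathematical convenience.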
Next discussion (Markus: Reversing Statistical Time (Chapter 2, p. 58-59))