Causality

Causality (here used synonymously with causation, or cause and effect) has been a major concern of science for millennia, and no discussion of causality can be summarized in a couple of pages of a general glossary. The purpose here is to discuss how this concept may be understood and estimated differently in some of the disciplines studying “[[Vulnerability|vulnerability]]” within LIVES.<br><br>
Broad philosophical discussions of causality include revealing the underlying mechanisms behind observed or implied associations (e.g., between stimuli and responses, or between genetics and epigenetics in human characteristics) and debates that oppose free will and consciousness to deterministic accounts (e.g., nature vs. nurture in psychology, agency vs. structure in sociology). In modern science, one of the simplest definitions of causality is a relation of determination between a cause (entity A) and a consequence (entity B), such that “if A then B” and “if not A then not B.” Counterfactual arguments (“if A had not occurred, then B would not have occurred”) complemented this regularity definition (Lewis, 1973), but have never entirely replaced it. It is commonly agreed that the association must be time-ordered, such that A occurs before B in time.<br><br>
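In logical shorthand (a schematic sketch of these two definitions, not a formal treatment), the regularity definition requires

<math>(A \Rightarrow B) \land (\lnot A \Rightarrow \lnot B),</math>

whereas the counterfactual reading asserts that in the closest possible world in which <math>\lnot A</math> holds, <math>\lnot B</math> holds as well (Lewis, 1973).<br><br>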
Statistics has primarily focused on measuring and quantitatively estimating the effects of causes on their consequences, following Hume’s analysis of causation and Popper’s principles of verification and falsifiability (Holland, 1986). This focus has percolated somewhat differently into different disciplines, with models ranging from simple linear regressions to nonlinear dynamic systems of multivariate equations. From a methodological perspective, most agree that carefully planned double-blind, randomized controlled trials (RCTs) are the simplest and safest setting for estimating causal relations. Of course, in many research settings RCTs are simply not possible, forcing scholars to propose creative alternative designs that at best approximate the at-times unachievable standard of excellence represented by experimental settings. Strictly speaking, even experiments are imperfect, in that it is impossible to measure the same unit of observation under both the presence (e.g., treatment) and the absence (e.g., control) of the cause at the exact same time; this problem can, however, be dealt with by making untestable assumptions, such as time invariance of exposure effects. Here, again, counterfactual arguments become highly relevant.<br><br>
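The impossibility of observing the same unit under both conditions can be stated compactly in the potential-outcomes notation discussed by Holland (1986); the notation below is a sketch, not part of the original entry. Writing <math>Y_i(1)</math> and <math>Y_i(0)</math> for the outcomes unit <math>i</math> would display under treatment and under control, the individual causal effect

<math>\tau_i = Y_i(1) - Y_i(0)</math>

is never directly observable, because only one of the two potential outcomes is realized for each unit. Randomization sidesteps the problem by targeting the average treatment effect <math>E[Y(1)] - E[Y(0)]</math>, which can be estimated by the difference in observed group means.<br><br>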
In economics, the work by Granger (1969) set a useful framework for causal inference (Heckman, 2008; Hoover, 2006). Put in very simple terms, the Granger causality test assesses whether past values of one time series improve the forecast of another time series beyond what that series’ own past values achieve. Given the historical importance of time series data in economic research, this definition has served well in establishing causal relations. Nowadays, discussions about the possibility of estimating causality in economics no longer revolve around time series analyses, but center on ingenious methods applicable to observational studies, to be used, whenever possible, in conjunction with RCTs. These include, but are not limited to, linear regression and regression discontinuity designs, difference-in-differences methods, instrumental variables, and propensity scores and other matching techniques, whose use has thrived in recent years (Angrist & Pischke, 2009, 2010). However, the debate around the utility of such methods for establishing causality is not resolved and remains very much alive today (e.g., Banerjee, Duflo, & Kremer, 2016; Deaton & Cartwright, 2018).<br><br>
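As an illustration, the following sketch runs a bivariate Granger causality test on simulated series with the Python library statsmodels; the data-generating process, lag order, and seed are illustrative assumptions, not part of the original entry.

<syntaxhighlight lang="python">
# Sketch: does x help forecast y beyond y's own past? (Granger, 1969)
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    # y depends on the previous value of x, so x should Granger-cause y
    y[t] = 0.5 * x[t - 1] + rng.normal()

# Column order matters: the test asks whether the SECOND column helps
# forecast the FIRST beyond the first column's own lagged values.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)  # F-tests for each lag order
</syntaxhighlight>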
In psychology, the gold standard for estimating causal effects is some form of RCT, operationalized as a carefully planned experiment in which the effect of a manipulated independent variable on a dependent variable is assessed under strict methodological control. This setting requires that the researcher consider several psychometric issues, such as internal and external validity, construct validity, and generalizability. Again, the ideal RCT is a notoriously difficult design to implement in many research settings (for practical or ethical reasons), so psychologists also must very often rely on observational (i.e., non-experimental) studies. Cautious methodological considerations then allow such studies to approximate so-called quasi- or pseudo-experiments, thereby strengthening their validity properties (Campbell, Stanley, & Gage, 1963). Even so, few psychologists would draw unambiguous conclusions about causality from such designs. It could be argued that, to foster theoretical advancement, psychology would benefit from applying alternative methods inspired by economics to reinforce conclusions about causal mechanisms, although such approaches often rely on untestable assumptions.<br><br>
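To make the logic of random assignment concrete, here is a minimal simulation sketch (sample size and effect size are illustrative assumptions): because assignment is independent of the potential outcomes, a simple difference in group means recovers the average treatment effect.

<syntaxhighlight lang="python">
# Sketch: randomization makes the difference in means an unbiased
# estimate of the average treatment effect (ATE).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
y0 = rng.normal(size=n)        # potential outcome under control
y1 = y0 + 0.3                  # potential outcome under treatment (true ATE = 0.3)

treated = rng.random(n) < 0.5  # random assignment with probability 0.5
y_obs = np.where(treated, y1, y0)  # only one potential outcome is observed

ate_hat = y_obs[treated].mean() - y_obs[~treated].mean()
print(f"estimated ATE: {ate_hat:.3f}")  # close to the true value of 0.3
</syntaxhighlight>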
In the social sciences, the concept of causality is also closely linked to moderated, mediated, and spill-over effects. These proximal concepts allow for somewhat indirect estimation of what may be mechanisms of utmost importance for understanding causality in the social world, by means of innovative research designs and statistical strategies (Hong, 2015). Another line of research present in sociology, narrative formalism or narrative positivism, stresses the description of whole [[Trajectories|trajectories]] as an alternative way to deal with causality (Abbott, 1992).<br>
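As one concrete case, a simple mediation model (a standard textbook formulation with hypothetical variable names, not specific to Hong, 2015) decomposes the effect of a predictor <math>X</math> on an outcome <math>Y</math> through a mediator <math>M</math>:

<math>M = i_M + aX + e_M, \qquad Y = i_Y + c'X + bM + e_Y,</math>

where the indirect (mediated) effect is the product <math>ab</math> and the total effect decomposes as <math>c = c' + ab</math>.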
<br>
<br>
Authors: Paulo Ghisletta
<br>
<br>
==References==
Abbott, A. (1992). From causes to events: Notes on narrative positivism. ''Sociological Methods & Research'', 20(4), 428–455.<br>
Angrist, J. D., & Pischke, J.-S. (2009). ''Mostly Harmless Econometrics''. Princeton University Press.<br>
Angrist, J., & Pischke, J.-S. (2010). The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con out of Econometrics (Working Paper No. 15794; Working Paper Series). National Bureau of Economic Research. https://doi.org/10.3386/w15794<br>
Banerjee, A. V., Duflo, E., & Kremer, M. (2016). The influence of randomized controlled trials on development economics research and on development policy. ''The State of Economics, The State of the World''.<br>
Campbell, D. T., Stanley, J. C., & Gage, N. L. (1963). ''Experimental and quasi-experimental designs for research''. Houghton Mifflin.<br>
Deaton, A., & Cartwright, N. (2018). Understanding and misunderstanding randomized controlled trials. ''Social Science & Medicine'', 210, 2–21. https://doi.org/10.1016/j.socscimed.2017.12.005<br>
Granger, C. W. J. (1969). Investigating Causal Relations by Econometric Models and Cross-spectral Methods. ''Econometrica'', 37(3), 424–438. https://doi.org/10.2307/1912791<br>
Heckman, J. J. (2008). Econometric Causality. ''International Statistical Review'', 76(1), 1–27. https://doi.org/10.1111/j.1751-5823.2007.00024.x<br>
Holland, P. W. (1986). Statistics and Causal Inference. ''Journal of the American Statistical Association'', 81(396), 945–960. https://doi.org/10.1080/01621459.1986.10478354<br>
Hong, G. (2015). ''Causality in a social world: Moderation, mediation and spill-over''. John Wiley & Sons.<br>
Hoover, K. D. (2006). Causality in Economics and Econometrics (SSRN Scholarly Paper ID 930739). Social Science Research Network. https://doi.org/10.2139/ssrn.930739<br>
Lewis, D. K. (1973). Causation. ''Journal of Philosophy'', 70(17), 556–567. https://doi.org/10.2307/2025310
==Semantic network visualisation==
Click to activate zoom- and drag-functionality
''(scroll to zoom, drag nodes to move, click and hold nodes to open next level)''
{{#network:
| class = col-lg-3 mt-0
| exclude = Main Page ; Sitemap ; Worksheet
}}
