[USERS ARE RESPONSIBLE FOR COMPLIANCE WITH COPYRIGHT RESTRICTIONS]
Research articles
Information in Biology: A Fictionalist Account [Noûs, 2011, 45(4): 640-657]
I offer a treatment of informational concepts in biology. I argue that these concepts are best seen as explanatory metaphors: fictions that aid biologists in latching on to real biological processes. Informational fictions play a genuine theoretical role, so we should take them seriously. But not too seriously: there are no informational "things" within organisms, nor is information a fundamental property or aspect of life.
Game Theory, Indirect Modelling and the Origin of Morality [Journal of Philosophy, 2011, CVIII (4): 171-187]
This paper discusses some recent work on the evolution of justice, especially that of Brian Skyrms and his students. I argue that this body of work has made internal progress, but that some key idealizations diminish its explanatory import with respect to the origin of morality in the real world. I diagnose the (false) sense of explanatory progress in this area as a characteristic pitfall of modelling that proceeds in a highly indirect manner.
Models, Fictions & Realism: Two Packages [Philosophy of Science, 2012, 79(5): 738-748]
Some philosophers of science – myself included – appeal to fiction as an interpretation of the practice of modeling. This raises the specter of an incompatibility with realism, since fiction-making is essentially non-truth-regulated. I argue that the prima facie conflict can be resolved in two ways, each involving a distinct notion of fiction and a corresponding formulation of realism. The main goal of the paper is to describe these two packages. Toward the end I comment on how to choose among them.
Three Kinds of "New Mechanism" [Biology & Philosophy, 2013, 28(1): 99-114]
I offer a taxonomy of existing work on mechanisms. Mechanist projects differ in scope as well as in whether they recruit the notion of mechanism for the purposes of constructing a theory of causation or explanation. The final part of the paper uses the taxonomy developed earlier to re-frame the debate over whether natural selection is a mechanism.
Abstraction and the Organization of Mechanisms [with William Bechtel, Philosophy of Science, 2013, 80(2): 241-261]
Mechanists often note that the organization of a mechanism is crucial for its functioning. We look at one class of recent attempts to model organizational aspects of biological phenomena. We argue that modelers often employ a strategy of abstraction, moving away from concrete details of parts and their interactions, in an effort to distill organizational features. We suggest some reasons why abstraction-based explanation has been neglected in the literature on mechanisms.
What was Hodgkin and Huxley's Achievement? [British Journal for the Philosophy of Science, 2014, 65(3): 469-492]
The Hodgkin-Huxley (HH) model of the action potential explains how nerve cells "fire". It is regarded by many as the most important theoretical achievement in modern neurobiology. In recent work Carl Craver has downplayed the explanatory significance of the HH model, because it lacked important molecular information. I take another look at the model, arguing that the lack of molecular detail is actually a virtue. By abstracting from the underlying molecules, HH were able to provide an essentially correct, and highly influential, picture of the relationship between the (skeletally described) molecular constituents and the overall firing behavior of the cell. Similar forms of abstraction are common in biology. Taking this into account might require adjustments to the mechanistic conception of explanation that currently dominates the philosophy of biology.
Machine-Likeness and Explanation-by-Decomposition [Philosophers' Imprint, 2014, 14(6)]
Machine analogies play a prominent part in biology, especially in areas such as molecular cell biology and related parts of development, neuroscience and genetics. This paper provides an account of what makes a system machine-like, relying on a notion of causal order. It then looks at models and how they may represent a system as being more or less orderly. The (potentially changing) role of machine analogies is illustrated by a look at two examples from present day biology - the study of macromolecules, and theoretical models of pattern formation.
Model Organisms are not (Theoretical) Models [with Adrian Currie, British Journal for the Philosophy of Science, 2015, 66(2): 327-348]
Many biological investigations are organized around a small group of species, often referred to as “model organisms”, such as the fruit fly Drosophila melanogaster. The terms “model” and “modeling” also occur in biology in association with mathematical and mechanistic theorizing, as in the Lotka-Volterra model of predator-prey dynamics. What is the relation between theoretical models and model organisms? Are these models in the same sense? We offer an account on which the two practices are shown to have different epistemic characters. Theoretical modeling is grounded in explicit and known analogies between model and target. By contrast, inferences from model organisms are empirical extrapolations. Often such extrapolation is based on shared ancestry, sometimes in conjunction with other empirical information. One implication is that such inferences are unique to biology, whereas theoretical models are common across many disciplines. We close by discussing the diversity of uses to which model organisms are put, suggesting how these relate to our overall account.
Design sans Adaptation [with Sara Green and William Bechtel. European Journal for Philosophy of Science, 2015, 5(1): 15-29]
Design thinking in general, and optimality modeling in particular, have traditionally been associated with adaptationism – a research agenda that gives pride of place to natural selection in shaping biological characters. Our goal is to evaluate the role of design thinking in non-evolutionary analyses. Specifically, we focus on research into abstract design principles that underpin the functional organization of extant organisms. Drawing on case studies from engineering-inspired approaches in biology, we show how optimality analysis, and other design-related methods, play a specific methodological role that is tangential to the study of adaptation. To account for the role these reasoning strategies play in contemporary biology, we therefore suggest a reevaluation of the connection between design thinking and adaptationism.
Modeling without Models [Philosophical Studies, 2015, 172(3): 781-798]
Modeling is an important scientific practice, yet it raises significant philosophical puzzles. Models are typically idealized, and they are often explored via imaginative engagement and at a certain “distance” from empirical reality. These features raise questions such as what models are and how they relate to the world. A number of recent accounts answer these questions in terms of indirect representation and analysis. Such views treat the model as a bona fide object (“the model system”), specified by the modeler and used to represent and reason about some portion of the concrete empirical world (“the target system”). On some indirect views, model systems are abstract entities, such as mathematical structures, while on other views they are concrete hypothetical things, akin to fictional characters. Here I assess these views and offer a novel account of models. I argue that regarding models as abstracta results in some significant tensions with the practice of modeling, especially in areas where non-mathematical models are common. On the other hand, viewing models as concrete hypotheticals raises difficult questions about model-world relations. The view I argue for treats models as direct, albeit simplified, representations of targets in the world. I close by suggesting a treatment of model-world relations that draws on recent work by Stephen Yablo concerning the notion of partial truth.
Engineering and Biology: Counsel for a Continued Relationship [with Brett Calcott, Orkun Soyer, Mark Siegal and Andreas Wagner. Biological Theory, 2015, 10(1): 50-59]
Biologists draw on engineering as a matter of course. Criticisms of appeals to engineering highlight differences between biology and engineering, urging caution, and even recommending outright abandonment of concepts and tools from engineering. Here we aim to reconfigure and clarify the link between biology and engineering, presenting it in a more favorable light. We do so by, first, arguing that critics operate with a narrow and incorrect notion of how engineering actually works, and of what the reliance on ideas from engineering entails. Second, we diagnose and diffuse one significant source of concern about appeals to engineering, namely that they are inherently and problematically metaphorical.
Causal Order and Kinds of Robustness [in Landscapes of Collectivity in the Life Sciences, edited by Snait Gissis, Ehud Lamm & Ayelet Shavit, MIT Press]
This paper derives from a broader project dealing with the notion of causal order. I use this term to signify two kinds of part-whole dependence: Orderly systems have a rich, decomposable internal structure; specifically, parts play differential roles, and interactions are primarily local. Disorderly systems, in contrast, have a homogeneous internal structure, such that differences among parts and organizational features are less important. Orderliness, I suggest, marks one key difference between individuals and collectives.
My focus here will be the connection between order and robustness, i.e. functional resilience in the face of internal or environmental perturbations. I distinguish three varieties of robustness. Ordered robustness is grounded in the system’s specific organizational pattern. In contrast, disorderly robustness stems from the aggregate outcome of many similar parts. In between, we find semi-ordered robustness, wherein a messy ensemble of elements is subjected to a selection or stabilization mechanism. I give brief characterizations of each category, discuss examples and remark on the connection between the order/disorder axis and the notions of individual versus collective.
Idealization and Abstraction: Refining the Distinction [Synthese, doi: 10.1007/s11229-018-1721-z]
Idealization and abstraction are central concepts in the philosophy of science and in science itself. Despite (or perhaps because of) this, there is no commonly agreed-upon understanding of their content and significance. This hampers communication and progress in the field, and has led some authors down blind alleys. Here, I try to correct this situation by offering an account of idealization and abstraction, emphasizing the similarities and differences between them. The account relies on, and is intended as an improvement of, an existing view due to Martin Thomson-Jones (2005) and Peter Godfrey-Smith (2009). On this line of thought, abstraction involves the omission of detail, whereas idealization consists in a deliberate mismatch between a description (or a model) and the world. I refine their distinction in several ways and discuss some central implications of setting it out this way.
Models & Scientific Realism: Strange Bedfellows? [The Routledge Handbook on Scientific Realism, Edited by Juha Saatsi]
Abstract under revision
Realism and the Debunking Challenge: How Evolution (Ultimately) Matters
[With Yair Levy, Journal of Ethics and Social Philosophy, November 2016]
Evolutionary debunking arguments (EDAs) have attracted extensive attention in meta-ethics, as they pose an important challenge to moral realism, and may even have applications in other domains. Mogensen (2015) suggests that EDAs contain a fallacy, by confusing two distinct forms of biological explanation – ultimate and proximate. If correct, the point is of considerable importance: evolutionary genealogies of human morality are simply irrelevant for debunking. But we argue that the actual situation is subtler: while ultimate claims do not strictly entail proximate ones, there are important evidential connections between the two. Attending to these connections clears ground for a new and improved formulation of EDAs. However, it also brings into view some possible problems with EDAs that have been largely neglected so far.
The Unity of Neuroscience: A Flat View
[Synthese, 2016, 193(12): 3843–3863]
This paper offers a novel view of unity in neuroscience. I set out by discussing problems with the classical account of unity-by-reduction, due to Oppenheim and Putnam. That view relies on a strong notion of levels, which has substantial problems. A more recent alternative, the mechanistic “mosaic” view due to Craver, does not have such problems. But I argue that the mosaic ideal of unity is too minimal, and we should, if possible, aspire for more. Relying on a number of recent works in theoretical neuroscience—network motifs, canonical neural computations (CNCs) and design-principles—I then present my alternative: a “flat” view of unity, i.e. one that is not based on levels. Instead, it treats unity as attained via the identification of recurrent explanatory patterns, under which a range of neuroscientific phenomena are subsumed. I develop this view by recourse to a causal conception of explanation, and distinguish it from Kitcher’s view of explanatory unification and related ideas. Such a view of unity is suitably ambitious, I suggest, and has empirical plausibility. It is fit to serve as an appropriate working hypothesis for 21st century neuroscience.
Evolutionary Modeling and Political Stability [forthcoming in Biology & Philosophy]
Many have expected that understanding the evolution of norms should, in some way, bear on our first-order normative outlook: How norms evolve should shape which norms we accept. But recent philosophy has not done much to shore up this expectation. Most existing discussions of evolution and norms either jump headlong into the is/ought gap or else target meta-ethical issues, such as the objectivity of norms. My aim in this paper is to sketch a different way in which evolutionary considerations can feed into normative thinking – focusing on stability. I discuss two (related) forms of argument that utilize information about social stability drawn from evolutionary models, and employ it to assess claims in political philosophy. One such argument treats stability as a feature of social states that may be taken into account alongside other features. The other uses stability as a constraint on the realization of social ideals, via a version of the ought-implies-can maxim. These forms of argument are not new; indeed, they have a history going back at least to early modern philosophy. But their marriage with evolutionary information is relatively recent, has a significantly novel character, and has received little attention in recent moral and political philosophy.
Why Experiments Matter (with Adrian Currie) [Forthcoming in Inquiry]
We argue that experiments play a distinctive and privileged role in science. They do so in virtue of two properties. Experiments are controlled investigations of specimens. A ‘specimen’ is a token of the type of phenomenon under investigation. Experimental scientists isolate and study representative samples of the systems that interest them. ‘Control’ is the scientific capacity to conduct repeated, finely varied manipulations of experimental systems. Experiments matter because the combination of these two properties allows scientists to produce rich, targeted information about their investigative targets. Recently, experimentation’s privilege has come under pressure from two directions. First, the experiment/theory distinction has been blurred via examination of the experiment-like roles played by theoretical devices like models and simulations. Second, the experiment/observation distinction is blurred by investigation of naturally occurring cases which have experimental properties: ‘natural experiments’. Here, scientists (more or less) directly observe specimens of their investigative targets, making the bearing of their observations on their investigative goals (hypothesis testing, data-gathering, identifying phenomena, exploring systems of interest, etc.) relatively straightforward. However, we argue that the combination of control and specimens grants experiments epistemic privilege: it allows scientists to generate remarkably targeted, rich information about the system of interest. Models can be controlled, but are surrogates rather than specimens. Naturally occurring events can be specimens, but are not controlled. Experiments, then, are special.
Book reviews
Review essay of "In Search of Mechanisms" [forthcoming, Philosophy of Science], discusses Carl Craver and Lindley Darden's book on scientific discovery.
Anchoring Fictional Models [Biology & Philosophy, 2013, doi: 10.1007/s10539-013-9370-6] is a review of Adam Toon's book Models as Make Believe.
Makes a Difference [Biology & Philosophy, 2011, 26: 459-467] is a review of Michael Strevens' account of scientific explanation, expounded in his book Depth.
Explaining What? [Biology & Philosophy, 2009, 24: 137-145] is a review of Carl Craver's book Explaining the Brain.
The Organic Codes [with Eva Jablonka, Acta Biotheoretica, 52(1): 65-69] is a review of Marcello Barbieri's book of that title.