Archive for November, 2015


“(Not) Apriori causal models of evolution and the analytic-synthetic distinction” (11/20/2015)

November 17, 2015

Guillaume Schlaepfer

Abstract: In this talk I will present a very simple model of evolutionary theory that Elliott Sober (2011) considers an instance of an a priori causal model, hence providing a counter-example to the Humean thesis that causal knowledge is never a priori.
I will propose an interpretation of this model according to which it is better construed as a complex analytic statement, and thus informative about the assumptions of the theoretical framework in which it is developed rather than about types of causal relations instantiated out there in the world.
Since analytic statements are sometimes considered a possible source of a priori knowledge of the world (although perhaps not so often anymore), I will also try to show that this model suggests the contrary, since the ascription of the (dispositional) concepts involved in the model relies on empirical knowledge of the model’s consequences in the first place.
Finally, I would like to relate this case study to the general understanding of the analytic-synthetic distinction, particularly against the background of a Carnapian conception of it.


“What the brain does by itself” (11/16/2015)

November 16, 2015

Philipp Haueis

Abstract: In the last 20 years, neuroscientists have observed coherent patterns of brain activity in the absence of a cognitive task, but the function of the so-called ‘resting state networks’ that are connected by these patterns remains a matter of debate. Bechtel (2013) has recently argued that in order to link the endogenous activity in these networks to cognitive processing, neuroscientists and philosophers have to acknowledge that the brain is an endogenously active mechanism. Like other biological systems, the brain has to maintain homeostatic equilibrium in the face of biological decay. As a result of this requirement, neural entities from cells to whole-brain networks are constantly active, even in the absence of sensory input. Bechtel argues that this important biological phenomenon has been concealed by the reactive understanding of the brain as processing information from sensory input to motor output, which ignores how ongoing, endogenous brain activity modulates information processing. Similarly, philosophers initially defined mechanisms as proceeding sequentially from set-up to termination conditions, while neglecting that endogenously active mechanisms possess negative feedback loops that defy a sequentialist analysis. Therefore, Bechtel claims that endogenous activity in resting state networks needs to be taken into account if we want to understand the information processing mechanisms involved in cognition. In this paper, I argue that Bechtel’s aim to link the observation of endogenous network activity directly to cognitive processing rests on an ambiguity in his analysis of “endogenous brain activity”. While the causal role function of these networks in certain cognitive contexts may contribute to information processing mechanisms, I argue that the intrinsic function of endogenous network activity is to maintain and repair the functional organization of the brain at a systems level.
I show how distinguishing between these two concepts of function resolves certain problem cases for Bechtel’s interpretation, and review recent evidence that speaks in favor of viewing the function of endogenous network activity as maintaining system-level organization.


“Rare+ Data” (11/13/2015)

November 10, 2015

Nora Boyd

Abstract: Somehow good scientific theorizing is constrained by the way the world is. Nature is, as we (good empiricists) all know, the “ultimate arbiter”. But what exactly is it that does the constraining? Sense data? Instrumental records? Measured parameter values? Facts? Each of these is plausibly an “empirical result”, more or less processed (or, if you prefer, more or less theory-laden). A problem that one immediately encounters in trying to answer this question is the thoroughgoing fallibility of empirical results, no matter how processed. After all, someone could have misjudged the gauge reading, the crucial cable in the experimental apparatus could have been unplugged, the files could have been corrupted, we could discover that a key assumption of the model renders it inapplicable to the case at hand, or (heaven forbid) we may even come to believe that the central terms of our most cherished theories fail to refer to anything in the world. In light of the variegated fallibility of empirical results, what prevents the whole scientific enterprise from being consumed by the roaring flames of skepticism? In this talk I will present a characterization of “rare+” data, which I suggest picks out the sort of empirical results that stably constrain viable scientific theorizing. I will present a few generic warm-up examples and a juicy extended one (the Hulse-Taylor binary pulsar) in order to demonstrate concretely what my account picks out in particular cases. While not utterly infallible, the sort of empirical results I identify are, I suggest, sufficiently incorrigible to constitute a cumulative body of empirical constraints.