
Archive (2006-10)

2010 November 19

Under-determination and Stability in Time-Free Models
Jonathan Livengood

I discuss the semantics of time-free statistical models and their relation to cross-sectional data. I show that correct causal models generally cannot be derived from cross-sectional data when the causal system is not in equilibrium. I present a simple theorem showing that the graphical structure of a model of a causal system in equilibrium is determined only up to self-loops. I then discuss a theorem showing that when the causal system is in equilibrium and stable, cross-sectional data under-determine the short-run effects of interventions but do determine their long-run effects.
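A minimal numerical sketch of the kind of under-determination at issue (a toy construction offered only for illustration, not the theorem itself; it assumes numpy and scipy are available): two stable linear dynamical systems, one with a self-loop on Y and one without, can share exactly the same equilibrium covariance, so cross-sectional data drawn at equilibrium cannot distinguish them.

```python
# Toy illustration only: two linear systems that differ by a self-loop on Y
# but have the same equilibrium (stationary) covariance over (X, Y).
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

a, b, c = 0.5, 0.8, 0.4

# System 1: X_{t+1} = a*X_t + e_X,  Y_{t+1} = b*X_t + c*Y_t + e_Y  (self-loop on Y).
A1 = np.array([[a, 0.0],
               [b, c]])
Q1 = np.diag([1.0, 1.0])                  # noise covariance
S1 = solve_discrete_lyapunov(A1, Q1)      # stationary covariance, S = A S A' + Q

# System 2: same X dynamics, no self-loop on Y; the X -> Y coefficient and the
# Y noise variance are rescaled so that the stationary covariance matches System 1.
b2 = b / (1.0 - a * c)
var_x = 1.0 / (1.0 - a ** 2)
A2 = np.array([[a, 0.0],
               [b2, 0.0]])
Q2 = np.diag([1.0, S1[1, 1] - b2 ** 2 * var_x])
S2 = solve_discrete_lyapunov(A2, Q2)

print(np.round(S1, 4))
print(np.round(S2, 4))
print("Indistinguishable at equilibrium:", np.allclose(S1, S2))
```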

2010 October 29
Modularity, Theory of Mind, and Autism Spectrum Disorder
Marcus Adams

The theory of mind (ToM) deficit associated with autism spectrum disorder has been a central topic in the debate about the modularity of the mind. In a series of papers, Philip Gerrans and Valerie Stone argue that positing a ToM module does not best explain the deficits exhibited by individuals with autism (Gerrans 2002; Stone & Gerrans 2006a, 2006b; Gerrans & Stone 2008). In this paper, I first criticize Gerrans and Stone’s (2008) account. Second, I discuss various studies of individuals with autism, arguing that they are best explained by positing a higher-level, domain-specific ToM module. Third, I argue that conceiving of autism as a spectrum disorder rather than as a unified syndrome lends further plausibility to the modular explanation.

2010 October 29
Dalton’s Chemical Atoms vs. Duhem’s Chemical Equivalents
Karen Zwier

Paul Needham has made the provocative argument that Daltonian atomism is not explanatory. I take a contrary view. First, I aim to show that when John Dalton’s atomism is examined in its own context and relative to the questions that Dalton himself was posing, it is indeed an explanatory theory. Second, I show that Dalton’s atomism is explanatory even by the standards of the famous anti-atomist Pierre Duhem. The main part of my paper examines, through two case studies (of Dalton and Duhem), the relationship between belief in the reality of an entity in a scientific theory and belief in that entity’s role as a satisfactory explanans. For both Dalton and Duhem, I examine the interplay between their views of explanation and their views on the reality of chemical atoms. I will show that the positions that both Dalton and Duhem take on chemical atoms are intimately related to their respective views on what explanation is and what role explanation plays (or doesn’t play) in science.

2010 October 22
Charles Darwin and the Natural History of Reason
Katie Tabb

Late in his career Charles Darwin famously used photographs of mental patients to illustrate the expression of emotions. My paper traces Darwin’s interest in insanity back almost fifty years to his stint as a medical student at Edinburgh. The young Darwin heard phrenologists like W.A.F. Browne and transmutationists like Robert Grant reject the rationalist psychology of the Scottish Enlightenment in favor of the associationism of Locke and Hume. Phrenology brought the mind under the control of science by locating it with precision within the brain, where it could be annexed by evolutionary theorists – and later by the first mental psychologists, whom Darwin would influence profoundly.

While Darwin was never an enthusiast for phrenology, I argue that the so-called “metaphysical notebooks,” written fifteen years after Darwin left Edinburgh, reveal the lasting influence of this period upon his thought. Eager for a materialist revolution – “M. Le Comte’s idea of theological state of science, grand idea,” he wrote to himself in 1838 – Darwin drew on the insights into mental illness of his physician father and transmutationist grandfather to sketch a primitive theory of embodied reason. In fragments, Darwin suggests that the insane behaviors of the individual are analogous to the instinctual behaviors of the species. On both the individual and the species, memory leaves its mark.

I conclude that while Darwinian evolution revolutionized the sciences of mind, it was itself inspired by the efforts of earlier natural philosophers and, notably, medical materialists, to reveal the natural history of reason through empirical methods.

2010 October 22
Projective Geometry and the Origins of the Dirac Equation
Tom Pashby

Several authors have attempted to make a connection between Dirac’s q-number theory and his knowledge of projective geometry, but in the AHQP interviews conducted by Kuhn where this connection is first made, Dirac denied any connection to his early work in non-commutative algebra. Instead Dirac consistently emphasized the value of projective geometry as a means for visualizing quantities in Minkowski space-time, which suggests that he may have found these techniques useful in his derivation of the electron equation and first encounters with Dirac spinors.

Existing accounts of the origins of the electron equation rely almost exclusively on Dirac’s later testimony and the published paper of 1928. However, there exists a series of rough calculations that reveal the first stages of his recognition of the explicit form of the equation he sought. This manuscript was found among the archival material held by the FSU at Tallahassee, where Dirac spent his final years. The first page contains a statement of the general form of the relativistic linear wave equation, and twenty-two pages later Dirac finds an explicit representation of what are now known as the Dirac matrices.

The appearance in this manuscript of what is obviously projective geometry on a number of pages provides good evidence that Dirac was using these techniques at the time of his discovery, although the precise role they played in the derivation remains unclear. The historical relevance of this document is nonetheless clear, and it provides an unprecedented opportunity to corroborate and challenge the existing accounts of the path taken by Dirac in his discovery of the electron equation.

2010 October 8
Descartes, Functional Analysis and Teleology
Peter Distelzweig

It is an old commonplace that in his physics Descartes avoids any appeal to final causes and so rejects teleological explanation. This commonplace is rooted in a number of well-known texts in which he explicitly rejects, in physics, appeals to “final causes” and “divine purposes” (e.g. Meditation IV, his Replies to Gassendi, Principles I.28, III.2-3, Conversation with Burman). However, this commonplace has come under significant scrutiny in recent years. Alison Simmons [2001] introduces a distinction between “ends with which” and “ends for which”, arguing that Descartes only rejects explanations making appeal to the latter, while allowing and even identifying certain “ends with which” human mind-body composites are constituted. Gary Hatfield [2007] has argued that even in the case of animals, Descartes introduces teleological resources and suggests that “the structure of such teleological thinking, and its place in Cartesian metaphysics, warrants further investigation.”

In this paper I aim to begin just such an investigation. I start by developing a series of historically and philosophically relevant distinctions regarding kinds of teleological thinking; here I will especially stress a distinction between (a) functional analysis in the service of fundamentally ateleological explanations and (b) properly explanatory teleology. As an example of the latter, I will briefly discuss Harvey’s teleological explanations of variations in comparative heart structure in Chapter 17 of De Motu Cordis.

With this groundwork in place I will argue for three old-fashioned claims: (1) in contexts in which the mind-body composite is not under consideration Descartes aims to avoid appeal to explanatory teleological principles; (2) in such contexts Descartes’ philosophical system must avoid appeal to such principles; (3) though Descartes largely succeeds in avoiding such principles, he “slips” in several places, precisely when considering comparative cardiology under the influence of Harvey’s work.

I will conclude by suggesting that even in the context of the mind-body composite Descartes’ system is inhospitable to explanatory teleology.

2010 September 17
Realism, Structuralism and Instrumentalism in Dirac’s Prediction of the Positron
Tom Pashby

Novel predictive success has been regarded as the most compelling evidence in support of scientific realism. P. A. M. Dirac’s prediction of antimatter from his relativistic electron equation is an interesting case since it does not fit the deductive-nomological model of prediction, yet prefigures many similar predictive successes of the quantum field theories that were to follow. In this paper I argue for three related claims.

i) Dirac’s discovery of his electron equation relied upon mathematical arguments that had empirical grounding only through the individual theories that were combined in the Dirac equation (i.e. special relativity and his transformation theory of quantum mechanics). The positron was predicted on the basis of the particular mathematical form of the solutions of this equation, which arose through just these consistency requirements.

ii) The instrumentalist’s dismissal of such successes as lucky guesses fails to take into account the instrumental success of this practice of combining theories, which has proved effective on numerous occasions in physics. As observed by Stein (1989), the instrumentalist must admit on these grounds that theories are more than just tools for predicting the results of measurement; they are also (at least) tools for constructing new theories.

iii) The novel predictive success enjoyed by Dirac’s theory seems to provide a strong argument for realism, and in particular a structural realism which takes into account the mathematical nature of the reasoning involved while also providing an account of the continuity with subsequent developments in particle physics. However, I suggest that, shorn of the traditional realist’s commitment to reference, the structural realist has no greater ability to explain this success than a suitably enriched instrumentalist position.

2010 September 10
Galileo’s Refutation of the Speed-Distance Law of Fall Rehabilitated
Bryan Roberts

Galileo’s refutation of the speed-distance law of fall in his Two New Sciences is routinely dismissed as a moment of confused argumentation. We urge that Galileo’s argument correctly identified why the speed-distance law is untenable, failing only in its very last step. Using an ingenious combination of scaling and self-similarity arguments, Galileo found correctly that bodies, falling from rest according to this law, fall all distances in equal times. What he failed to recognize in the last step is that this time is infinite, the result of an exponential dependence of distance on time. Instead, Galileo conflated it with the other motion that satisfies this ‘equal time’ property, instantaneous motion.
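In modern notation, the nub of the reconstruction can be stated briefly (a standard gloss offered only for orientation, not a claim about the paper's own analysis). The speed-distance law says

$$\frac{dx}{dt} = kx, \qquad k > 0,$$

so that $x(t) = x_0 e^{kt}$: distance depends exponentially on time. The time to fall from $x_1 > 0$ to $x_2 > x_1$ is $t = \tfrac{1}{k}\ln(x_2/x_1)$, which diverges as $x_1 \to 0$. Hence a body released strictly from rest takes an infinite time to traverse any finite distance, and it is only in this degenerate sense that all distances are fallen "in equal times".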

This talk is based on a draft of a paper co-authored with John D. Norton.

2010 April 23
To do with models, explanation and the valence bond
Julia Bursten

I explore the explanatory benefits of the modern valence bond models of chemical bonding and argue that philosophical analysis of different bonding models can inform both theoretical chemistry and contemporary theories of explanation. By highlighting particular explanatory virtues possessed by modern valence bond models in contrast with those possessed by molecular orbital models, an alternative and incompatible type of bonding model, I demonstrate how both types of models inform chemical research in differently useful ways. This demonstration leads to a discussion of the need for a plurality of models to explain bonding phenomena. During this discussion, I apply the conclusions of recent pluralist accounts of philosophical explanation, drawing on Sandra Mitchell’s and Kyle Stanford’s recent work.

2010 April 16
On Dirac’s Discovery of His Electron Equation
Tom Pashby

Existing accounts of the origins of the electron equation rely almost exclusively on Dirac’s later testimony and the published paper of 1928. However, there exists a document containing rough calculations that reveals the first stages of his recognition of the explicit form of the equation he sought. This manuscript was found among the archival material held by the FSU at Tallahassee, where Dirac spent his final years. The first page contains a statement of the general form of the relativistic linear wave equation, and twenty-two pages later Dirac finds an explicit representation of what would be known as the Dirac matrices.

This document provides an unprecedented opportunity to give a comprehensive account of part of the train of thought that led to arguably his most significant discovery. I propose to present the preliminary results of a detailed analysis of this manuscript at HQ3, adding to and challenging the existing accounts.

One remarkable aspect is the appearance of what is obviously projective geometry on a number of pages. Several authors have attempted to make a connection between Dirac’s q-number theory and his knowledge of projective geometry, but in the AHQP interviews conducted by Kuhn, where this connection is first made, Dirac denied any connection to his early work in non-commutative algebra. Instead Dirac consistently emphasized the value of projective geometry as a means for visualizing quantities in Minkowski space-time, which suggests that he may have found these techniques useful in his derivation of the electron equation and first encounters with Dirac spinors.

2010 April 16
17th Century Teleo-Mechanics: Muscle, Mathematics and Animal Locomotion
Peter Distelzweig

In this paper I examine the presence of “teleo-mechanics” in three 17th Century works on muscle anatomy and animal locomotion: (i) De musculi utilitatibus by Hieronymus Fabricius ab Acquapendente; (ii) a collection of unfinished notes on muscles by William Harvey (first published with translation by Gweneth Whitteridge in 1959 as De Motu Locali Animalium, 1627) and (iii) De motu animalium by Giovanni Alfonso Borelli. By “teleo-mechanics” I mean the integration of mathematical mechanics into teleological explanations of anatomical features of (in this case) muscles.
Though some scholarly attention has been given to each of these works (e.g., Baldini 1997; Des Chene, 2005; Jaynes 1970; Stannard 1970; Whitteridge 1959, 1979), it seems that the presence and nature of this teleo-mechanics in these works are underappreciated. Here I attempt to remedy this weakness, comparing the presence of this general mode of explanation across the three works, examining especially the conceptualization and justification of such explanations and their accompanying scope and character.
As a necessary prerequisite to the discussion, I begin by providing a (very) brief orientation to Galenic and Aristotelian teleological explanation in the medical-anatomical tradition, the preface to the Pseudo-Aristotelian Mechanical Questions, and the Early Modern notion of a subordinate or mixed science. These will be crucial for properly understanding this 17th century teleo-mechanics and appreciating its variations.
As an indicator of the interest of 17th century teleo-mechanics I will conclude by briefly considering interesting ways such modes of explanation differ in their commitments and implications from Descartes’ well-known “corpusculo-mechanical” accounts in human physiology.

2010 April 2
Comparing Probabilistic Measures of Explanatory Power
Jonah Schupbach

Recently, in attempting to account for explanatory reasoning in probabilistic terms, Bayesians have proposed several measures of the degree to which a hypothesis explains a given set of facts. These candidate measures of “explanatory power” are shown to have interesting normative interpretations and consequences. What has not yet been investigated, however, is whether any of these measures are also descriptive of people’s actual explanatory judgments. Here, I present my own experimental work investigating this question. I argue that one measure in particular is an accurate descriptor of explanatory judgments. Then, I discuss some interesting implications of this result for both the epistemology and the psychology of explanatory reasoning.
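For readers unfamiliar with the candidates, here is a small sketch computing two measures that figure in this literature on a made-up probability model (the toy numbers and the selection of measures are illustrative assumptions; nothing here reproduces the experiments or identifies the measure the talk singles out):

```python
# Toy illustration of candidate probabilistic measures of explanatory power.
import math

# A made-up joint distribution over hypothesis h and evidence e.
P_h_and_e       = 0.30   # P(h & e)
P_h_and_note    = 0.10   # P(h & ~e)
P_noth_and_e    = 0.15   # P(~h & e)
P_noth_and_note = 0.45   # P(~h & ~e)

P_e      = P_h_and_e + P_noth_and_e
P_h      = P_h_and_e + P_h_and_note
P_e_h    = P_h_and_e / P_h              # P(e | h)
P_h_e    = P_h_and_e / P_e              # P(h | e)
P_h_note = P_h_and_note / (1.0 - P_e)   # P(h | ~e)

# A likelihood-based candidate: log of the ratio P(e|h) / P(e).
power_log_likelihood = math.log(P_e_h / P_e)

# A normalized-difference candidate: (P(h|e) - P(h|~e)) / (P(h|e) + P(h|~e)).
power_normalized_diff = (P_h_e - P_h_note) / (P_h_e + P_h_note)

print(f"log[P(e|h)/P(e)] measure      = {power_log_likelihood:.3f}")
print(f"normalized-difference measure = {power_normalized_diff:.3f}")
```

Both measures come out positive here because e supports h; they differ in how they scale that support, which is exactly where normative and descriptive comparisons get traction.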

2010 February 26
Hobbes, Definitions, and Simple Conceptions
Marcus Adams

Several recent commentators argue that Thomas Hobbes’s account of the nature of science is conventionalist. Engaging in scientific practice on a conventionalist account is more a matter of making sure one connects one term to another properly than of checking one’s claims, e.g., by experiment. In this paper, I argue that the conventionalist interpretation of Hobbesian science accords neither with Hobbes’s theoretical account in De Corpore and Leviathan nor with Hobbes’s scientific practice in De Homine and elsewhere. Closely tied to the conventionalist interpretation is the deductivist interpretation, on which it is claimed that Hobbes believed sciences such as optics are deduced from geometry. I argue that Hobbesian science places simple conceptions as the foundation for geometry and the sciences in which we use geometry, which provides strong evidence against both the conventionalist and deductivist interpretations.



2010 February 19
Two Types of Typicality:
Rethinking the Role of Statistical Norms in Folk Causal Attributions
Justin Sytsma, Jonathan Livengood, and David Rose

Empirical work on the use of causal language by ordinary people indicates that their causal attributions tend to be sensitive not only to purely descriptive considerations, but also to broadly moral considerations. For example, ordinary causal attributions appear to be highly sensitive to whether a behavior is permissible or impermissible. Recently, however, a consensus view has emerged that situates the role of permissibility information within a broader framework: According to the consensus, ordinary causal attributions are sensitive to whether or not a behavior is generally out of the norm, where being out of the norm might indicate deviation from a prescriptive norm (a broadly moral consideration) or deviation from a statistical norm (a purely descriptive consideration). In contrast, we conjecture that ordinary causal attributions are more directly connected to broadly moral judgments about normative responsibility (the responsibility view). We present the results of a series of new experimental studies that are consistent with the responsibility view, while indicating that the consensus position is seriously mistaken.



2010 February 5
Folk Intuitions and Theories of Actual Causation:
A Hitch in Hitchcock’s Account
Jonathan Livengood, Justin Sytsma, and David Rose

Using structural equations and directed graphs, Christopher Hitchcock (2007a) proposes a theory specifying the circumstances in which counterfactual dependence of one event e on another event c is necessary and sufficient for c to count as an actual cause of e. In this paper, we point out that Hitchcock is committed to a folk intuition desideratum for theories of actual causation, and that this desideratum is not satisfied by ordinary causal attributions for some simple cases. We argue that the folk intuition desideratum should be rejected with respect to theories like Hitchcock’s, and we suggest that our results are both good news and bad news for metaphysicians.
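For orientation, the bare counterfactual-dependence test at the core of such accounts is easy to state in structural-equations form; the following sketch is a toy illustration only and implements just that bare test, not Hitchcock's full set of conditions:

```python
# Toy structural-equations model c -> d -> e: counterfactual dependence of e on c
# is checked by intervening on c and recomputing the downstream variables.
def solve(c):
    """Evaluate the structural equations given an (intervened) value of c."""
    d = c      # structural equation for d
    e = d      # structural equation for e
    return d, e

c_actual = 1
_, e_actual = solve(c_actual)
_, e_counterfactual = solve(1 - c_actual)   # intervention do(c = 0)

print("counterfactual dependence of e on c:", e_actual != e_counterfactual)
```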



2010 January 5
Obviousness and the Existence of Qualia 
Justin Sytsma

Why should we believe in the existence of qualia? Many answers can be given to this question, but perhaps the most common is the claim that qualia are obvious and that their existence is therefore beyond dispute. In this talk, I critically assess this claim. First, I articulate the standard understanding of qualia: Qualia are certain qualities of mental states, with the prototypical examples being redness and painfulness. Second, I show that for the claim that qualia are obvious to provide a compelling justification for the existence of qualia, the claim needs to be understood in a strong way: Qualia are phenomenologically obvious or obvious just in undergoing ordinary perceptual episodes. Finally, I argue that qualia are not obvious in this way. I note that if qualia are phenomenologically obvious, then they should be obvious to just about everybody in undergoing ordinary episodes of perception. I then show that this is not the case: Through a series of experimental studies I demonstrate that most people do not think that the relevant qualities are qualities of their mental states. I conclude that qualia are not phenomenologically obvious. This conclusion serves to redirect the philosophical debate to where it belongs: the philosophical arguments for the postulation of qualia.


2009 December 3 at 5:30 pm
Explanation in Cognitive Neuroscience
Catherine Stinson

I start from the premise that in cognitive science the opposition between neuro-reductionism and functionalism underlies much of the debate. I then look at the aims of cognitive neuroscience and note that it doesn’t seem like a sensible enterprise from the perspective of either reductionism or functionalism. The main point of the dissertation then is to figure out just what it is that people in cognitive neuroscience are trying to do, and to carve out some alternative position from which these aims make sense. Neuro-reductionism has already been roundly criticized, so I don’t spend much time bashing it further. Functionalism, despite also having been roundly criticized, seems to still hold a lot of sway, so I spend a bit more time arguing that it should be rejected.

I then look at Bechtel’s attempt to understand cognitive neuroscience, which claims that the field combines functionalism with identity theory, and point out a number of problems with his account. At least the beginnings of some understanding of the field can be gleaned from it, however. Then I look at criticisms of two of the main methodologies used in the field — fMRI and computational models — and point out that these criticisms boil down to complaints that if you assume functionalism is true, then the methods seem useless. It remains unclear though, in both cases, what these methods are supposed to explain and how they go about doing it. Finally I argue that mechanistic explanation is a position from which the aims of cognitive neuroscience do make sense, and begin to unravel how fMRI and computational models can fruitfully be used as methods for discovering and elucidating mechanisms.



2009 November 18 at 5:00 pm
17th Century Teleo-Mechanics in Anatomy:
Muscle, Mathematics and Animal Locomotion
Peter Distelzweig

In this presentation I examine the presence of teleo-mechanics in three 17th Century works on muscle anatomy and animal locomotion:

De Musculi Utilitatibus (On the Usefulness of Muscle) by Hieronymus Fabricius ab Aquapendente (before 1619)

A collection of unfinished notes on muscles by William Harvey (c. 1627)

De Motu Animalium (On the Motion of Animals) by Giovanni Alfonso Borelli (1680)

By “teleo-mechanics” I mean the integration of mathematical mechanics in teleological explanations of anatomical features of (in this case) muscles. I will compare the presence of this mode of explanation across these three works examining (1) the conceptualization and justification of such explanations, (2) the kinds of features thus to be explained, and (3) the precise character of the explanations actually offered; I will focus on Fabricius, comparing Harvey and (more briefly) Borelli to him.

As a necessary prerequisite to the discussion, I begin by providing a brief orientation to Galenic and Aristotelian teleological explanation in the medical-anatomical tradition, the preface to the Pseudo-Aristotelian Mechanical Questions, and the early modern notion of a subalternate science. These will be crucial for properly understanding this 17th century teleo-mechanics and appreciating its variations. In fact, I will suggest, an under-appreciation of this background–especially of the role of teleology in anatomy–has led the small literature on this field to miss the “teleo” in teleo-mechanics altogether (with unhappy results).

As an indicator of the interest of 17th century teleo-mechanics I will conclude by considering interesting ways such modes of explanation differ from Descartes’ well-known corpusculo-mechanical accounts in human physiology.



2009 November 6 at 5:30 pm
Health Need
Jason Byron

Norman Daniels has developed an influential account of the just distribution of health care services and of what constitutes health needs. On that account, health needs are just those things we need to maintain or restore health or, where possible, to provide functional equivalents for it. Health is the absence of pathology, which Daniels defines as any deviation from the functional organization of a typical member of the species or relevant species subgroup. Harmful pathologies are just those deviations in normal species functioning that restrict affected persons from a fair share of the normal range of opportunity open to them. One’s fair share of the normal range consists of the life plans one may reasonably choose, given one’s talents and skills (and correcting for unjust social arrangements that disadvantage some in developing their skills and talents). When one’s fair share of the normal range is restricted by harmful pathology, justice demands that society make certain efforts to restore it via access to health care services. This is because failing to restore that share of the normal range deprives individuals of their capacity to function as free and equal citizens—that is, citizens able to change their conception of the good over time and able to take advantage of their basic rights and liberties. In this way, meeting health needs is a fundamental requirement of justice.

In this talk, I criticize Daniels’s account on two grounds. First, I argue that pathologies are not in fact objectively ascribable in the sense Daniels claims. Second, I argue that objective ascribability in Daniels’s sense is not required for pathology to be objectively important. (I will not directly consider whether welfarist accounts of health, according to which health needs arise solely from preferences, could satisfactorily answer the question of equity. It will suffice for my purposes that Daniels’s non-welfarist account provides grounds on which reasoned consensus is plausible.)



2009 October 30 at 5:00 pm
Julia Bursten

I am applying for a grant to do work in philosophy of chemistry, focusing on comparing molecular orbital, modern valence, and density functional models of molecular bonding. The aim of the comparison is to see what ontological and pedagogical prejudices are contained in each model, and to argue which of these prejudices should be elevated and which suppressed in the field. In other words, first I want to articulate what each model reveals about the nature of the chemical bond itself and about how bonding is discussed in chemical teaching and research. Then I want to make arguments for preferring aspects of some models over others in these aims. I plan to ground the ontological argument in the contemporary realism-antirealism literature and the pedagogical argument in the ontological argument, coupled with a pragmatic view of the aims of explanation in the sciences. This research is in an early stage, and so I apologize for not being more specific in its aims. As far as today, I think I’ll probably tell the element 118 fraud story and wrap it up into worries about approximation techniques and generally try to articulate what the hell I want to do with my life.



2009 October 23 at 5:00 pm
Time Reversal and the Symmetry of Nothing
Bryan Roberts

I show how the antiunitarity of the time reversal operator in quantum mechanics can be determined from symmetry considerations. In particular, I show that antiunitarity follows if, in the absence of forces and interactions, the covariance group of the laws of physics is the same as the symmetry group of empty spacetime. The usual time-reversal properties of position and momentum follow. And on this approach, it follows that ordinary quantum mechanics is indeed time reversal invariant, contrary to recent remarks by David Albert.
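By way of background, the textbook route to the same conclusion runs through the canonical commutator rather than spacetime symmetries (it is included here only for orientation and is not the argument of the talk). If time reversal $T$ fixes position, reverses momentum, and preserves the canonical commutation relation, then

$$T\,[X,P]\,T^{-1} = [\,T X T^{-1},\, T P T^{-1}\,] = [X, -P] = -i\hbar,$$

while on the other hand $T\,[X,P]\,T^{-1} = T\,(i\hbar)\,T^{-1}$. Hence $T\,i\,T^{-1} = -i$: $T$ must be antilinear, and by Wigner's theorem an antilinear symmetry is antiunitary.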



2009 October 16 at 5:00 pm
Nature’s New Clothes: on the Existence of Laws
Balazs Gyenis

My aim is to discuss the feasibility of Humean accounts of laws of nature, according to which laws merely supervene on more fundamental non-nomic facts. I’m going to focus on the following question (which, as I shall argue, these accounts need to successfully address): is it true that it is possible to obtain a law-based description of the fundamental non-nomic facts whatever the distribution of these facts might turn out to be? I’m going to present several results – both positive and negative – which address the various natural ways in which this question can be precisely formulated, complete with a discussion of the notion of a “law-based description”. Some of these results might strike one as surprising. In the end I’m going to argue that, from a Humean perspective, some of the findings are directly relevant to the question whether mathematical models featuring initial value indeterminism, such as Norton’s Dome, represent physically possible systems. I’m going to develop some formalism, but the talk is not math-heavy and the intended philosophical points are accessible even on the basis of a crude understanding of the technical details.



2009 October 9 at 5:00 pm
Producing Robust Data with a Single Technique

Greg Gandenberger

Scientists use techniques that produce “raw data” requiring substantial interpretation. In many cases, it is impossible to discover or test methods of interpreting that raw data by direct observation. In those cases, it is natural to assume that the justification for a particular method of interpretation must come from a theory of the process that produces the raw data. Contrary to this view, scientists have many strategies for validating a method of raw-data interpretation. Thus, it is possible to produce a robust body of data with a single technique. I illustrate and support these claims with a case study of the introduction of the cathode-ray oscillograph into electrophysiology.



2009 September 18 at 5:30 pm
A Poor Relation: A Critical Look at Relational Quantum Mechanics

Tom Pashby

In this paper I argue i) that the strongly relational position Rovelli adopts in Relational Quantum Mechanics (1996) is justified neither by the analogy with special relativity nor by the central observation about quantum mechanics on which it is based, and ii) that Rovelli fails to correctly apply the technical concept of Shannon information, so that his use of ‘information’ should be understood in terms of the epistemic concept, which is generally understood to be factive. This leads to a dilemma articulated by Chris Timpson concerning what the information contained in the wavefunction is about, and I argue that the answer offered by Rovelli is essentially instrumentalist. I conclude with some remarks about possible modifications of the interpretation to accommodate these complaints.



2009 April 17 at 5:00 pm
Structural Group Realism

Bryan Roberts

There is a precise form of structural realism that identifies ‘structure’ in quantum theory with symmetry groups. However, working out the details actually illuminates a major problem for structural realism. This paper argues that, once a precise characterization of structure is given, the ‘metaphysical hierarchy’ on which structural realism rests is difficult to make sense of, if not completely incoherent.



2009 March 20 at 5:30 pm
The Justification of the Probability Measure (in Statistical Mechanics)

Elay Shech

This talk has to do with the justification of the probability measure chosen to describe the possible microstates, or unobservable aspects, of a system. The question I’m concerned with is: what could justify (i.e., what grounds might one have for) a belief that some probability distribution correctly describes the microstate of a system?

I’ll begin by taking up a specific argument, due to Davey (2008), which claims that beliefs about the correct probability measure describing the microstate of a system are not justified. More specifically, Kevin Davey claims that the justification of the second law of thermodynamics as it is conveyed by the ‘standard story’ of statistical mechanics (roughly speaking, that low entropy microstates tend to evolve to high entropy microstates) is “unhelpful at best and wrong at worst.” I will show that Davey’s argument for rejecting the standard story commits him to a form of inductive skepticism that is more radical than the position he claims to be stating (at best) and that he places unreasonable demands on the notion of justification in the physical sciences (at worst).
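For concreteness, the measure at issue in the ‘standard story’ is usually the normalized Liouville (microcanonical) measure over the region of phase space compatible with the system's macroscopic constraints; schematically (a standard gloss included for orientation, not a summary of the talk's own formulation):

$$P(A) \;=\; \frac{\lambda\!\left(A \cap \Gamma_{E}\right)}{\lambda\!\left(\Gamma_{E}\right)},$$

where $\Gamma_E$ is the set of microstates with energy in a thin shell $[E, E+\delta E]$ and $\lambda$ is Liouville measure. The standard story then reads "low entropy microstates tend to evolve to high entropy microstates" as a claim about the overwhelming $P$-measure of such trajectories, and the question is what, if anything, justifies taking $P$ to describe the system's actual microstate.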

If there is time left I hope to discuss various directions for further research concerning a positive claim – the grounds for justifying the probability measure chosen to describe the microstate of a system. This talk will not be technical.



2009 March 6 at 4:30 pm
Newton’s Empiricism and the Changing Metaphysics of Void Space

Zvi Biener

In definitions written for possible inclusion in the third edition of Book III of the Principia, Newton defined both “body” and “vacuum” in terms of resistance: body is that which gives resistance, vacuum is the place in which body can move without resistance. Curiously, Newton was vehement that these definitions were not the only possible definitions of body and vacuum, but merely the ones with which he was concerned in the Principia. About “other sorts of bodies and another sort of void”, he wrote, “let authors in other sciences dispute”. This admission is stunning. Newton had struggled throughout his career to precisely define ‘body’ and ‘void’. But while he had often relativized his concept of body to the project of the Principia—other physical theories may hypothesize other sorts of bodies—until these draft definitions he had never done so for the concept of void. In fact, in the earlier anti-Cartesian De Gravitatione, he even portrayed his account of void space as the only metaphysically possible one!

I argue that these definitions—as well as a small change in the scholium on space and time—betray a subtle change in Newton’s metaphysics of void space. In particular, I argue that Newton began to question a claim he had initially articulated in De Gravitatione; namely, that void space was lacking all agency and thus categorically distinct from substance. This change was caused by a correlative change in the epistemic status of void space vis-a-vis Newtonian mechanics.

These claims are supported by two threads of argumentation. First, I show that Newton’s conception of space in both De Gravitatione and the scholium on space and time was supported by an argument concerning the geometrical structure of space as well as an argument concerning space’s lack of agency. Although the two arguments are intertwined, they are methodologically independent. While the first concerns the conceptual necessity of absolute space’s geometry for physical theory, the second concerns empirical evidence regarding the vacuity of the celestial spaces. Because of this methodological independence, when in the 1710s Newton came to doubt the validity of his arguments concerning the vacuity of space (expressed in revisions to Prop. 6 of Book III of the Principia), he could question whether space was necessarily inert without throwing into doubt space’s geometrical structure. Space’s lack of agency, however, was also used in De Gravitatione to support the ontological necessity of space. It was on the basis of this inertness and the claim that substance is “an entity that can act” that Newton argued for space’s being “neither substance nor accident” and so a necessary, divine emanation. Thus, when he began to doubt the vacuity of the celestial spaces, he was also committed to rethinking his position regarding space as an emanative effect.

The second thread of argumentation concerns Newton’s preferred method of reasoning in natural philosophy. Newton often claimed that his method is that “of the geometers”. On this method, terms are used only in accordance with their precise definitions. Their vulgar use, if it exists, is ignored. However, in De Gravitatione and the scholium Newton avoided using this method in his treatment of space. In fact, in the first two editions of the Principia he explicitly held that since “time, space, place, and motion” are “very well known to all”, he would not define them. Rather, in order to treat space Newton followed a common dialectical strategy: he took a familiar (if not precise) concept and showed through a series of arguments what could and could not be properly said of it. For Newton, the concept of space was thus the same as that of his predecessors, but cleansed of their errors and misconceptions. Yet Newton’s treatment of space changed in the 1710s. During this period, because of his increasing doubts regarding the nature of void space, Newton came to believe that space itself must be subjected to “the method of the geometers”. On Newton’s understanding of this method, foundational theoretical terms (like “space”) are not defined a priori, but are defined a posteriori through the machinery of physical theory. This relativizes their application to the scope of the physical theory in question. Because of this understanding of the “manner of the geometers”, Newton’s concept of space ceased to have a general, metaphysical application (as it did in De Gravitatione) and became relativized to the framework of the Principia.



2009 February 27 at 5:00 pm
Closed Timelike Curves

Bryan Roberts

We explore the possibility that closed timelike curves might be allowed by the laws of physics. A closed timelike curve is perhaps the closest thing to time travel that general relativity allows. We will begin by reviewing just what closed timelike curves are, and in what kinds of contexts they were first shown to appear. We then explore how one might actually travel on a closed timelike curve, and discuss two recent no-go results which suggest that this endeavor is impossible.



2009 February 27 at 5:30 pm
How Math Condemned Aristoxenian Harmonics
and Why it Shouldn’t Have

Julia Bursten

Aristoxenus of Tarentum (ca. 4th century BCE), a student of Aristotle, proposed a unique way of characterizing harmonic intervals. His method, unlike those of his contemporaries, relied heavily on the testing of consonance and dissonance by the ear. This school of harmonic thought received much criticism during the millennium following Aristoxenus’ life, and the first defenses of his harmonics do not appear in Western music theory literature until the 16th century CE. In this paper, I outline the content of Aristoxenus’ method of distributing harmonic intervals over an octave by reconstructing the arguments given in Aristoxenus’ only surviving musical text, the Elementa Harmonica. I contrast this method with that of Pythagoras, and I follow later literature on music theory that holds up Pythagoras’ method while putting down Aristoxenus’. I introduce the first defenses of Aristoxenus’ system, which appear during the middle of the Italian Renaissance and originate in the musical thought of a group known as the Florentine Camerata. Finally, I offer a new way of describing the fundamental distinctions between the Pythagorean and Aristoxenian schools of harmonic science that is implicit but unacknowledged in both Aristoxenus’ own writings and in the criticisms he has received.
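The arithmetic behind the traditional criticism is worth stating (standard music-theoretic background, not a summary of the paper's own reconstruction). On the Pythagorean approach intervals are whole-number ratios: the octave is 2:1, the fifth 3:2, the fourth 4:3, and the tone their difference, 9:8. But six such tones overshoot the octave,

$$\left(\tfrac{9}{8}\right)^{6} \;=\; \tfrac{531441}{262144} \;\approx\; 2.027 \;>\; 2,$$

and the tone cannot be split into two equal whole-number ratios, since $\sqrt{9/8}$ is irrational. Aristoxenus instead treats intervals as distances to be divided equally by ear, taking the fourth as two and a half tones and the octave as six, which is precisely the move the ratio-based tradition condemned as mathematically incoherent.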



2009 February 20 at 5:00 pm
Actual Causation in Simple Voting Scenarios

Jonathan Livengood

One might want a theory of causation for a variety of reasons. For example, one might want to make predictions about interventions (policy predictions). Or one might want an account of explanation, either in ordinary language or in science. These purposes are best-served by type or generic theories of causation, like those presented in Pearl (2000) and Spirtes et al. (2000). Recently, the basic ideas set out by these authors have been used by Woodward (2003), Halpern and Pearl (2005), Glymour and Wimberly (2007), and Glymour et al. (ms) to produce accounts of actual causation.

Whereas generic causation is about what causes what in general, actual causation is about what caused what in a specific circumstance. In general, smoking causes lung cancer. Still, one might wonder whether the Marlboro Man’s smoking caused his lung cancer. Actual causation is central both to law and to history. Determining the actual cause (cause in fact, material cause, or conditio sine qua non) of damages is central in the practice of tort law (see Hart and Honore 2002). More generally, questions of moral responsibility depend on facts about actual causation. In historical research, explaining why specific historical events (e.g., the Boshin War, the Boston Tea Party, or the coronation of Napoleon) occurred, determining the relative contributions of various actors to history (e.g., Caesar, Galileo, or Tesla), or deciding the truth-value of historical counterfactuals (e.g., would Japan have surrendered had the United States not dropped atomic bombs on Hiroshima and Nagasaki) are all problems that turn on facts of actual causation.

The present paper considers some odd causal judgments about simple voting scenarios that follow from the theories proposed in Woodward (2003) and Halpern and Pearl (2005). I note three factual impediments to the generality of these models, which arise in the cases considered. I consider possible experiments to better characterize ordinary judgments about actual causation in simple voting scenarios.
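To fix ideas, here is a toy version of the kind of scenario at issue (an illustrative sketch in structural-equations form; it is not taken from the paper and does not implement Woodward's or Halpern and Pearl's full definitions): in a unanimous 3-0 majority vote, no single vote makes a counterfactual difference to the outcome, which is one source of the odd verdicts simple counterfactual tests deliver.

```python
# Toy voting scenario as structural equations: outcome = majority of three votes.
# Simple counterfactual dependence fails for every individual vote when the
# margin exceeds one, illustrating (not implementing) the kind of case at issue.

def outcome(votes):
    """Majority rule for binary votes."""
    return 1 if sum(votes) >= 2 else 0

def counterfactually_depends(votes, i):
    """Does the outcome change if voter i's vote is flipped, others held fixed?"""
    flipped = list(votes)
    flipped[i] = 1 - flipped[i]
    return outcome(votes) != outcome(flipped)

for votes in [(1, 1, 0), (1, 1, 1)]:
    deps = [counterfactually_depends(votes, i) for i in range(3)]
    print(f"votes={votes}, outcome={outcome(votes)}, depends on each vote: {deps}")
# (1, 1, 0): each 'yes' vote makes a difference to the outcome.
# (1, 1, 1): no single vote makes a difference, yet each seems causally relevant.
```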



2009 February 13 at 4:30 pm
Relational Quantum Theory

Tom Pashby

Carlo Rovelli’s Relational Quantum Mechanics is a bold attempt to resolve the measurement problem by denying that there exist objective quantum states – instead, physical quantities only take values relative to a particular observer. Rovelli claims Einstein’s Special Theory of Relativity as his inspiration and model. I examine and compare his motivation and method with those of ‘On the Electrodynamics of Moving Bodies’ and note several key disanalogies. The body of the talk is an attempt to understand the position he espouses in the original 1996 paper in comparison with the recent 2007 ‘Relational EPR’ and in the context of his original motivation. In particular, I am concerned by his use(s) of the term ‘information’ and his treatment of the consistency requirement in EPR.



2009 January 9 at 5:30 pm
How to Solve the Regress of Justification Problem:
Justification as a Three-valued Variable

Peter Gildenhuys

I argue that there are three, rather than just two, justificatory statuses: justified, unjustified, and gap. I present a full inferential model of justification, providing explicit criteria for the deployment of each justificatory status along with explicit implications for each status. A trivalent model of justification accurately captures actual justificatory practice and is not subject to the same objections as are models that deploy only two justificatory statuses.

2008 November 21 at 6:30 pm
A New Perspective Concerning Experiments on Semantic Intuitions

Justin Sytsma and Jonathan Livengood

In two fascinating articles, Machery, Mallon, Nichols, and Stich (2004, forthcoming) use experimental methods to raise a specter of doubt about reliance on intuitions in developing theories of reference which are then deployed in philosophical arguments outside the philosophy of language. Machery et al. ran a cross-cultural survey asking Western and East Asian participants about a famous case from the philosophical literature on reference (Kripke’s Gödel example). They interpret their results as indicating that there is significant variation in participants’ intuitions about semantic reference for that case. We argue that this interpretation is mistaken. We detail a type of ambiguity found in Machery et al.’s probe but not yet noted in the response literature. We argue that this epistemic ambiguity could have affected their results. We do not stop there, however: Rather than rest content with a possibility claim, we ran four studies to test the impact of this ambiguity on participants’ responses. We found that this accounts for much of the variation in Machery et al.’s original experiment. We conclude that in the light of our new data, their argument is no longer convincing.



2008 October 31 at 5:00 pm
Group Selection Does Not Explain Anything

Peter Gildenhuys

Group selection is often invoked to explain the persistence of altruism in natural populations. In this talk, I show how to explain the persistence of altruism in natural populations without invoking group selection. I then show how putative explanations of the persistence of altruism that invoke group selection fail to specify under what conditions the core mathematical model used to explain altruism can be deployed. Since the explanation of altruism in nature depends crucially on mathematical models that demonstrate its persistence, “group selection” cannot explain the persistence of altruism in natural populations.
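As formal background for readers (orientation only; the talk's target model may differ), the usual device for stating group-selection claims is the multilevel Price equation, which splits evolutionary change into a between-group and a within-group component:

$$\bar{w}\,\Delta\bar{z} \;=\; \mathrm{Cov}(w_k,\, z_k) \;+\; \mathrm{E}\!\left[w_k\,\Delta z_k\right],$$

where $z_k$ and $w_k$ are the average level of altruism and the average fitness of group $k$. Group-selection explanations read the persistence of altruism off the covariance (between-group) term; the argument here is that such readings leave unspecified the conditions under which the underlying model may be deployed.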



2008 October 24 at 6:00 pm
Harvey’s De generatione animalium and the New Science

Benny Goldberg

In this presentation I shall discuss the woefully neglected last work of William Harvey, De generatione animalium (1651). My goal is twofold. First I describe and explain Harvey’s argument for epigenesis, the theory that an embryo develops over time, part-by-part. This theory is, as it turns out, the correct one, but until the late 18th century it had almost no proponents besides Harvey and Aristotle. This brings us to the next goal for the presentation, arguing that Harvey’s Aristotelianism provided a better home for his biological research than the burgeoning philosophy of mechanism and corpuscularianism. The main reason for this is that, given the metaphysical pictures and explanatory schemes of mechanism and corpuscularianism, epigenesis turned out to be a wholly inexplicable and mysterious phenomenon, and thus it was rejected in favor of the doctrine of preformation. I will conclude with some thoughts about Harvey, mechanism, and early modern life sciences.



2008 October 24 at 5:00 pm
De Artificio Mechanico Musculorum:
 The Mechanical Problems in William Harvey’s De motu locali animalium
Peter Distelzweig

Most scholarly attention given to William Harvey has centered on the De Motu Cordis of 1628 and its reception. This focus is not without reason, as it is to this work that his near contemporaries gave most attention and (eventually) praise. Disproportionately little attention has been given to his other, much more massive publication on the generation of animals; and even less still has been given to his various unpublished notes. Among these latter is a collection of notes on the local motion of animals and the anatomy of the motive organs, particularly muscle; it was edited and translated by Gweneth Whitteridge and published as De Motu Locali Animalium, 1627 (MLA) in 1959. Though generally overlooked in the limited literature on these notes, Harvey displays here a significant preoccupation with mechanics and the role of mechanical reasoning in anatomy. In these notes we find a number of more (and sometimes less) obvious allusions to the pseudo-Aristotelian Mechanical Problems and a clear sketch of the nature of mechanics and its place in the study of the anatomy of muscles. In this paper I examine this feature of MLA, focusing particularly on how Harvey conceptualizes the place of mechanics in anatomical explanation. Besides its intrinsic interest, this work sheds some light on the complicated interaction between Harvey’s Aristotelian project in anatomy and the developing mechanical philosophy by pointing to an underappreciated element of the situation: Harvey’s familiarity with the sixteenth and seventeenth century tradition of pseudo-Aristotelian mechanics.



2008 October 17 at 6:00 pm
John Dalton: From Puzzles to Chemistry by Way of Meteorology
Karen Zwier

Historical research on John Dalton has been dominated by an attempt to reconstruct the origins of his so-called “chemical atomic theory”. This enterprise has encountered much difficulty, often blamed on the poor condition, uncertain chronology, and contradictions of extant manuscripts. Although difficulties with the Dalton manuscripts are a serious problem for historical research, I argue that there is a much greater problem in the existing literature: the misguided assumption that Dalton’s research was guided by the pursuit of the chemical atomic theory. In this paper, I cast doubt on methods which attempt to summarize the atomic theory in a concise way. I also criticize any approach which views Dalton’s atomic theory as the pinnacle of his career toward which all his prior research was supposedly directed. I propose a different approach that makes use of the manuscripts in a more productive way, and results in a better picture of his life and research. I show that Dalton’s chemical work grew out of two previous interests that he maintained throughout his entire life: puzzle solving and meteorology. A childhood fascination with solving intellectual puzzles shaped his approach when he took up an interest in meteorology in his twenties; Dalton then began to see the constitution of the atmosphere as a “puzzle”. In working on this great puzzle, he gradually turned his interest to specifically chemical questions. In the end, the puzzles that he worked on required him to create the novel philosophy of chemistry for which he is known today.



2008 October 17 at 5:00 pm
John Burdon Sanderson Haldane’s Intellectual Heritage
Thomas Cunningham

John Burdon Sanderson (J. B. S.) Haldane is a scientist celebrated for his accomplishments in evolutionary theory and for his popular scientific writings. A recent historical thesis asserts that recognizing the significant changes in Haldane’s philosophical thinking over his lifetime leads to an increased appreciation of his scientific and political publications. In this presentation I relate and assess these contentions. I argue that we cannot appreciate the dynamics of Haldane’s intellectual commitments without a richer appreciation of his writings and the formative context of his philosophical outlook. Gaining this assessment requires rethinking the philosophical positions of his father, John Scott (J. S.) Haldane, as well as J.B.S.’ assessment of them. After reconsidering many of J. B. S.’ public statements concerning his philosophical beliefs as well as his relationship with his father’s views, I conclude that J.B.S.’ views exhibit far more continuity than has been claimed by others, and that he never underwent any fundamental changes in his philosophical outlook. Furthermore, I complicate the position that Haldane exhibited a commitment to the metaphysics of Marxism. Instead, I argue that he was an agnostic monist, explain what this means, and show how it affects our understanding of this celebrated biologist whose achievements were influential components of the modern evolutionary synthesis.



2008 September 9 at 5:00 pm
M.S. Tswett and the Experimenters’ Regress
Jonathan Livengood

Mikhail Tswett (1872-1919) was a Russian botanist. His interest in chlorophyll led him to develop chromatographic adsorption analysis, a separation technique in physical chemistry, in the years 1902-1903. In subsequent years, Tswett applied his technique to analyses of chlorophyll, but the wider scientific community did not accept his technique as reliable until the 1930s. In defending my explanation of the rejection of chromatography in an earlier talk, I dealt with the objection, posed by Glymour and Gyenis, that Tswett’s technique was rejected because he did not carry out a calibration experiment showing that chromatographic analysis produced similar results to entrenched techniques on samples of known composition. Granting that Tswett did not carry out the experiment described by Glymour and Gyenis, I argued that Tswett did calibrate his technique and did so in a way that better fit his problem.

This talk expands on my earlier story by connecting Tswett’s practice with the so-called Experimenters’ Regress. I argue that one can explain Tswett’s neglect to carry out Glymour and Gyenis’ calibration experiment (henceforth GG-calibration) by considering how the Experimenters’ Regress might be applicable to the problem in chlorophyll chemistry Tswett was facing. I argue that even if Tswett had carried out the suggested calibration experiment, it would not have defeated the only relevant objection to his technique as applied to chlorophyll. I go on to show how Tswett overcame this difficulty by carrying out a different kind of calibration experiment. I then argue that Collins’ account of the Experimenters’ Regress is incorrect insofar as it leaves out some background suppositions of experimental work, including basic ideas about causation. On my account, the Experimenters’ Regress turns out to have a legitimate role to play in scientific inquiry, but it does not have the sort of skeptical consequences envisioned by Collins.



2008 September 9 at 5:30 pm
Van Fraassen, Representation and Experimentation
Thomas Pashby

I examine van Fraassen’s dialectical argument against the structural realist in his 2006 paper (Representation: The problem for Structuralism), and conclude that there is a move available that he does not consider, namely that the realist can retreat to the position that, while no abstract representation can be distinguished as true simpliciter, this does not entail that we are unable to choose between them. This is the position taken by Don Ross’ ‘Rainforest Realism’. I explore this approach, and conclude that it is only through an appeal to ‘real’ foundational modal structure that it may be rescued from charges of instrumentalism (as Ross and Ladyman propose).



2008 April 22 at 5pm
Reducing Cognitive Control to Chunks of Cortex

Catherine Stinson

Attention is typically divided into two types that roughly correspond to ‘bottom-up’ and ‘top-down’ processing. Although attention is a heavily researched topic within psychology and neuroscience, progress has only recently begun to be made on the top-down variety, called ‘executive attention’ or ‘cognitive control’. This variety of attention is thought to be the capacity that manages the goal-directed planning of complex action sequences, and that directs the allotment of processing resources during introspection. Much of the research in this field points towards these capacities being implemented by particular regions of frontal cortex, including anterior cingulate cortex. Although most of the conclusions being drawn in this area of research are modest–that this area is “involved in” executive attention, for example–some of the labs working in this area are professing to have located the “source” of attentional control. By this they seem to mean that they have reduced attentional control to the activities of a particular chunk of brain (or at least that they have taken an important first step in localizing the chunk of brain, even if it remains to work out the mechanisms functioning there). In this paper I examine these claims and the evidence being put forward for them. It is not immediately clear what notion of reduction is being invoked in these claims, nor what sort of explanation is being offered. To answer these questions, I look to various papers on reduction and mechanisms by Schaffner, Machamer, Darden, and Craver. I argue that the attempt at reduction exemplified in recent research on executive attention relies on several problematic assumptions and confusions. I will also discuss broader questions about the status of explanations in psychology, and what sorts of links can and should be made between these and neurophysiological theories.



2008 April 18 at 5pm
Maxwell demons and the epistemology of thought experiments

Rawad Skaff

How can we gain knowledge about nature just by thinking about an imaginary, hypothetical, or counterfactual situation or scenario? This is the main puzzle concerning the epistemic power of thought experiments (TE).

I will start by analyzing the different TEs known as “Maxwell’s demon” as well as their purposes. My first aim is to test a criterion required by most philosophers in the literature on TEs: that the scenario of the TE be “possible”. Generally, the possibility criterion is required in order to justify the use of TEs and to sort successful TEs from unsuccessful ones. In other words, it is claimed that an impossible scenario leads to an unsuccessful TE, since it is hard to see how we can obtain knowledge from it. I argue that the possibility of the scenario fails as a criterion for success. My conclusion is based on the claim that Maxwell’s original demon is a good TE. This brings me to my second aim, which is to test several approaches to TEs in science, in particular those that claim that TEs are logical arguments or limiting cases of ‘real’ experiments. If my claim concerning Maxwell’s original demon is correct, then TEs can be neither logical arguments nor limiting cases of experiments.



2008 April 4 at 4pm
The Causal Markov Condition: Should You Choose to Accept It?

Karen Zwier

The Causal Markov Condition (CMC) is an axiom specifying a relationship between a causal graph and the probability distribution over its vertices. In recent years, there has been much debate (particularly in BJPS) over this axiom. In this talk, I classify the criticisms against the CMC into two groups. The first type of criticism is metaphysical: how do we know that causal relationships, in reality, always exhibit the precise statistical relationships specified by the CMC? Could there not be situations, for example, in which two distinct effects of the same cause are correlated, even when all common causes are given? The second type of criticism involves the application of the CMC: Why should we continue to use the axiom when there are many known “counterexamples”?

In regard to the first type of criticism, I will not attempt to argue on a metaphysical basis that causal processes must be independent. I will argue, however, that use of the CMC is reasonable given the aims of science, and I will give several arguments to motivate its use. In regard to the second type of criticism, I will grant that the CMC is not applicable in every case. I will, however, show that many cases that have been cited as 'counterexamples' to the CMC are actually instances of misapplication or misunderstanding. Furthermore, I will argue that the strength of the CMC lies precisely in our ability to discern the appropriate situations for its use.
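For reference, the condition at issue can be stated in its standard graphical form (a textbook formulation, not specific to this talk): a directed acyclic graph $G$ over a variable set $V$ and a probability distribution $P$ on $V$ jointly satisfy the CMC just in case every variable is probabilistically independent of its non-descendants in $G$, conditional on its parents in $G$:

$$X \;\perp\!\!\!\perp\; \mathrm{ND}(X)\setminus \mathrm{PA}(X) \;\big|\; \mathrm{PA}(X) \qquad \text{for every } X \in V.$$

The common-cause worry mentioned above is then the worry that real systems may violate one of these independence claims.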



2008 March 21 at 5pm
Types of Domains of Knowledge

Selja Seppälä

The research project I am presenting is part of my doctoral dissertation on the conceptual characteristics and structure of terminological definitions. Terminology is both an applied activity and a theoretical discipline. As an applied activity, it consists in writing mono- or multilingual dictionaries for specialized domains (sciences, activities, practices, etc.). Definition writing thus plays an important role in conveying the knowledge (the concepts) of the described fields and their overall organization (the underlying conceptual system). These definitions are written by synthesizing a number of defining or knowledge-rich contexts extracted from domain-specific texts (Meyer 2001). Though subject to a small number of formal restrictions (such as being limited to a single sentence), their writing does not follow precise rules, at least at the content level, leaving this activity to the intuition and experience of the terminologist. Open questions for work in the field of terminology, as a theoretical discipline, are therefore the following: what makes one context more relevant than another for defining a certain concept? Moreover, since the definition is rather limited in space, which characteristics of the concept are to be included in the definition?

One of my working hypotheses is that the conceptual structure of a definition is at least partly dependent on the type of domain to which it is attached. Before it can be tested, however, this hypothesis needs further clarification: what are types of domains of knowledge? My main objective is, therefore, to identify what characteristic features could be used to define different types of domains. I will first examine what is meant by 'domain' in terminology. I will then give a brief overview of some attempts to define types of domains and examine other possible criteria that could serve that purpose. I will finally delineate possible bottom-up methods for empirically defining types of domains.



2008 February 8 at 4pm
Status Bias and the Matthew Effect in Philosophy

Jason Byron

The "Matthew Effect" was introduced by Robert Merton to describe the disproportionate credit high-status scientists get for their work, at the expense of low-status scientists, for work of the same or similar quality. The effect can be generated by biased attention (if the scientific community is not aware of the work of low-status scientists and so accords it little credit) or by biased assessments, in favor of high-status scientists, of work already known to the community. I present data from top philosophy journals that show a very strong negative correlation between blind review policies and the percentage of high-status authors published (r = -0.90, P = 0.001). This result suggests that philosophers assess the quality of research papers at least in part according to author status, even when carefully attending to the content of the papers, as for example during peer review. I argue that this form of status bias, even if appropriate in science, is generally inappropriate in philosophy, given general differences in the epistemic communities of scientists and philosophers.

2007 October 19th at 5pm
Categorizing Causal Influences

Peter Gildenhuys

In this WIP talk, I will discuss how to categorize causal influences on populations over which selection theory is being deployed in a manner that promises to show how causal descriptions can be linked with mathematical models.



2007 October 5th at 5pm
Peter Distelzweig

In a number of places in the corpus Aristotle discusses the already developed disciplines that have come to be called the mixed or middle sciences: astronomy, harmonics, optics and mechanics. In Physics II.2 Aristotle calls them “the more natural branches of mathematics”. They are mathematical disciplines which treat, however, a particular domain of the natural world. In this they differ from universal mathematics, and even geometry and arithmetic, which are not so connected to a particular natural domain. Aristotle finds these mathematical sciences of particular interest for a number of reasons. Their character, thinks Aristotle, provides strong evidence for his account of the nature of mathematics, particularly in contrast to that of the Platonist. These sciences are also of interest to Aristotle because they straddle natural and mathematical science. As such, the natural mathematical sciences allow for exceptions to certain restrictions on demonstrative science and introduce a disciplinary divide between knowledge of the facts and phenomena exhibited in their natural domain on the one hand, and on the other hand, the proper demonstrative knowledge of the causes or reason why the various facts hold. His treatment of these sciences reflects Aristotle’s understanding of an important topic in the history of natural science—the relationship between mathematics and scientific knowledge of the natural world. However, the rather fragmentary character of his discussion of these issues has made it difficult for historians to piece together his view. In this paper I provide an account of Aristotle’s understanding of these sciences: what the proper subjects of the mixed sciences are, what attributes they consider, and the nature of their demonstrations. This will require a brief examination of Aristotle’s account of mathematics and the structure of demonstrative science. With these in place we can understand Aristotle’s account of the structure of the mixed sciences and their relationship to mathematics and natural science. I will conclude with a brief comment on the merits and applicability of such an account to the “New Science” of the 17th century.



2007 September 20th at 5pm
Alan Turing and the Machine-Based View of the Mind

Christian Beenfeldt

In this paper foundational issues in Alan Turing's philosophy of mind are discussed. It is argued that his view must be understood as flowing from a certain proto-functionalist position. Central to this position is a doctrine I term Turing Mechanism. In brief, this doctrine states (1) that the human brain can be fully understood as a discrete state machine capable of being imitated by a universal machine, (2) that there exists an identity relationship between brain function and mentality, and (3) that digital computers will in principle be able to think by virtue of their ability to predict and mimic the discrete state functions of the human brain. It will be shown how an explicit identification of this doctrine increases our understanding of the famous, but often misunderstood, Turing Test. Finally, it is argued that Turing Mechanism and Putnam's subsequent machine functionalism are two substantially similar views, and that Turing, rather than Putnam, could well be regarded as the father of (machine) functionalism in recent intellectual history.



2007 April 27th at 5pm
On the Alleged Impossibility of Bayesian Coherentism

Jonah Schupbach

Luc Bovens and Stephan Hartmann present an “impossibility result” against Bayesian Coherentism. This result putatively shows that coherence is separable if and only if it cannot be given a probabilistic, complete and transitive ordering relation. Bovens and Hartmann intend their result to apply to any such ordering, and thus to any proposed order-inducing probabilistic measure of coherence. Underlying their notion of separability – and thus underlying their impossibility result – is Bovens and Hartmann’s introduction and support of a set of specific ceteris paribus conditions. In this paper, I argue that these ceteris paribus conditions are not clearly appropriate. Certain proposed coherence measures not only motivate different such conditions but they also call for the rejection of at least one of Bovens and Hartmann’s conditions. I show that there exist sets of ceteris paribus conditions which, at least prima facie, have the same intuitive advantages as Bovens and Hartmann’s conditions but which also allow one to sidestep the impossibility result altogether. This shifts the debate from the merits of the impossibility result itself to the underlying choice of ceteris paribus conditions.
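As one concrete example of the kind of order-inducing probabilistic measure at stake (chosen here only for illustration; it is one of the measures standardly discussed in this literature), Shogenji's ratio measure assigns to a set of propositions $A_1, \dots, A_n$ the coherence value

$$C(A_1, \dots, A_n) \;=\; \frac{P(A_1 \wedge \cdots \wedge A_n)}{P(A_1)\,P(A_2)\cdots P(A_n)},$$

with values greater than 1 indicating mutual support and values less than 1 indicating incoherence.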



2007 April 13th at 5pm
Searching for Evidence of Phenomenal Consciousness
Justin Sytsma

Recent scientific work aiming to give a neurobiological explanation of phenomenal consciousness has largely focused on finding neural correlates of consciousness (NCC). The hope is that by locating neural correlates of phenomenally conscious mental states, some light will be cast on how the brain is able to give rise to such states. In this paper I argue that NCC research is unable to produce evidence of such neural correlates. I do this by considering two alternative interpretations of NCC research—an eliminativist and a disjunctivist interpretation. I show that each of these interpretations is compatible with the scientific data and yet is more parsimonious than accounts involving the supposed phenomenon of phenomenal consciousness.



2007 March 16th at 5pm
Sketch of a Theory of Emergence

Sam Thomsen

Traditionally, emergence has been employed to militate against the unity of science. With this fundamental purpose in mind, I have been considering the possibility of an epistemological, complexity-based definition of emergence. My proposed definition, to put it roughly, is that an emergent domain is one that is both capable of universal computation and capable of efficiently simulating the knower (or more precisely, the "prediction machine") in question. The power of this formulation, I suggest, is that it gives the emergent domain the capability of "diagonalizing" and hence frustrating the prediction machine. I have been working on a series of theorems to show that this would give us the three things we most want from emergent domains: (1) the possibility of endless new special sciences concerning such domains, (2) the non-derivability of these sciences from lower-level sciences, and (3) a need to posit new, higher-level entities. At this point I have some candidate proofs of these theorems roughly worked out, and I'm currently working on a precise foundation for my theory of prediction machines. The first part of my talk will be geared toward the less mathematically inclined who are interested in emergence. The second part will concern some of the basic mathematical ideas I'm trying to develop, and some toy examples of domains and predictors.
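Very roughly, the diagonalization idea can be illustrated as follows (an illustrative reconstruction under the definition sketched above, not one of the theorems themselves): if a domain $D$ can efficiently simulate a given prediction machine $M$, then $D$ can realize a process $d$ that runs $M$'s prediction of $d$'s own next state and then does something else, so that

$$d(t+1) \;\neq\; M(d, t),$$

where $M(d, t)$ is $M$'s prediction of the state $d(t+1)$; hence no single such $M$ correctly predicts every process in $D$.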



2007 March 2nd at 4pm
Keith Bemer

In his 1989 article "Extragalactic Reality: The Case of Gravitational Lensing," Ian Hacking defends what he describes as a "modest astrophysical antirealism" based in part on arguments presented in his 1983 "Representing and Intervening." Since astronomical entities (stars, quasars, nebulae, etc.) presently and, quite likely, permanently elude our abilities to put them to use in an experimental setting, it seems that Hacking's brand of "experimental realism" cannot apply to astronomical phenomena/entities. By way of a detailed examination of the phenomenon of gravitational lensing, he concludes that astronomy (astrophysics, cosmology, etc.) is not a natural science on a par with, e.g., particle physics or biology. Despite the major advances in observational technology, Hacking claims that "the method of the science is the same as that of astronomy in Hellenistic times," and that, due to its inability to interfere with and manipulate its objects in order to understand them, the aim of astronomy is merely "to save the phenomena," an aim which he considers (pace van Fraassen) to be "an entirely subsidiary aspect of scientific activity."

In a 1993 article entitled “Astronomy and Antirealism,” Dudley Shapere takes Hacking to task, arguing that astronomy is a science on par with the others and that Hacking “has misconceived the meaning and role of experiment in science, and thus the scientific enterprise and its history generally” (bold claim!).

What is the status of astronomy compared with the other, more (literally) mundane, sciences? Does the fact that we cannot experiment on (with?) astronomical entities compromise the epistemic status of scientific claims in astronomy? Or, alternatively, is there a sense in which we can perform astronomical experiments? How important a distinction is there between observation and experiment? And what of Hacking's claim that science does not "aim" at "saving the phenomena"? Does our modern notion of science fundamentally include some sense of manipulation and/or interference through contrived experience, which astronomy lacks?



2007 February 16th at 4pm
Liz Irvine

Block's distinction between Access and Phenomenal consciousness (see 1995, 2001, 2005) states that there is at least a conceptual distinction between the what-it-is-likeness of an experience (P consciousness) and its information-processing counterpart (A consciousness). Snodgrass and Block claim that Snodgrass's Signal Detection Theory approach to perception provides a method of operationalising P consciousness, and an example of P without A consciousness in the exclusion-failure paradigm. It will be argued on several grounds that this claim by Snodgrass and Block is flawed and is yet another example of cross-talk and confusion over the meaning of terms. First, the inference from the fact that motivation enables subjects to use previously ignored perceptual information to the claim that these previously ignored perceptions were P but not A conscious is unjustified. An equally good explanation is that motivation enables the use of previously completely unconscious perceptions, thus making them both A and P conscious. Second, and more importantly, it will be argued that the characterisation of 'weakly conscious' percepts in Snodgrass's model implies that 'weakly conscious' percepts refer to low-grade percepts that exhibit qualities of both A and P consciousness, rather than being purely P conscious percepts. Signal detection theory (SDT) is essentially concerned with graded conscious experience; to interpret low-grade perception as P but not A conscious is to remove the functional properties of low-grade perception and is antithetical to the core of SDT. In conclusion, Snodgrass's SDT model as applied to exclusion-failure paradigms does not illustrate an example of P without A consciousness, and in fact supports instead the existence of graded levels of consciousness that are both A and P conscious 'all the way down'.
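For background on the graded character of SDT that this argument leans on (standard signal detection theory, not anything specific to Snodgrass's model): detection sensitivity is measured on a continuous scale, for instance

$$d' = z(H) - z(F),$$

where $H$ is the hit rate, $F$ the false-alarm rate, and $z$ the inverse of the standard normal cumulative distribution function, so that perceptual sensitivity comes in degrees rather than as an all-or-nothing matter.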



2007 February 9th at 5pm
Interpreting Microscopic Evidence in 19th c. Neuroanatomy
Catherine Stinson

Cellular theory came late to neuroscience. Cajal is largely credited with finally establishing that nerve cells are the main anatomical units in the brain through his taming of the unruly Golgi staining method. This result fell hard on the ears of Golgi, who had been, and remained, a steadfast supporter of the rival reticular theory. It remains mysterious why Golgi stuck firm to reticular theory long after every other major figure in the field had accepted the so-called "neuron doctrine," and the evidence for it was overwhelmingly taken to be convincing. Also mysterious is how Golgi could have repeatedly stated, in print and elsewhere, that he accepted the claim that was the neuron doctrine's defining feature, while still claiming, sometimes almost in the same breath, that the neuron doctrine was wrong, and furthermore how his detractors could have failed to notice that he in fact seemed to accept the claim they insisted upon. The only plausible explanation that has been offered is that Golgi was talking about function, while Cajal was talking about anatomy, so they did not so much disagree about the phenomena as about the terminology or the interpretation. What I plan to explore is another (compatible) explanation, which is that they were "seeing" differently, seeing something different, or interpreting what they saw differently, and I want to figure out why and how this happened. I will discuss my plan to recreate some of their original experiments, and some possible angles from which I might approach the project. What I'd like to hear from you are suggestions of what I should read as historical and philosophical background, and views on which approaches seem most likely to produce a passing comp paper.



2007 February 2nd at 5:30pm
What Structure Is Not
Bryan W. Roberts

What is the relationship between world and physical theory? Worrall's "structural realist" answer to this question has taken some new twists in recent years, perhaps most conspicuously with the development of a more refined notion of "structure". This talk begins with a brief primer on post-Worrall structural realism and its motivation, followed by a discussion of the group-theoretically inspired structures being proposed. I then present what may be a serious limitation for these structures within the structural realist account, which stems from a special kind of permutation on their elements. I end with a discussion of the conclusions that may be drawn from this result.

2006 November 17th at 5pm
Mirror Neurons and Extracellular Single Cell Recording
Feisal Ben Hassel



2006 October 20th at 5:30pm
Why was Chromatographic Analysis Initially Rejected?
Jonathan Livengood

In 1903, the Russian botanist M.S. Tswett discovered a chemical separation technique, which he called chromatographic adsorption analysis. Today, chromatography is a ubiquitous and instrumentally sophisticated chemical technique, but it was initially rejected by chemists and botanists. Not until 1931 did it begin to find a place in the research of prominent scientists. My talk provides an explanation of this rejection superior to those on offer in the (limited) existing literature. I will consider some of the chemical and botanical context of Tswett’s discovery and analyze a protracted debate he carried on with a prominent chemist of the period: Leon Marchlewski. At the end of the talk, I will sketch some of the numerous areas of research still open in this area.



2006 October 6th at 4pm
Peter Gildenhuys

I plan to discuss my research in evolutionary theory or, to use a better name, selection theory. My dissertation project is a front-to-back explication of the theory as a set of rules that must be followed by those who would deploy it. I discuss three sorts of rules: the entrance rule for selection theory states to what sorts of things the theory may be applied; the assessment rules of the theory state how to fix values for the variables of the theory; the exit rules are conditional or biconditional statements (the models of population genetics and Brandon's Principle of Natural Selection are examples) that allow the theorist to draw causal/explanatory inferences about the relative frequencies of rival population members. The first two sorts of rules are stated by way of definitions, while the third sort are explicit rules of inference that clearly take the form of sentences.

I hope my presentation will devolve into a conversation. I will sketch the interesting and novel things I have to say about each of the aspects of selection theory just outlined, and then I will take questions from the audience. Audience members can direct me to discuss in more detail the aspects of my project they find interesting. I am most comfortable talking about the entrance rule and assessment rules; I have the least to say about the exit rules of the theory.



2006 March 17th at 5pm
Feisal Ben Hassel

This WIP is about 'mirror neurons' (MN) and 'action understanding' in primates. One goal will be to clarify the notion of action understanding, rather than reject it, since the terminology of understanding is actually adopted by both primatologists and neurophysiologists. My main thesis is the following: the data on mirror neurons in monkeys come from single-cell recording; in order to better understand the functional role of mirror neurons, one should not only look at the output of these neurons (if that is even useful), but also at experiments that involve, or would involve, the whole organism rather than a single cell. I will rely on two kinds of studies: neuro-electrophysiology and primatology (animal psychology under certain conditions, namely when it converges with data provided by ethology).
