“Physical Theories are Prescriptions, Not Descriptions” (1/26/18)

January 24, 2018

Shahin Kaveh

Abstract: I plan to use this WIP to figure out the content and structure of the first chapter of my dissertation and to solicit objections to my view. Traditionally, scientific theories are taken to be collections of propositions or models that describe the ontology of nature. This is the descriptive-ontological view, which takes the theory to be universal and complete. I will show how long-standing problems in philosophy of science, such as Hempel’s Theoretician’s Dilemma and Sneed’s Problem of Theoretical Terms, stem from taking theories as universal and complete, as the descriptive-ontological view does. I will argue that there are no universal and complete theories, and propose an alternative: the prescriptive-dynamical view, on which a theory consists of state assignments and transition rules between those states. My view takes the universal part of theories to be prescriptions, or recipes, for assigning dynamical states. The transitions between the states are then completed by certain “local empirical mediating principles” that are context-dependent and not universal. I would like to discuss the extent to which my view solves these problems and what other problems it may cause.

“Malebranche, Monsters, and the Maternal Imagination” (1/24/18)

January 24, 2018

Katie Creel

Abstract: As a philosopher and theologian whose work is responsive to the
scientific developments of the time, Nicolas Malebranche’s use of
monsters as evidence in “De la recherche de la vérité” (The Search
After Truth) is not surprising. “Monsters,” or beings with physical
deformities, were of great interest to natural philosophers in the
17th century. For microscopists studying the mechanisms of generation,
monsters provided “natural experiments,” allowing them to examine the
limits of biological possibility. And for philosophers, monsters
provided evidence for or against the extant accounts of generation.

For Malebranche, monsters present a two part problem. First, there is
a tension in between his account of preformation by an omnipotent God
and the very existence of monsters. If God individually creates all
living beings, fully formed with infinite parts, at the beginning of
time, how does Malebranche explain why living being are not preformed
in such a way that they could never develop into monsters? Or why
does Malebranche not follow Arnauld in saying that God intended the
exact final shapes of all beings, and thus there are no true monsters?
One way to solve this puzzle would be to claim that God’s omnipotence
as applying to particular events would be in conflict with
Malebranche’s views the possibility of understanding God’s actions,
but this does not seem to accord with Malebranche’s thoughts on
method.

Second, for human beings specifically, Malebranche seems to argue that
monsters are caused by the maternal imagination’s influence on
development. This seems to conflict with his account of the creation
of animal or plant monsters. However, both puzzles can be resolved
with a close reading of Malebranche’s discussion of human development
before the Fall: human sin is what leads to human monsters.

“Reconciling Phylogenetic Systems and HGT: Perils and Prospects” (12/8/17)

January 24, 2018

Phillip Honenberger

Abstract: Phylogenetic systematics, in the sense articulated by Hennig (1966) and since defended and developed by many others (e.g. Wiley 1980, Wiley and Lieberman 2011), proposes to classify organisms on the basis of nested separations of lineages in the course of evolutionary history. Yet the increasingly appreciated frequency of horizontal gene transfer – that is, cases wherein an organism receives genetic material from the environment or other organisms by non-reproductive processes, such as viral insertion – plausibly challenges key features of the phylogenetic perspective, as Doolittle (1999, 2010) and others have argued. These challenges raise the question of whether and to what extent phylogenetic systematics is an appropriate approach to classifying organisms, species, and lineages wherein HGT has played a role. My aim in this talk is twofold. First, I seek to articulate the tension between HGT and phylogenetic systematics as clearly as possible. Second, I describe and evaluate the most promising strategies for resolution, based on existing literature. These strategies include denying the applicability of phylogenetic systematics to introgressed lineages; founding phylogenetic systematics on “trees of cells” rather than “trees of characters” or “trees of genes”; and incorporating “speciation by hybridization” into phylogenetic models. These strategies of resolution are not entirely unfeasible. However, each has significantly troubling implications for long-standing ambitions of phylogenetic systematics, such as to provide a universal and non-ambiguous hierarchical classification of organic life (Hennig 1966). If time permits, I’ll offer a few remarks about what I take this tension and the limited prospects for its resolution to mean for biological classification and the reconstruction of evolutionary history more generally.

“What is a Process” (11/3/17)

January 24, 2018

William Penn

Abstract:  In order to understand how process realism can resolve the inconsistency between the liquid drop and shell models, one must first understand how process realism operates.  As an ontic primitive for the process realist, “process” cannot be given an explicit definition, just as it is impossible to explicitly define words like “entity” and “structure” within the context of their respective realisms.  The meaning of the concept of process, and the relations between this concept and concepts of structure, state, and event, must therefore be given only heuristically.  Despite this challenge, a suitable understanding of processes can be built from a combination of paradigm cases and applications of the concept to simple examples. In this [talk], I build a general heuristic for understanding the concept of process out of a discussion of motions and transitions. I then apply this heuristic to the Bohr model of the atom in order to clarify the roles that the concept of process plays in explanations and model building.

“Transparency in Computational Processes” (10/20/17)

January 24, 2018

Katie Creel

Abstract: Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts.  Some philosophers have suggested treating opaque systems instrumentally, but the computer scientists developing new strategies for increasing transparency are right not to find this satisfying.  Instead, I propose an analysis of transparency as having three forms: transparency of the algorithm, of the way that algorithm is written in code, and of the way that code is run on particular hardware and data. This allows us to target the kind of transparency most useful for a given task.

“Causes, cycles, equilibria” (9/15/17)

January 24, 2018

Tomek Wysocki

Abstract: Superman’s laser rays meet Faora’s halfway. If he closes his eyes, the rays from her eyes will hit him, and Superman will be no more. The same holds for her. Thus, Superman’s beaming causes Faora’s beaming, but her beaming causes his as well, and the spectacle goes on forever.

The fight of these two constitutes a counterexample to Hitchcock’s account of causation (2001), and my aim is to show how. First, I describe his account, on which causal claims are determined from acyclic graphs describing the situation under consideration. Then, I show how the case above poses problems for the account. Finally, I present a macroeconomic model that is more naturally analyzed with a cyclic graph and propose how to handle causal claims in situations described by such graphs. Although I focus on Hitchcock’s account, my argument targets any structural-equations framework that doesn’t allow cycles.
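
The role acyclicity plays here can be made concrete with a minimal Python sketch (my illustration, not the speaker’s, with hypothetical variable names): in the Superman and Faora case, each structural equation depends on the other, so the dependency graph contains a cycle and no evaluation order of the kind acyclic frameworks rely on exists.

```python
# Structural equations represented only by their dependency graph:
# each variable lists the parent variables its equation reads from.
# Superman's beaming depends on Faora's, and hers on his: a two-node cycle.
parents = {
    "superman_beams": ["faora_beams"],
    "faora_beams": ["superman_beams"],
}

def topological_order(parents):
    """Return an evaluation order if the graph is acyclic, else None.

    Acyclic structural-equation accounts need such an order to compute
    the downstream effects of an intervention; a cycle blocks it.
    """
    order, seen = [], set()
    remaining = dict(parents)
    while remaining:
        # A variable is ready once all of its parents have been evaluated.
        ready = [v for v, ps in remaining.items() if all(p in seen for p in ps)]
        if not ready:
            return None  # every remaining variable waits on another: a cycle
        for v in ready:
            seen.add(v)
            order.append(v)
            del remaining[v]
    return order

print(topological_order(parents))  # → None: the mutual-causation graph is cyclic
```

An acyclic graph such as `{"rain": [], "wet": ["rain"]}` does yield an order, which is why standard interventionist semantics goes through there but not in the cyclic case.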

“A Fork in the Middle Path: Explaining the Success of Mature Physical Theories” (5/3/17)

May 24, 2017

Shahin Kaveh

Abstract: The problem of explaining the success of science has been debated for a long time. This debate has led not only to different answers to this question, but also to different interpretations of the question itself, each of which calls for a different type of answer. Recently, Stanford has summarized this debate and identified a “Middle Path” when it comes to explaining success, characterized by the commitment that the success of the successful theory must be explained through a systematic relationship between the theory’s constructs and the inner workings of the system, whatever this relationship may be (the Maddy-Wilson principle). However, a survey of the views on offer shows that only two types of Middle Path explanations have been proposed: ones that draw on Truthlikeness, and ones that give up on fleshing out this systematic relationship in any general terms at all. I aim to offer a third way within the Middle Path, in which one can satisfy the Maddy-Wilson principle without drawing on Truthlikeness.

“A Pluralist View of Biological Individuality” (4/28/17)

May 24, 2017

Haixin Dang

Abstract: In this paper, I focus on one family of views: the so-called monist accounts of biological individuality, which have been most prominently defended by Ellen Clarke (2013) and Peter Godfrey-Smith (2009).  These accounts of biological individuality are monist because they assume that there ought to be one unified concept of the biological individual: there is only one correct way to pick out the fundamental units of the living.  I argue that the monist view is problematic and instead defend a pluralist view of biological individuality.  I argue, first of all, that within the purportedly monist accounts, the kinds of mechanisms/criteria defended by Clarke and Godfrey-Smith in fact pick out more than the narrow class of entities they believe they have identified.  I then defend a pluralist view by showing that, once we think beyond the scope of population genetics, we will find that there exist many different kinds of biological individuals.

“Technique-Driven Research: Clarifying the nature of exploratory experimentation” (4/6/17)

May 24, 2017

David Colaco

Abstract: Recently, historians and philosophers of science have sought to account for experimental research that is not driven by the evaluation of theory, but instead is motivated by the desire to explore.  Along these lines, I discuss how techniques can drive research and allow scientists to explore systems.  I describe the preparation technique CLARITY, which drives cutting-edge microscopy research in neuroscience.  Though technique-driven research is exploratory and experimental, it fails to satisfy the conditions of several popular accounts of exploratory experimentation.  In light of this discrepancy, I critically assess these accounts, and reappraise what conditions are necessary for exploratory research.

“Incompressible Patterns: CRISPR vs Dennett” (3/30/17)

May 24, 2017

Katie Creel

Abstract: Dennett’s classic paper defines “Real Patterns” as present in data if
“there is a description of the data that is more efficient than the
bit map, whether or not anyone can concoct it.” (Dennett 1991, 34)
However, compressibility is not the right criterion for pattern
realism. A better pattern ontology is one based on informational
relationships between the pattern and the perceiver of the pattern,
whether human, biological, or machine.

A simple compression algorithm such as Huffman coding should be
perfect for Dennett’s purposes. It compresses text into an efficient
lossless binary tree in which letters are assigned unique codes based
on their frequency. Instead, Huffman coding illustrates the problem
with compression as a metric: lack of generalizability. If the
algorithm were used only once, it could compress a novel into one
character: “W” for all of War and Peace. But this would yield no
informational savings. What makes compression work is that the “cost”
of the compression algorithm is amortized over many uses.
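
The amortization point can be made concrete with a minimal Python sketch of Huffman coding (my illustration, not part of the abstract): frequent letters get short codes, but the code table itself must accompany any single compressed text, so the savings only materialize when the table is reused.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code table mapping each character to a bit string.

    More frequent characters receive shorter codes, and no code is a
    prefix of another, so the encoding is losslessly decodable.
    """
    freq = Counter(text)
    if len(freq) == 1:
        # Degenerate case: a single distinct symbol gets a one-bit code.
        return {next(iter(freq)): "0"}
    # Heap entries: (subtree frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Merge the two rarest subtrees, prepending one bit to each code.
        merged = {ch: "0" + code for ch, code in left.items()}
        merged.update({ch: "1" + code for ch, code in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

text = "abracadabra"
table = huffman_code(text)
compressed = sum(len(table[ch]) for ch in text)  # bits after compression
original = 8 * len(text)                         # bits at one byte per char
```

For “abracadabra” the compressed length falls far below the 88 bits of a plain 8-bit encoding, but only because sender and receiver share the code table; ship the table alongside one short message and the savings vanish, which is the amortization problem in miniature.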

Further, any discrete chunk of randomness can be recognized as a
pattern if it has the right informational relationship with its
recognizer. Such recognition relationships between random sequences
and detectors occur in genetic material. New tools for genetic
manipulation such as CRISPR use a recognition relationship with
sequences of base pairs to snip and replace precise segments of DNA.
Using case studies, I suggest that we should think of patterns as
representing the informational relationship between pattern and
recognizer.