Archive for the ‘Scientific methodology’ Category


Randomization and its constraints: A critical look at current research practices in social psychology

August 29, 2014

Taku Iwatsuki

Abstract: In this talk, I investigate the importance of randomization in the context of social psychological research. I compare the costs and the benefits of randomization, and argue that the current research practices in social psychology seem to put too much emphasis on the use of randomization. The talk will be structured as follows: First, I briefly explain what randomization is and what role it typically plays in social psychological research. Next, I explore its methodological benefits in the context of causal inference through critical examination of arguments for randomization. Then, I investigate what the costs of randomization are, focusing on the constraints it imposes on other aspects of research design and eventually on psychological theorizing based on such research. Finally, comparing these costs and benefits, I conclude that the use of more diverse research designs with less emphasis on the use of randomization seems to be necessary for developing better psychological theories.



DAY-O-WIPs 3.0

June 16, 2014

“Scales of Motion, Atmospheric Dynamics and Clouds” Marina Baldissera Pacchetti

“William Henry Bragg and the Nature of X-Rays” Haixin Dang


Realism, Instrumentalism, and Uses of Models in Science

January 13, 2014

Yoichi Ishida

Abstract: This paper argues in support of Howard Stein’s idea that in successful scientific research, a scientist uses a model according to the methodological principles of realism and instrumentalism despite the tension that they create among the scientist’s uses of the model over time. After giving precise formulations of the realist and instrumentalist methodological principles, I argue for my thesis through a detailed analysis of successful scientific research done by Seymour Benzer in the 1950s and 60s. I then argue that epistemic realism or epistemic instrumentalism—forms of realism and instrumentalism familiar in the philosophical literature—by itself prohibits a scientist from adopting both the realist and instrumentalist methodological principles. Stein’s conjecture thus poses new challenges to realists and instrumentalists, and I briefly suggest possible avenues of response that realists and instrumentalists may take.

The Structure of the Scientific Realism Debate

October 18, 2013

Aaron Novick

In this paper/talk, I use the structure of inference to the best explanation (IBE, Lipton 2004) to understand the structure of the epistemic scientific realism debate in what I believe is a novel fashion. The claim that inference to the best explanation is reliable splits into two sub-claims: one about the reliability of the inference form, and one about the non-formal constraints that must be met for an IBE to be successful (i.e., whether these constraints are met in scientific practice). Anti-realists may be variously understood as attacking one or the other of these claims, and on this basis we can see the realist task as having two parts, corresponding to the defense of each claim. Using this structure, I explore the prospects for constructing a realist defense of the second claim, with pessimistic results. This motivates an agnosticism about epistemic scientific realism that may better allow us to appreciate the methodological attitudes of working scientists (and others).


Surface Tensions: Challenges to Philosophy of Science from Nanoscience

January 31, 2013

Julia Bursten

Abstract: A traditional view of the structure of scientific theories, on which philosophers of science have based their accounts of explanation, modeling, and inter-theory relations, holds that scientific theories are composed of universal natural laws coupled with initial and boundary conditions. In this picture, universal laws play the most significant role in scientific reasoning. Initial and boundary conditions are rarely differentiated and their role in reasoning is largely overlooked. In this talk, I use the problem of modeling surfaces in nanoscience to show why this dismissal is deeply problematic both for philosophers of science and for scientists themselves.

In macroscopic-scale modeling, surfaces are treated as boundaries in the mathematical sense: that is, as infinitesimally thin borders of a system that confine its interior. As such, surface structure and behavior are usually modeled in an idealized manner that ignores most of the physics and chemistry occurring there. At the nanoscale, however, the structure and behavior of these surfaces constrain the structure and behavior of the interior in more complex and significant ways. Three important conclusions emerge:

1. The very concept of a surface changes as a function of scale, and other central concepts in nanoscience also behave in this scale-dependent manner.
2. The traditional view of theory described above does not adequately capture the nature of nanomaterials modeling, which requires attention to multiple models constructed at different characteristic scales. These component models do not comport well with a single set of universal laws, as the standard view suggests. Instead, boundary behaviors become crucial and models are designed to capture these behaviors.
3. The projects of nanomaterials modeling and synthesis dictate that divisions between boundaries and interiors must be continually adjusted. Overlooking this problem has led to failures of experimental design and interpretation of data.