Identifying Field-of-Origin Bias

Posted March 3, 2015 by Emily Dolson in Review

The other day, I was reading a paper (paywall) by Rayfield et al. on using graph and network theory to quantify properties of ecological landscapes. It is a review summarizing:

  1. what properties of landscape networks we might want to measure,
  2. the structural levels within networks at which we might want to measure these properties (e.g. node, neighborhood, connected component),
  3. and metrics that can be used to measure a given property at a given structural level.

The authors found that there was dramatic variation in the number of metrics available in these different categories.

I was particularly struck by this comment, offering a potential explanation for the complete lack of component-level route redundancy metrics:

“This omission could be attributed to, first, the importation of measures from other disciplines that prioritized network efficiency over network redundancy…”

This makes a lot of sense to me. Route redundancy is the extent to which alternative paths exist between nodes. In computer science (and other fields that make heavy use of network theory), we care a lot about making guarantees and finding optimal solutions. Having alternative routes seems kind of messy in comparison (although it might be practically useful). But for biological organisms dispersing through a landscape, having a variety of potential paths is really important!
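To make "alternative paths" concrete: one standard way to quantify route redundancy between two patches is to count edge-disjoint paths, which by Menger's theorem equals the max flow when every edge has unit capacity. Here's a minimal sketch (my own illustration, not a metric from the Rayfield et al. paper) using Edmonds-Karp max flow on an undirected landscape graph; the node names are made up:

```python
from collections import defaultdict, deque

def edge_disjoint_paths(edges, s, t):
    """Count edge-disjoint paths between s and t in an undirected
    graph: max flow with unit capacities (Menger's theorem)."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1  # each undirected edge gets capacity 1 per direction
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual network
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no more disjoint routes
        # Push one unit of flow back along the path found
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# A square landscape: two disjoint routes from A to D...
square = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]
print(edge_disjoint_paths(square, "A", "D"))  # -> 2
# ...and adding a direct corridor gives a third.
print(edge_disjoint_paths(square + [("A", "D")], "A", "D"))  # -> 3
```

Removing any single edge from the square still leaves one route from A to D, which is exactly the kind of robustness a dispersing organism benefits from and an efficiency-minded metric would ignore.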

Since I do a lot of importing methods from other fields*, of course this made me wonder: is this a common problem? Many important innovations come from the movement of methods between fields. But the people in each of these fields often have different focuses, which may not always be aligned. Is there a tendency for methods imported from one field to another to retain some bias towards the goals of researchers in the original field?

How can we even figure out if this is happening? Identifying methodological gaps in a field is challenging, since humans have a tendency to get stuck in the “we’ve always done it this way” mindset. People know that a certain set of quantitative methods exist for solving a certain problem, so they are more likely to design experiments using those methods. “Unknown unknowns”, things that we don’t know that we don’t know, are notoriously hard to recognize.

Which means that identifying potential sources of unknown unknowns can be valuable. It can be a jumping-off point for important and novel lines of inquiry. So how about it? Can anyone think of more examples of this sort of thing happening? I haven't managed to yet, but I still suspect that it's worth keeping an eye out for. At the very least, this underscores the importance of close interdisciplinary collaborations and systematic reviews like Rayfield et al.'s.

*I’m also trying pretty hard right now to resist totally nerding out and applying these techniques to analyzing the connectivity between different academic fields.

Emily Dolson

I'm a doctoral student in the Ofria Lab at Michigan State University, the BEACON Center for Evolution in Action, and the departments of Computer Science and Ecology, Evolutionary Biology, & Behavior. My interests include studying eco-evolutionary dynamics via digital evolution and using evolutionary computation techniques to interpret time series data. I also have a cross-cutting interest in diversity in both biological and computational systems. In my spare time, I enjoy playing board games and the tin whistle.

