A collective blind spot in measuring natural systems?
A few months ago I got a Fitbit, which, for those of you who haven’t heard of it, is basically a step counter. I’d been thinking about getting one for a while to help motivate my exercise and keep my work-life balance somewhat on track. Perhaps symptomatic of not managing that balance, it took me a while to get around to deciding what to get and actually buying it. Luckily for me, in the meantime, my husband bought one as a present, and now I get to obsess about how many steps I take in a day.
The first thing I have learned is that, indeed, 10,000 steps is a lot. Although there is some debate about whether 10,000 is really the goal we should all be aiming for, I had sort of assumed that I was a reasonably active person* and therefore would be meeting a daily recommendation. It turns out that since I bike as part of my commute, 10,000 steps don’t actually come easily for me. Even on days when I add in a 30-minute jog, I need to do more walking than normal to reach 10,000. The results of this little experiment were sobering and made me re-evaluate my assumptions about my daily activity.
So what does this have to do with ecology? Well, we all have our pet theories and assumptions about how things work. Making up stories (or, as we more formally call them, hypotheses) is an incredibly fun and important part of ecology. However, we’ve all had our pet theories and seemingly robust hypotheses torn down by actual data; it is a foundation of what we ecologists do. So it isn’t surprising that the act of collecting data can change our perceptions of how things work. We’re pretty comfortable with needing actual data to determine patterns and experiments to tease apart the effects.
But there is something else I’ve noticed about having the Fitbit on: the very fact that I am counting my steps is subtly changing my behaviour. I notice myself thinking more about whether to walk up another set of stairs** or deciding to walk rather than bike to our corner store. Of course, this is in part why I wanted a step counter in the first place, but it has got me thinking about a related issue in ecology.
I think it is safe to say we all agree we need to collect data to test our theories, but we discuss far less often how the act of collecting those data could perturb the system we’re interested in. What if measuring something alters its behaviour? What does that mean for our science? Of course, experimental designs often account for this by controlling for as much as possible apart from the treatment itself; it is basically why we compare treatments to controls rather than just altering something to see what happens. But the fact remains that our measurements can change the systems we’re studying, and we can’t always control for that, especially if taking the measurement itself alters future outcomes. Studies where we measure the same things multiple times seem particularly vulnerable, and they are also fairly common in ecology (especially whenever you’re interested in fitness).
I’m of course not the first to think of these issues; if I knew more about the history of science, I’m sure I could share lots about the development of the scientific method, but alas I do not***. It is at least my impression that animal studies are more explicit about these kinds of effects, but as we learn more about plant behaviour (yes, plants definitely behave), there is evidence that we need to think about plant responses to measurements too. For example, studies have looked at the effects of touch on plants (check out this sobering study: pdf), yet I rarely see any mention in methods sections of how experimenters’ influences were controlled for****. That said, I’m sure the title of this post is a little strong. I’m guessing that people often think about these issues but just rarely talk about them directly, which could make it seem that studies have not accounted for things that they actually have. However, that silence can make it tougher for researchers starting out in a field or a new system.
I’m not sure there is much to be done about this problem. If my taking flower measurements changes the way a plant behaves, I have to take comfort in the fact that I measure flowers on all the plants in my dataset: the effects I introduce should be uniform and are likely to be small (bees are handling those flowers all the time, for example). It is more worrying when I think about things like a hand pollination treatment, where handling time for those plants is greater than for the ones I leave to be naturally pollinated. I think it is important to stay aware of potential problems with our measurements and address them as best we can. There are real dangers in ignoring how our interference affects the patterns we see, but we shouldn’t let the possibility cripple our science either. Clearly the benefits of taking measurements far outweigh any biases they introduce, and we design experiments for a reason. Imagine where ecology would be if no one took measurements. Good studies tackle questions from multiple angles so that the overall picture isn’t dependent on one test, and collectively we are building a picture of how the world works. So in general I’m not concerned about our field, but it is something to think about the next time you take a measurement, whether it is invasive or not.
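To make that uniform-versus-differential handling point concrete, here is a minimal simulation sketch (in Python, with entirely made-up numbers, not anything from a real study). When the handling effect hits both groups equally it cancels out of the treatment-minus-control comparison; when only the hand-pollinated plants get the extra handling, it gets folded into the estimated treatment effect.

```python
# Minimal sketch: a hypothetical handling effect applied equally to treatment and
# control cancels out of the comparison, while extra handling of only the treated
# plants gets confounded with the treatment estimate. All numbers are made up.
import random

random.seed(1)

N = 1000            # plants per group
TRUE_EFFECT = 2.0   # hypothetical effect of hand pollination on seed set
HANDLING = 0.5      # hypothetical effect of handling a plant

def seed_set(treated, handled):
    """Simulated seed count: baseline + treatment effect + handling effect + noise."""
    return (10.0
            + (TRUE_EFFECT if treated else 0.0)
            + (HANDLING if handled else 0.0)
            + random.gauss(0, 1))

def mean(xs):
    return sum(xs) / len(xs)

# Case 1: every plant is handled the same way (uniform measurement effect).
treat_uniform = [seed_set(True, True) for _ in range(N)]
ctrl_uniform = [seed_set(False, True) for _ in range(N)]

# Case 2: only the hand-pollinated plants get the extra handling.
treat_diff = [seed_set(True, True) for _ in range(N)]
ctrl_diff = [seed_set(False, False) for _ in range(N)]

print("true effect:", TRUE_EFFECT)
print("estimate with uniform handling:      %.2f" % (mean(treat_uniform) - mean(ctrl_uniform)))
print("estimate with differential handling: %.2f" % (mean(treat_diff) - mean(ctrl_diff)))
# The second estimate is inflated by roughly the handling effect (about 0.5 here).
```

The practical upshot of the sketch is the same as the design advice above: if you can’t avoid handling, try to handle the controls the same way you handle the treated plants.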
*I mean I even have a standing desk…but clearly one needs to take it to the next level and go for a walking one :)
**Even numbers somehow seem nicer, and the Fitbit counts the flights of stairs I’ve gone up. There is a reason I don’t look at my digital calliper screen when measuring floral traits: that way I know that when I get a measurement ending in .00, it isn’t my doing.
***Unfortunately science history is too much of a sidetrack for me to pursue right now but maybe we have some knowledgeable commenters?
****Full disclosure, I rarely discussed potential measurement effects either. It seems we tend to think about these kinds of things only when the results are unexpected. I’m trying to be more conscious in my experimental designs and to control for what I can, but it could take years of study to know exactly how measurements affect a study system. In the end, I hope I can reduce the noise around any differences and convince myself that the data are telling me something real about my system.