I haven’t posted in a while… I could have known that I’d have trouble remembering to post over the long term.  A reminder that science communication can be hard — especially when you have no readership!

However, I am announcing a post-doc fellowship (almost certainly administered by NRC) with Dr. Liz Brooks at the Northeast Fisheries Science Center and Dr. Rick Methot, chief scientist for stock assessment at NMFS.  The candidate would conduct research on environmental effects on recruitment (production of juveniles) for marine fishes — recent research shows that recruitment is likely to be autocorrelated, and also potentially synchronous.  In these cases, population forecasts must account for autocorrelation when predicting the time required for rebuilding a stock.  The candidate could also collaborate with us on other projects, including estimating the link between recruitment and temperature using globally available data, and improving methods for including environmental data.  Finally, the candidate would travel with Liz, me, and other assessment scientists to an ICES meeting in Copenhagen in June on environmental effects on recruitment.

Please see the attached for more details, and write to me and Liz Brooks (Liz.Brooks _at_ noaa.gov) with preliminary inquiries.

Postdoctoral Research Fellowship on forecasting recruitment


As you can tell from the title, this is a post about how selectivity is interesting.  It’s essentially a response to an objection I often hear from ecologists, who sometimes comment that stock assessment is not engaging.  It is admittedly preliminary, besides being one-sided and poorly researched (but you knew that already, didn’t you?)

The background is as follows.  Stock assessment models approximate the demographics and dynamics of fish (and shellfish) populations.  They often use age-structured demographics, which are well-accepted in ecology too (ever heard of the Euler-Lotka equation?).  However, age structure means that the modeler must, explicitly or implicitly, declare whether fishing occurs with equal or unequal intensity across ages.  How fishing intensity varies over ages (and perhaps sizes) is often called selectivity.

Now, selectivity is complicated because it includes many different processes.  Fishing gear (and targeting behaviors) will select for certain sizes/ages of fish, and this can be measured experimentally and the results included in a model.  However, the shape of selectivity also depends upon how fishing is allocated spatially (as illustrated by Dave Sampson, among others, here).  Essentially, if fishing occurs in two areas, aggregate selectivity is the weighted average of selectivity in each area (weighted by the proportion of the population in each area).  If fishing intensity is greater in one area than the other, aggregate selectivity for old fish will more closely resemble selectivity in the lightly fished area (because the surviving old fish are concentrated there).  Hence, the spatial allocation of fishing can produce some wacky shapes for selectivity that resemble selectivity in neither area individually.
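This weighted-average mechanism is easy to sketch numerically.  Below is a minimal, hypothetical example (all parameter values are invented for illustration): two areas share the same logistic gear selectivity, but area A is fished much harder than area B, and the population-level selectivity that emerges is dome-shaped even though gear selectivity in each area is not.

```python
import numpy as np

ages = np.arange(1, 16)

# Hypothetical logistic gear selectivity, identical in both areas
sel = 1.0 / (1.0 + np.exp(-(ages - 5.0)))

# Assumed area-specific fishing mortality (area B lightly fished) and natural mortality
F = {"A": 0.6, "B": 0.1}
M = 0.2

# Numbers-at-age in each area under simple exponential survival
N = {a: np.exp(-np.cumsum(np.r_[0.0, (M + F[a] * sel)[:-1]])) for a in "AB"}

# Population-level fishing mortality at age: total catch rate / total numbers,
# i.e., the weighted average of area-specific F-at-age described in the text
F_at_age = sum(F[a] * sel * N[a] for a in "AB") / (N["A"] + N["B"])

# Rescale to a maximum of 1 to express it as aggregate selectivity
agg_sel = F_at_age / F_at_age.max()

# Old survivors are concentrated in lightly fished area B, so aggregate
# selectivity declines at old ages even though gear selectivity does not
print(np.round(agg_sel, 2))
```

Here the aggregate curve peaks at an intermediate age and then declines, despite logistic (asymptotic) gear selectivity everywhere — exactly the “wacky shape” that spatial allocation of effort can create.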

For this latter reason, selectivity is generally estimated within a stock assessment model, rather than being informed by prior information (like gear experiments).  It may therefore seem to be a “nuisance parameter” — some effect that must be estimated, but is of little direct interest to any particular researcher.  And it’s notoriously hard to argue from first principles about what selectivity should look like for a given species and fishery.

Despite this, I think it’s wrong to suppose that selectivity is an uninteresting “nuisance” parameter.  Instead, I propose that the same issues will arise for other age-structured effects, e.g., size-based predation.  We already know that fishes are gape-limited (see here for a recent example), and will generally prey on individuals within a certain size range, which shifts as the predator grows.  So the intensity of predation follows a function that depends upon the age of the prey.  Given different predator densities in different areas, aggregate predation may not resemble the age-based predation function in either area individually.  In this case, experimental measures of how predation varies among prey ages will not appropriately represent aggregate predation.

I don’t claim to have a good and general answer for how to account for this effect when modeling age-specific predation.  However, I think that stock assessment approaches to selectivity will provide some ideas — and hope to post about the topic more after I’ve had time to ruminate…

Hope everyone’s had a good holiday!  If your family is like mine, you’ve had a few spare moments to ponder problems in ecology over the past few days.

And, to be honest, I’m still stuck on my last post: the relationship between theory and empirical work in ecology.  I remember a post on Dynamic Ecology responding to comments from Ben Bolker (here, including the comments).  That post pointed out that physicists have a long history of positive feedback between theorists and empiricists: theorists guide and shape experimental designs, and empiricists find phenomena that aren’t explained by current theory, thus guiding future theoretical development.

So the question is: how do we get this working in ecology?  Rather than making generalizations about which departments do and do not succeed at this (I’m not even qualified to say!), I’d rather muse on examples that seem to work well:

1.  It appears to me that geneticists do fairly well in general.  Of course, they benefit from a strong foundation of theory proceeding from fitness maximization.  Gene surfing is perhaps a recent example, where genetic theory guided empirical research that has since confirmed the phenomenon in turtles.

2.  I’m also always impressed by Mark Mangel’s longtime collaborations with empiricists.  His early work with wasps on state-dependent life history theory comes to mind (discussed here), but he also appears to encourage graduate students and post-docs to seek empirical evaluations of predictions from stochastic dynamic programming (e.g., in salmon).

From these two examples (!) it seems that theorists can encourage collaborators to test their predictions, or that a community can emerge to support or fail to support theorized relationships.  I personally hope to find theoreticians with whom to collaborate at my future institution(s).  Are there other models I’m missing?

One of the things I hoped to do with my blog this time around was review some recent literature that marine ecologists might miss, or that might not receive the attention it deserves.  A little tip-o’-the-hat blog style.  Here goes…

One of my favorite recent papers is Scheiner’s (2013) Ecology Letters commentary, “The ecological literature, an idea-free distribution” (ResearchGate PDF here).  In the paper, Scheiner surveys Ecology, Evolution, and The American Naturalist to see what proportion of papers in these leading journals engaged with ecological theory over 1920–2010.  He shows a big jump in references to theory in the 1960s (probably due to MacArthur and his collaborators), but a recent plateau or even decrease, with Ecology currently at around 50%.  He bemoans this state of the field, arguing that journal and grant reviewers should privilege papers that respond to theory over those that don’t.  I’m also reminded of Roughgarden (in Perspectives in Ecological Theory) quoting Darwin: “all observation must be for or against some view if it is to be of any service.”

So what does this mean for our field?  I see three main benefits of including theory in my research agenda:

  1. Parsimony — Ecological theory can substitute directly for ‘data’, e.g., by providing support for particular parametric functions for trophic interactions. Without theory, we have less information for predictive modeling, and the parameter and model space that we must explore when modeling is expanded.
  2. Passing on information — Tested ecological theory can also be a compact way of transmitting information from study to study.  For example, theory is easier for future authors to incorporate than a complicated multivariate prior on a nonparametric model (which we showed could be used instead of a theory-motivated parametric model here).
  3. Guiding future research — Perhaps most important, theory can help focus attention on what’s worth studying.  We have about a billion possible questions in fisheries — for example, the number of possible effects of climate change on populations is endless.  Theory can help guide thinking about which matter most, which is especially valuable given that we have limited information with which to answer all of these questions.

I’m sure I’ll return to the topic of theory in fisheries science later.  In the meantime, I wonder how the leading non-review marine sciences journal, Canadian Journal of Fisheries and Aquatic Sciences, would fare on this test (anyone interested?), but I doubt it would do better than Ecology.

After a long hiatus, I am returning to a blog I started during my Master’s program.  In the time since then, I (like most marine scientists) have spent a lot of time thinking about environmental effects on marine populations.  In summary, we know that marine populations vary on annual, decadal, and 100+ year cycles — tonnara fisheries for Mediterranean tuna have shown fluctuations over more than 1,000 years!  However, we know from Ram Myers that correlations between the environment and fish demographics often break down.  What’s a marine scientist to do?

Well, there’s an emerging consensus in other branches of ecology about how to estimate environmental effects.  In an illuminating review, Frederiksen et al. (here) argue that it’s important to use random-effects models when testing environmental effects:

there are potential pitfalls when assessing the statistical significance and biological importance of environmental covariates (detailed review in Grosbois et al. 2008). Briefly, when between-year variation in a given parameter is pronounced (which is often the case in even moderately large data sets), both standard likelihood ratio tests and AIC-based model selection (Burnham & Anderson 2002) are biased. Two approaches exist to deal with this problem: analysis of deviance (Skalski, Hoffmann & Smith 1993), which provides an ANOVA-like partitioning of the total between-year variation into a component explained by the covariate and residual variation, and mixed models with random year effects (Loison et al. 2002). Analysis of deviance has recently been shown to give a robust approximation to the more sophisticated approaches in the mixed model framework (Lebreton, Choquet & Gimenez 2012). Proper statistical assessment of the importance of environmental covariates is critical for achieving robust inference.

This mirrors my own thoughts: when estimating an environmental effect, you are essentially hypothesizing that the environment causes variability in some demographic rate (e.g., larval survival, in the case of fish recruitment).  Thus, you must begin by including stochastic variability, which can be done easily and generically using random effects (i.e., a state-space model).  You can then ask whether this stochastic variability is explained by the environmental covariate.  By contrast, including an environmental effect in a deterministic model (i.e., without already including random effects) is strange: it asserts that a demographic rate varies with the environment, but that we know the shape and nature of this effect exactly.

A second branch of research supports this view.  In short, researchers have shown that model mis-specification causes problems when interpreting model results.  For example, a deterministic population model (i.e., one without random effects for some demographic process) will generally have serially autocorrelated residuals (e.g., runs of residuals consistently above or below model predictions).  In this case, estimated standard errors will be too small (see discussions here and here), so tests of environmental effects will reach statistical significance too often.
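A toy simulation illustrates the point.  In the sketch below (all settings invented for illustration), a pure-noise AR(1) “demographic” series is regressed on an autocorrelated “environmental” covariate that truly has no effect, using naive OLS standard errors that assume independent residuals.  The nominal 5% test rejects far more often than 5%.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, rho, rng):
    """Mean-zero AR(1) series with unit innovation variance."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

n, rho, reps = 50, 0.8, 2000
rejections = 0
for _ in range(reps):
    env = ar1(n, rho, rng)   # autocorrelated environmental covariate
    y = ar1(n, rho, rng)     # demographic index: autocorrelated noise, NO true effect
    xc, yc = env - env.mean(), y - y.mean()
    b = (xc @ yc) / (xc @ xc)                          # OLS slope estimate
    resid = yc - b * xc
    se = np.sqrt(resid @ resid / (n - 2) / (xc @ xc))  # naive SE assuming independence
    if abs(b / se) > 1.96:                             # nominal 5% two-sided test
        rejections += 1

rate = rejections / reps
print(f"false-positive rate at nominal 5%: {rate:.2f}")
```

Because both the covariate and the residuals are positively autocorrelated, the independence-assuming standard error badly understates the true uncertainty, so spurious environment–demography correlations look “significant” far too often — exactly the concern with deterministic models above.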

Anyway, this discussion came up recently during questions after a quantitative talk, and I welcome thoughts and discussion.