A model-experiment loop to optimise data requirements for ecotoxicological risk assessment with mesocosms
Chemical effects on ecological interactions within a model-experiment loop
Recommendation: posted 20 October 2022, validated 30 November 2022
In Ecotoxicology, the toxicity of chemicals is usually quantified for individuals under laboratory conditions, while in reality individuals interact with other individuals in populations and communities, and are exposed to conditions that vary in space and time. Micro- and mesocosm experiments are therefore used to increase the ecological realism of toxicological risk assessments. Such experiments are, however, labour-intensive and costly, and cannot, for logistical reasons, cover all factors of interest (Henry et al. 2017). Moreover, as such experiments often include animals, the number of experiments performed has to be minimized to reduce animal testing as much as possible.
Modelling has therefore been suggested to complement such experiments (Beaudouin et al. 2012). Still, the population models of the species involved need to be parameterized and can thus require a large amount of data. However, how much data are actually needed is usually unclear. Lamonica et al. (2022) therefore focus on the challenge of “taking the most of experimental data and reducing the amount of experiments to perform”.
Their ultimate goal is to reduce the number of experiments needed to parameterize their model of a 3-species mesocosm, comprising algae, duckweed, and water fleas, sufficiently well. For this, experiments with one, two or three species, with and without cadmium at different concentrations, are performed and used to parameterize the model via Bayesian inference using Markov chain Monte Carlo (MCMC). Then, different data sets omitting certain experiments are used for the same parameterization procedure to see which data sets, and hence experiments, might possibly be omitted when parameterizing a model that would be precise enough to predict the effects of a toxicant.
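The logic of this procedure can be sketched in a toy example. The code below is illustrative only and is not the authors' model: it fits a single-species logistic growth curve (a stand-in for one mesocosm population) to simulated data with a minimal Metropolis sampler, once with the full data set and once with most observations omitted, and compares the resulting posteriors. All names, parameter values, and the noise model are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def logistic(t, r, K, n0=10.0):
    """Deterministic logistic growth: a stand-in for one species' dynamics."""
    return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

# Mock "mesocosm" observations: true parameters plus multiplicative noise.
t_obs = np.linspace(0.0, 20.0, 15)
true_r, true_K = 0.4, 200.0
y_obs = logistic(t_obs, true_r, true_K) * rng.lognormal(0.0, 0.05, t_obs.size)

def log_posterior(theta, t, y, sigma=0.05):
    """Log-posterior with flat priors on positive r, K and lognormal errors."""
    r, K = theta
    if r <= 0 or K <= 0:
        return -np.inf
    resid = np.log(y) - np.log(logistic(t, r, K))
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(t, y, n_iter=5000, step=(0.02, 5.0)):
    """Random-walk Metropolis sampler; returns the second half of the chain."""
    theta = np.array([0.3, 150.0])  # starting guess
    lp = log_posterior(theta, t, y)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_posterior(prop, t, y)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain[n_iter // 2:]  # discard burn-in

full = metropolis(t_obs, y_obs)                # all observations
reduced = metropolis(t_obs[::3], y_obs[::3])   # two thirds of the data omitted

# Compare posterior means and spreads: the reduced fit is typically less
# precise (wider posterior) but may still be accurate enough for prediction.
print("full   :", full.mean(axis=0), full.std(axis=0))
print("reduced:", reduced.mean(axis=0), reduced.std(axis=0))
```

The question Lamonica et al. address is, in essence, which "reduced" designs still yield posteriors tight enough for the intended risk predictions, so that the corresponding experiments need not be run at all.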
The authors clearly demonstrate the added value of the approach, but also discuss limits to the transferability of their recommendations. Their manuscript presents a useful and inspiring illustration of how, in the future, models and experiments should be combined in an integrated, iterative process. This is in line with the current “Destination Earth” initiative of the European Commission, which aims at producing “digital twins” of different environmental sectors, where the continuous mutual updating of models and monitoring designs is the key idea.
The authors make an important point when concluding that “data quality and design are more beneficial for modelling purpose than quantity. Ideally, as the use of models and big data in ecology increases […], modellers and experimenters could collaboratively and profitably elaborate model-guided experiments.”
Beaudouin R, Ginot V, Monod G (2012) Improving mesocosm data analysis through individual-based modelling of control population dynamics: a case study with mosquitofish (Gambusia holbrooki). Ecotoxicology, 21, 155–164. https://doi.org/10.1007/s10646-011-0775-1
Henry M, Becher MA, Osborne JL, Kennedy PJ, Aupinel P, Bretagnolle V, Brun F, Grimm V, Horn J, Requier F (2017) Predictive systems models can help elucidate bee declines driven by multiple combined stressors. Apidologie, 48, 328–339. https://doi.org/10.1007/s13592-016-0476-0
Lamonica D, Charles S, Clément B, Lopes C (2022) Chemical effects on ecological interactions within a model-experiment loop. bioRxiv, 2022.05.24.493191, ver. 6 peer-reviewed and recommended by Peer Community in Ecotoxicology and Environmental Chemistry. https://doi.org/10.1101/2022.05.24.493191
Volker Grimm (2022) A model-experiment loop to optimise data requirements for ecotoxicological risk assessment with mesocosms. Peer Community In Ecotoxicology and Environmental Chemistry, 100002. https://doi.org/10.24072/pci.ecotoxenvchem.100002
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article. The authors declared that they comply with the PCI rule of having no financial conflicts of interest in relation to the content of the article.
Evaluation round #1
DOI or URL of the preprint: https://doi.org/10.1101/2022.05.24.493191
Version of the preprint: 4
Author's Reply, 14 Sep 2022
Decision by Volker Grimm, posted 20 Oct 2022
This is an interesting study where the ultimate goal is to reduce the number of experiments needed to parameterize a 3-species mesocosm model sufficiently well. For this, experiments with one, two or three species, with and without cadmium at different concentrations, are performed and used to parameterize the model via Bayesian MCMC. Then, different data sets omitting certain experiments are used for the same parameterization procedure to see which data sets, and hence experiments, might possibly be omitted when parameterizing a model that would be precise enough to predict the effects of a toxicant.
Both reviewers found the study interesting and scientifically sound. They still raised quite a few issues that should be addressed to improve the clarity of the presentation or provide better justifications for certain designs and assumptions.
I fully agree with the reviewers' assessments. I would like to add: an ODD model description should be complete by itself and thus not require that readers have to dig for relevant information in other papers. Here, quite a few references are made, unspecifically, to Lamonica et al. 2016a. It cannot be that much work to just copy and paste the relevant information in the current ODD.
Moreover, although I see that it can be interesting to learn which kinds of experiments are needed to parameterize a "full system" sufficiently well, do these insights not strongly depend on the species, experimental setup, and toxicant used? I do not fully understand how specific insights into which experiments might be left out can really help us. Once you have the model fully parameterized with the full data set, there is actually no need to go back and use reduced data sets. So, for what kinds of questions, or systems, can the insights gained on the relevance of specific experiments help?