Workshop 4: Population Models in the 21st Century
Animal movement patterns have long fascinated mathematicians and ecologists alike. One type of primarily mathematical investigation focuses on pattern formation: how do individual behavioural decision rules translate into macroscale patterns of space use? Here, mechanistic models built from random walks, stochastic processes, and partial differential equations have connected pattern to process. Another type of primarily ecological investigation correlates space use patterns with underlying environmental features. Here, statistical models based on resource selection have connected patterns to environmental features. In this talk I will build a bridge between mechanism and resource selection using the concept of coupled step selection functions. The approach is based on a mechanistic underpinning for the movement process, but is also amenable to easy statistical inference regarding space use. I will demonstrate how each type of model can be connected to detailed movement data to give new insight about animal behaviour. Applications will be made to a spectrum of different animals ranging from Amazonian birds to caribou to coyotes.
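As a rough illustration of the idea behind step selection functions, the sketch below simulates a walker whose candidate steps are weighted by exp(beta * resource), so a local behavioural rule (prefer better habitat) produces a macroscale pattern of space use. The resource surface, starting point, and parameter values are illustrative assumptions, not the models from the talk.

```python
import math
import random

random.seed(0)

def resource(x, y):
    # Hypothetical resource quality, peaking at the origin. Real applications
    # would use measured landscape covariates instead.
    return math.exp(-(x ** 2 + y ** 2) / 200.0)

def step_selection_walk(n_steps, beta=5.0, step_len=1.0, n_candidates=24):
    """Minimal step selection function: at each step, candidate moves are
    weighted by exp(beta * resource at the candidate location)."""
    x, y = 8.0, 8.0
    path = [(x, y)]
    for _ in range(n_steps):
        angles = [2.0 * math.pi * k / n_candidates for k in range(n_candidates)]
        cands = [(x + step_len * math.cos(a), y + step_len * math.sin(a))
                 for a in angles]
        weights = [math.exp(beta * resource(cx, cy)) for cx, cy in cands]
        x, y = random.choices(cands, weights=weights, k=1)[0]
        path.append((x, y))
    return path

path = step_selection_walk(500)
# With beta > 0 the walker tends to drift toward and linger near good habitat.
```

Fitting beta to observed steps (rather than simulating with it) is what makes the approach amenable to statistical inference from movement data.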
Control of epizootics requires that decisions be made in the face of multiple sources of uncertainty: economic, political, and logistical uncertainty; dynamical uncertainty about epizootiological processes; and the stochastic nature of disease spread. Decision-makers face a fundamental trade-off between the learning that will accrue through continued observation of a disease process and the opportunity cost of inaction. Structured decision-making and adaptive management seek to minimize the opportunity cost of inaction by defining an iterative, state-dependent policy for selecting among alternative management actions. In particular, we seek to define an adaptive policy that responds to the changing state of information about competing dynamical models, as defined in the posterior distribution, and the changing epizootiological state, as defined by the size and spatial extent of an outbreak. We achieve the former through an analysis of the value of information across competing models and sequential analysis of real-time outbreak surveillance from the 2001 foot-and-mouth disease outbreak in the UK. We achieve the latter by using reinforcement learning to solve for an optimal state-dependent policy for the application of vaccination and culling in a spatially explicit livestock outbreak. We show that adaptive policies can result in significant gains over conventional static management.
Infectious disease outbreaks recapitulate biology, emerging from the multi-level interaction of hosts, pathogens, and their shared environment. Therefore, predicting when and where diseases will spread requires a complex systems approach to modeling. However, it remains to be demonstrated that such complex systems are fundamentally predictable. To investigate this question, we study the intrinsic predictability of a diverse set of diseases. Instead of relying on methods that require assumed knowledge of the data-generating model, we utilize permutation entropy as a model-independent metric of predictability. By studying the permutation entropy of a large collection of historical outbreaks (including chlamydia, gonorrhea, hepatitis A, influenza, dengue, measles, polio, whooping cough, Ebola, and Zika), we identify a fundamental horizon for outbreak forecasts. Specifically, most diseases appear to be unpredictable beyond narrow time horizons, thus highlighting the importance of dynamic modeling approaches to prediction. Our results have clear implications for the emerging field of disease forecasting and highlight the need for broader studies on the predictability of complex systems.
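Permutation entropy itself is simple to compute: it is the Shannon entropy of the distribution of ordinal patterns (rank orderings) of short windows of a time series. A minimal sketch, following the standard Bandt-Pompe construction (the normalization and parameter defaults are common conventions, not necessarily those of the study):

```python
from collections import Counter
from math import log2, factorial

def permutation_entropy(series, order=3, delay=1):
    """Normalized permutation entropy: Shannon entropy of the distribution
    of ordinal patterns of length `order`, scaled to [0, 1]."""
    counts = Counter()
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = [series[i + j * delay] for j in range(order)]
        # The ordinal pattern is the permutation that sorts the window.
        counts[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(counts.values())
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    return h / log2(factorial(order))

# A monotone series has a single ordinal pattern (entropy 0); a noisy series
# uses many patterns and scores near 1, i.e. it is intrinsically harder to
# predict. Incidence curves fall somewhere in between.
```

Because only rank orderings are used, the metric needs no assumed generative model, which is what makes it suitable for comparing predictability across very different diseases.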
A fundamental property of mathematical models in ecology and epidemiology is the sensitivity of model outcomes to the precise equations used. Indeed, the 'exact' mathematical formulation of model functions is often unknown; however, the use of slightly different functions that fit the same dataset equally well may give significantly different predictions. In this case, the model is said to be 'structurally sensitive' and its implementation may be grossly misleading. Even for a purely deterministic model, the uncertainty in model functions (e.g. uncertainty in the formulation of growth rates, functional responses, mortality terms, etc.) carries through to the uncertainty of model predictions, and thus it can be a serious obstacle in ecological modelling, especially when making ecological management decisions based on model predictions. In this talk, I will first discuss how the uncertainty in predictions made using biological models with structural sensitivity can be quantified and estimated. In the second part of the talk, I will revisit the fundamental question of how empirical data (including model-guided data collection) should be used to enhance the predictability of ecological models with structural sensitivity.
Juan B. Gutierrez
The advent of high-throughput molecular technologies, and the flood of information they have produced, has forced the biomathematical community to rethink how to conceive, build, and validate mathematical models. In this talk I will demonstrate how the integration of molecular and cellular models shape geographic considerations in the mathematical modeling of malaria. The usefulness of models under this light takes on new meanings, and this broad scope requires the cooperation of scientists coming from very different intellectual traditions. This talk will also explain how an adaptive learning system named ALICE (Adaptive Learning for Interdisciplinary Collaborative Environments) is used to train scientists that approach biomathematics from multiple disciplines.
I shall discuss the utility of mechanistic mathematical models as aids in the design and development of experiments. The impact of model parameters on model outputs can be assessed using techniques from uncertainty quantification. Thus one can determine those parameters for which additional knowledge would best improve the predictive ability of a model. Furthermore, one can gain understanding of what data is needed, and how much and when it should be collected in order to best achieve this aim. I shall illustrate these ideas using some examples from infectious disease projects on which I have worked, including some in the area of mosquito-borne diseases.
Connecting models with data: identifiability and parameter estimation of multiple transmission pathways
Marisa Eisenberg
Connecting dynamic models with data to yield predictive results often requires a variety of parameter estimation, identifiability, and uncertainty quantification techniques. These approaches can help to determine what is possible to estimate from a given model and data set, and help guide new data collection. In this talk, we will discuss approaches to both structural and practical identifiability analysis. Using a range of examples from cholera and the West Africa Ebola epidemic, we illustrate some of the potential difficulties in estimating the relative contributions of different transmission pathways, and show how alternative data collection may help resolve unidentifiability. We also illustrate how even in the presence of large uncertainties in the data and model parameters, it may still be possible to successfully forecast disease dynamics.
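A toy example conveys what structural unidentifiability means here. In the model below, two "pathway" parameters enter the observable output only through their sum, so no amount of data on that output can separate them; this is a deliberately simplified stand-in, not the cholera or Ebola models from the talk.

```python
import math

def model(t, a, b):
    # Toy model in which two transmission-pathway parameters a and b
    # enter the output only through their sum: y(t) = exp(-(a + b) * t).
    return math.exp(-(a + b) * t)

times = [0.0, 0.5, 1.0, 2.0, 4.0]
y1 = [model(t, a=0.3, b=0.7) for t in times]
y2 = [model(t, a=0.9, b=0.1) for t in times]
# y1 == y2 exactly: a and b are structurally unidentifiable individually,
# though their sum is identifiable. Collecting a new data stream that
# measures one pathway separately would resolve the ambiguity.
```

Practical identifiability is the noisy-data analogue of the same question: even when parameters are separable in principle, the data may constrain them too weakly to estimate.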
Internet-based disease surveillance is a tool providing early warning about infectious disease outbreaks. There are variations, but the common idea is to automatically monitor Internet sources (news, blogs, etc.), searching for articles containing keywords related to infectious diseases. Natural language processing is then used to pinpoint the location being mentioned, eliminate duplicates, etc. Some systems additionally have human input to weed out false positives. In all instances, though, these systems produce a large number of alerts.
I will discuss ongoing work using stochastic metapopulation models for the global spread of infectious pathogens along the global air transportation network. I will show in particular how such models can be used to help filter the large number of alerts generated by Internet-trawling surveillance systems.
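To give a flavour of what a stochastic metapopulation model on a transportation network looks like, the sketch below runs chain-binomial SIR dynamics in three coupled "cities". All rates, the network, and the ring-shaped travel pattern are illustrative assumptions, not the talk's actual model or the real air transportation network.

```python
import math
import random

random.seed(2)

# Toy stochastic metapopulation: SIR dynamics in three cities coupled by
# daily travel of infectives along a ring.
beta, gamma, travel = 0.3, 0.1, 0.01   # transmission, recovery, travel prob.
S = [9990, 10000, 10000]
I = [10, 0, 0]
R = [0, 0, 0]

def binom(n, p):
    # Stdlib-only Binomial(n, p) draw (random.binomialvariate only exists
    # from Python 3.12 onward).
    return sum(1 for _ in range(n) if random.random() < p)

for day in range(150):
    # Within-city transmission and recovery (chain-binomial steps).
    for k in range(3):
        n = S[k] + I[k] + R[k]
        new_inf = binom(S[k], 1.0 - math.exp(-beta * I[k] / n))
        new_rec = binom(I[k], gamma)
        S[k] -= new_inf
        I[k] += new_inf - new_rec
        R[k] += new_rec
    # Each infective independently flies to the next city with prob `travel`.
    moves = [binom(I[k], travel) for k in range(3)]
    for k in range(3):
        I[k] -= moves[k]
        I[(k + 1) % 3] += moves[k]
```

In the surveillance-filtering application, a model like this (on the real flight network) supplies a prior over which locations an outbreak could plausibly have reached, against which incoming alerts can be triaged.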
Sara Del Valle
Disease spread is a major health concern around the world, and it is compounded by the increasing globalization of our society. As such, epidemiological modeling approaches need to account for rapid changes in human behavior and community perceptions. Social media has recently played a crucial role in informing and changing people's response to the spread of infectious diseases. I will describe a modeling framework that simulates the movements, activities, and social interactions of millions of individuals, and the dynamics of infectious diseases. The simulation allows for agents' behaviors to be influenced by social media (i.e., Twitter) as well as by their neighbors. This feedback loop allows us to inject emergent attitudes in response to epidemics and quantify their impact. In addition, I will describe how Internet data streams are informing models to better forecast disease spread.
Recent epidemics of pathogens such as H1N1 influenza virus, MERS coronavirus, chikungunya virus, Ebola virus, and Zika virus, highlight the importance of epidemics on local and global scales. Modeling has long been used as a conceptual tool to describe epidemic dynamics and assess possible interventions, yet the direct use of modeling in the public decision making process remains limited. To help close this gap it is essential to build links between the research and decision-making communities to: ensure that modeling targets match specific public health needs, facilitate the sharing of data and knowledge about that data, establish standards for assessing and communicating model skill, identify ways to effectively communicate predictions and especially uncertainties, and develop systems for operationalizing models for repeated use. Efforts to forecast seasonal dengue and influenza outbreaks highlight opportunities to evaluate forecasting models in the context of specific public health needs and advance both the science of infectious disease forecasting and the integration of forecasting into decision-making processes.
The study of human mobility is both of fundamental importance and of great potential value. For example, it can be leveraged to facilitate efficient city planning and improve prevention strategies when faced with epidemics. The wealth of rich sources of data --- including banknote flows, mobile phone records, and transportation data --- has led to an explosion of attempts to characterize modern human mobility. Unfortunately, the dearth of comparable historical data makes it much more difficult to study human mobility patterns from the past. In this talk, I present an analysis of long-term human migration, which is important for processes such as urbanization and the spread of ideas. I demonstrate that the data record from Korean family books (called "jokbo") can be used to estimate migration patterns via marriages from the past 750 years. I apply two generative models of long-term human mobility to quantify the relevance of geographical information to human marriage records in the data, and I illustrate that the wide variety in the geographical distributions of the clans poses interesting challenges for the direct application of these models. Using the different geographical distributions of clans, I quantify the ergodicity of clans in terms of how widely and uniformly they have spread across Korea, and I compare these results to those obtained using surname data from the Czech Republic. To examine population flow in more detail, I also construct and examine a population-flow network between regions. Based on the correlation between ergodicity and migration in Korea, I identify two different types of migration patterns: diffusive and convective. I expect the analysis of diffusive versus convective effects in population flows to be widely applicable to the study of mobility and migration patterns across different cultures.
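Generative models of mobility of the kind applied here are often the gravity and radiation models; the sketch below gives standard textbook forms of each (the functional forms, default exponents, and their correspondence to the models used in the talk are assumptions).

```python
def gravity_flow(m_i, n_j, d_ij, k=1.0, alpha=1.0, beta=1.0, gamma=2.0):
    """Gravity model: flow grows with origin population m_i and destination
    population n_j, and decays with distance d_ij. The exponents are
    illustrative defaults; in practice they are fitted to data."""
    return k * (m_i ** alpha) * (n_j ** beta) / (d_ij ** gamma)

def radiation_flow(T_i, m_i, n_j, s_ij):
    """Radiation model (Simini et al. 2012): T_i is the total outflow from
    the origin, and s_ij is the population living within a circle of radius
    d_ij around the origin, excluding origin and destination."""
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))
```

The radiation model's dependence on the intervening population s_ij, rather than on raw distance, is exactly where the widely varying geographical distributions of Korean clans complicate a direct application.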
State space models combine two different layers of noise: a noise-infused process model and additional measurement noise. A noise-infused process model may track the annual population size of salmon, where the noise in this layer may account for unmodellable environmental fluctuations or random perturbations to migratory routes. Subsequently, the population size is observed via noisy measurements, for instance because of the difficulty of accurately counting the salmon population. As a result, estimating parameters through these two layers of noise requires dealing with considerable uncertainty. The widely adopted Integrated Nested Laplace Approximation (INLA) is designed to approximately integrate out some parts of the model, accelerating and simplifying parameter estimation. The INLA approximation rests on the assumption that performing the integral is equivalent to integrating a Gaussian. The alternative to using INLA, which is also how the validity of the INLA assumption is typically checked, requires high-dimensional and slow but very accurate Monte Carlo integration. This forces the practitioner to choose between the extremes of quick-and-rough or slow-and-precise. In this work we devise an INLA diagnostic/alternative model integration approach that allows the user to decide where to stand along a continuous version of the previously binary speed-versus-accuracy tradeoff. Additionally, the proposed approach outputs a measure of confidence in the applied approximate integral. The method is based on probabilistic numerics, a new area of research bringing together numerical analysis, applied mathematics, statistics, and computer science. This is joint work with Charlie Zhou (Simon Fraser University) and Oksana Chkrebtii (The Ohio State University).
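The two noise layers can be made concrete with a minimal linear-Gaussian state space model, for which exact filtering is available in closed form (the Kalman filter). This toy sketch is only meant to show the process-noise/measurement-noise structure; it is not the salmon model or the probabilistic-numerics method from the talk, and all variances are invented.

```python
import math
import random

random.seed(1)

# Toy state space model on log-abundance:
#   x_t = x_{t-1} + q_t,  q_t ~ N(0, Q)   (process noise: environment)
#   y_t = x_t + r_t,      r_t ~ N(0, R)   (measurement noise: counting error)
Q, R = 0.05, 0.5
x = 5.0
xs, ys = [], []
for _ in range(200):
    x += random.gauss(0.0, math.sqrt(Q))
    xs.append(x)
    ys.append(x + random.gauss(0.0, math.sqrt(R)))

def kalman_filter(ys, Q, R, x0=0.0, P0=10.0):
    """Exact filtering for the scalar random-walk-plus-noise model above."""
    x_est, P = x0, P0
    ests = []
    for y in ys:
        P += Q                       # predict: propagate state uncertainty
        K = P / (P + R)              # Kalman gain
        x_est += K * (y - x_est)     # update with the new observation
        P *= (1.0 - K)
        ests.append(x_est)
    return ests

ests = kalman_filter(ys, Q, R)
# The filtered estimates track the latent states xs more closely than the
# raw observations ys do.
```

In nonlinear or non-Gaussian models no such closed form exists, which is precisely where one must choose between Laplace-type approximations such as INLA and expensive Monte Carlo integration.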