Mathematical Biology Seminar - Fall 2023
Seminars are typically held on Wednesdays at 1:00 PM, in person in CULM 505 or online via Zoom, unless otherwise noted. Please note the location listed in the schedule below.
For questions about the seminar schedule, please contact Kristina Wicke.
September 27
Matthew Junge, Baruch College, CUNY
Location: CULM 505
Title: Chase-escape
Abstract: Stochastic models in which parasite expansion is restricted to the sites occupied by their hosts were introduced by ecologists in the 90s. Most big theoretical questions remain unanswered. I'll describe our efforts to understand these dynamics on lattices, trees, and the configuration model.
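For readers who want a concrete handle on the dynamics, below is a minimal sketch of chase-escape on a finite two-dimensional lattice with periodic boundaries. The grid size, spreading rate lam, and rejection-style updates are illustrative assumptions, not the speaker's construction: hosts ("red") spread to empty neighboring sites, while parasites ("blue") may only invade sites currently occupied by hosts.

```python
# A minimal sketch of chase-escape dynamics (illustrative parameters).
# Hosts (RED) spread to EMPTY neighbors at rate lam; parasites (BLUE)
# spread only onto RED-occupied neighbors at rate 1.
import random

EMPTY, RED, BLUE = 0, 1, 2

def chase_escape(n=50, lam=0.6, steps=200_000, seed=0):
    rng = random.Random(seed)
    grid = [[EMPTY] * n for _ in range(n)]
    grid[n // 2][n // 2] = RED          # initial host
    grid[n // 2][n // 2 - 1] = BLUE     # adjacent parasite
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if grid[i][j] == EMPTY:
            continue
        di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        ni, nj = (i + di) % n, (j + dj) % n   # periodic boundaries
        if grid[i][j] == RED and grid[ni][nj] == EMPTY:
            if rng.random() < lam:      # red spreads at thinned rate lam
                grid[ni][nj] = RED
        elif grid[i][j] == BLUE and grid[ni][nj] == RED:
            grid[ni][nj] = BLUE         # blue can only chase red
    reds = sum(row.count(RED) for row in grid)
    blues = sum(row.count(BLUE) for row in grid)
    return reds, blues

print(chase_escape())                   # (surviving hosts, parasites)
```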
October 4
Daniele Avitabile, Vrije Universiteit Amsterdam
Location: CULM 505
Title: Bump Attractors and Waves in Networks of Integrate-and-Fire Neurons
Abstract: Bump attractors are localised patterns observed in in vivo experiments of neurobiological networks. They are important for the brain's navigational system and specific memory tasks. A bump attractor is characterised by a core in which neurons fire frequently, while those away from the core do not fire. We uncover a relationship between bump attractors and travelling waves in a classical network of excitable, leaky integrate-and-fire neurons. This relationship bears strong similarities to the one between complex spatiotemporal patterns and waves at the onset of pipe turbulence. We define and study analytical properties of the voltage mapping, an operator transforming a solution's firing set into its spatiotemporal profile. This operator allows us to construct localised travelling waves with an arbitrary number of spikes at the core, and to study their linear stability. A homogeneous "laminar" state exists in the network, and it is linearly stable for all values of the principal control parameter. We show that one can construct waves with a seemingly arbitrary number of spikes at the core; the higher the number of spikes, the slower the wave, and the more its profile resembles a stationary bump. As in the fluid-dynamical analogy, such waves coexist with the homogeneous state, are unstable, and the solution branches to which they belong are disconnected from the laminar state. We provide evidence that the dynamics of the bump attractor displays echoes of the unstable waves.
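As background on the model class, here is a minimal sketch of a spatially extended network of excitable leaky integrate-and-fire neurons with distance-dependent coupling. The coupling kernel, thresholds, and Euler discretization are illustrative assumptions, not the network or the voltage mapping analyzed in the talk.

```python
# A minimal sketch of a spatially extended leaky integrate-and-fire
# network (illustrative parameters, not the model from the talk).
import numpy as np

n, dt, T = 200, 0.01, 50.0               # neurons, time step, horizon
v_th, v_reset, I_ext = 1.0, 0.0, 0.9     # threshold, reset, subthreshold drive
x = np.linspace(-np.pi, np.pi, n)
W = (np.cos(x[:, None] - x[None, :]) + 0.5) * (2 * np.pi / n)  # coupling kernel

v = 1.2 * np.exp(-10 * x**2)             # localized suprathreshold kick
steps_with_spikes = 0
for step in range(int(T / dt)):
    spikes = v >= v_th                   # the firing set at this step
    v[spikes] = v_reset                  # reset neurons that fired
    steps_with_spikes += int(spikes.any())
    # leaky integration toward I_ext plus synaptic input from spikes
    v = v + dt * (-v + I_ext) + W @ spikes.astype(float)
print(f"time steps containing at least one spike: {steps_with_spikes}")
```

Since the external drive is subthreshold, activity can only spread through the coupling, which is the excitable setting in which localized waves and bumps compete.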
October 11
Celeste Vallejo, DILIsym/Simulations Plus
Location: Zoom
Title: An Introduction to Mathematical Modeling in Drug Development Using GastroPlus and DILIsym
Abstract: Modeling and simulation provide support for drug development at all stages of the drug lifecycle. Machine learning models can help to select the lead drug candidate during the drug discovery phase. Physiologically based pharmacokinetic (PBPK) models can help to predict the first-in-human dose during the pre-clinical phase or the appropriate drug dose in special populations during the clinical trial phase. Quantitative systems pharmacology/toxicology (QSP/QST) models can help to predict drug efficacy/toxicity during either the pre-clinical or clinical trial phase. In this talk, I will give a brief introduction to drug development as well as the types of models that are used to solve the most pressing questions in the drug development space. An introduction to PBPK models will be given, with the PBPK software GastroPlus used as an illustration. An introduction to QSP/QST models will be given, with the QST software DILIsym used as an illustration.
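For readers new to pharmacokinetic modeling, the sketch below shows the simplest building block that PBPK platforms such as GastroPlus elaborate into whole-body models: a one-compartment model with first-order absorption and elimination, solved in closed form by the Bateman equation. All parameter values are illustrative, and the code is unrelated to the commercial tools named above.

```python
# One-compartment oral-dose PK model (illustrative parameters only).
import numpy as np

ka, ke, V, dose = 1.2, 0.3, 40.0, 100.0   # absorption 1/h, elimination 1/h, L, mg
t = np.linspace(0, 24, 97)                 # hours after dosing
# Bateman equation: plasma concentration with first-order absorption
# and first-order elimination (requires ka != ke).
C = dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
print(f"Cmax = {C.max():.2f} mg/L at t = {t[C.argmax()]:.2f} h")
```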
October 25
Victor Matveev, NJIT
Location: CULM 505/Zoom
Title: Parameter Estimation: Parameter Confidence Intervals Without Accurate Sampling
Abstract: Typical mathematical biology modeling problems involve very high-dimensional parameter spaces, so enormous computational effort is required to estimate model parameters. Most parameter estimation algorithms aim to accurately and efficiently sample the parameter space according to the parameter likelihood function, which quantifies the deviation of the parameter-dependent model results from empirical data. This in turn allows one to compute moments of the resulting posterior parameter distribution, such as parameter averages and correlations. The most common technique, Markov Chain Monte Carlo (MCMC), is extremely easy to implement and is reasonably efficient as long as a single likelihood evaluation (corresponding to a single model run) is not computationally expensive, but it suffers from correlations in the resulting sequences of sample points. Various improved techniques, such as Approximate Bayesian Computation and machine learning-assisted likelihood computation, aim to reduce the number of likelihood evaluations (i.e., model runs) while preserving sampling accuracy. However, in practice the modeler rarely needs an accurate sample of the parameter space, since the moments of the posterior parameter distribution (apart from parameter correlations) are usually not of interest. Rather, a modeler usually seeks to accurately visualize and estimate the individual parameter ranges consistent with experimental data at a given confidence level, specified by a value of the likelihood ratio. These ranges can be obtained by projecting the likelihood function onto the individual parameter axes. In this talk I will give a very brief and very basic introduction to parameter estimation, and then describe a simple and straightforward algorithm that projects the likelihood function onto each parameter axis. The advantage of the projection method is twofold: (1) it reduces the dimensionality of the parameter space by one when computing each projection, and (2) it replaces the likelihood sampling problem with a likelihood optimization problem, which is more straightforward (even if not necessarily more efficient). The main drawback is that this method does not allow one to calculate the moments of the posterior parameter distribution very accurately, for instance the correlations between parameters; however, these correlations can be estimated sufficiently accurately from a small number of MCMC runs, and are also preserved (in typical cases) in the projections of the likelihood function onto individual parameter axes.
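To make the projection idea concrete: it amounts to a profile likelihood in each parameter. The toy sketch below fits an exponential-decay model; for each value of the decay rate k on a grid, the remaining parameter is optimized away, and the confidence interval is read off from the likelihood-ratio threshold. The model, noise level, and grid are stand-in assumptions, not the speaker's algorithm.

```python
# Projecting a likelihood onto one parameter axis (profile likelihood)
# for a toy exponential-decay model with Gaussian noise.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
sigma = 0.1                              # known noise level (assumed)
y = 2.0 * np.exp(-0.5 * t) + sigma * rng.standard_normal(t.size)

def nll(a, k):
    """Gaussian negative log-likelihood, up to an additive constant."""
    return np.sum((y - a * np.exp(-k * t)) ** 2) / (2 * sigma**2)

# Project the likelihood onto the k axis: for each k on a grid,
# optimize away the nuisance parameter a.
k_grid = np.linspace(0.3, 0.7, 41)
profile = np.array([minimize(lambda p, k=k: nll(p[0], k), x0=[1.0]).fun
                    for k in k_grid])

# 95% interval: points whose likelihood ratio clears the chi-square
# threshold for one parameter, 2 * (nll - nll_min) <= 3.84.
inside = profile <= profile.min() + 3.84 / 2
print(f"95% range for k: [{k_grid[inside].min():.3f}, {k_grid[inside].max():.3f}]")
```

Note how the inner optimization replaces sampling, matching advantage (2) in the abstract: each grid point costs one low-dimensional optimization rather than many posterior draws.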
November 1
Deren Eaton, Columbia University
Location: CULM 505
Title: Linking Phylogenetic Inference at Genome-wide and Local Genealogical Scales
Abstract: Genomes are composed of a mosaic of segments inherited from different ancestors, each separated by past recombination events. Consequently, genealogical relationships among multiple genomes vary spatially across different genomic regions. Genealogical variation among unlinked (uncorrelated) genomic regions is well described for either a single population (coalescent) or multiple structured populations (multispecies coalescent). However, the expected similarity among genealogies at linked regions of a genome is less well characterized. Recently, an analytical solution was derived for the distribution of the waiting distance for a change in the genealogical tree spatially across a genome, for a single population with constant effective population size. Here we describe a generalization of this result, in terms of the expected distribution of waiting distances between changes in genealogical trees and topologies, for multiple structured populations with branch-specific effective population sizes (i.e., under the multispecies coalescent). Our solutions establish an expectation for genetic linkage in multispecies datasets and provide a new likelihood framework for linking demographic models with local ancestry inference across genomes. We demonstrate how this framework can be used to calculate the likelihood of an ancestral recombination graph (ARG) in multi-population datasets.
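For intuition about the single-population result being generalized here: recombination events fall on the genealogy as a Poisson process along the genome, so the waiting distance to the next breakpoint is exponential with rate r * L, where r is the per-base recombination rate and L the total branch length of the tree. The numbers below, including the fraction p_change of recombinations that actually alter the tree, are hypothetical placeholders rather than the talk's analytical results.

```python
# Back-of-envelope waiting distances (all numbers hypothetical).
import numpy as np

r = 1e-8                 # recombination rate per bp per generation
L = 240_000.0            # total genealogy branch length, in generations
mean_bp = 1.0 / (r * L)  # exponential waiting distance to next breakpoint
print(f"mean distance to the next recombination on the tree: {mean_bp:,.0f} bp")

# Only a fraction of recombinations change the genealogical tree, so
# tree-change waiting distances are longer (a thinned Poisson process).
p_change = 0.7           # hypothetical placeholder for the derived fraction
print(f"mean distance to the next tree change: {mean_bp / p_change:,.0f} bp")

# Simulated draws agree with the exponential mean:
rng = np.random.default_rng(0)
draws = rng.exponential(scale=mean_bp, size=100_000)
print(f"simulated mean: {draws.mean():,.0f} bp")
```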
November 15
John Nardini, The College of New Jersey
Location: CULM 505
Title: Forecasting and Predicting Stochastic Agent-based Models of Cell Migration With Biologically-informed Neural Networks
Abstract: Collective migration, or the coordinated movement of many individuals, is an important component of many biological processes, including wound healing, tumorigenesis, and embryo development. Spatial agent-based models (ABMs) are often used to model collective migration, but it is challenging to study these models' behavior thoroughly due to their stochastic nature and computational cost. Modelers often overcome these obstacles by coarse-graining discrete ABM rules into continuous mean-field partial differential equation (PDE) models. These models are advantageous because they are fast to simulate; unfortunately, such PDE models can poorly predict ABM behavior (or even be ill-posed) at certain parameter values. In this work, we describe how biologically-informed neural networks (BINNs) can be used to learn BINN-guided PDE models that are capable of accurately predicting ABM behavior. In particular, we show that BINN-guided PDE simulations can forecast future ABM data not seen during model training. Additionally, we demonstrate how to predict ABM data at previously unexplored parameter values by combining BINN-guided PDE simulations with multivariate interpolation. We highlight these results using three separate ABMs that consist of rules on agent pulling and/or adhesion. Surprisingly, BINN-guided PDEs can accurately forecast and predict ABM data with a one-compartment PDE even when the mean-field PDE is ill-posed or requires two compartments. While we focus our presentation on the biological applications, this work is broadly applicable to studying many systems that exhibit the collective migration of individuals.
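To make the BINN construction concrete, the sketch below lets a small neural network play the role of a density-dependent diffusivity D(u) inside a mean-field PDE u_t = (D(u) u_x)_x. The architecture, discretization, and parameters are illustrative assumptions, and the training step against ABM data is omitted.

```python
# A neural network as a learnable PDE term (illustrative, untrained).
import torch

D_net = torch.nn.Sequential(            # learnable diffusivity D(u) >= 0
    torch.nn.Linear(1, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 1), torch.nn.Softplus())

def pde_step(u, dx, dt):
    """One explicit step of u_t = (D(u) u_x)_x with periodic boundaries."""
    D = D_net(u.unsqueeze(-1)).squeeze(-1)
    ux = (torch.roll(u, -1) - torch.roll(u, 1)) / (2 * dx)
    flux = D * ux
    return u + dt * (torch.roll(flux, -1) - torch.roll(flux, 1)) / (2 * dx)

x = torch.linspace(-1, 1, 100)
u = torch.exp(-20 * x**2)               # initial cell-density profile
with torch.no_grad():                   # forward simulation only
    for _ in range(200):
        u = pde_step(u, dx=2 / 99, dt=1e-4)
print(f"mass after simulation: {(u.sum() * 2 / 99).item():.4f}")
```

In an actual BINN workflow, the weights of D_net would be fit so that simulated densities match ABM summary data; here the untrained network simply demonstrates that the hybrid PDE can be stepped forward like any other.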
November 29
Jonathan Jaquette, New Jersey Institute of Technology
Location: CULM 505
Title: Reliability and Robustness of Oscillations in Some Slow-Fast Chaotic Systems
Abstract: A variety of nonlinear models of biological systems generate complex chaotic behaviors that contrast with biological homeostasis, the observation that many biological systems prove remarkably robust in the face of changing external or internal conditions. Motivated by the subtle dynamics of cell activity in a crustacean central pattern generator, we propose a refinement of the notion of chaos that reconciles homeostasis and chaos in systems with multiple timescales. We show that systems displaying relaxation cycles going through chaotic attractors generate chaotic dynamics that are regular at macroscopic timescales, thus consistent with physiological function. We further show that this relative regularity may break down through global bifurcations of chaotic attractors such as crises, beyond which the system may generate erratic activity also at slow timescales. We analyze in detail these phenomena in the chaotic Rulkov map, a classical neuron model known to exhibit a variety of chaotic spike patterns. This leads us to propose that the passage of slow relaxation cycles through a chaotic attractor crisis is a robust, general mechanism for the transition between such dynamics, and we validate this numerically in other models.
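For reference, the chaotic Rulkov map has the well-known fast-slow form x_{n+1} = alpha / (1 + x_n^2) + y_n, y_{n+1} = y_n - mu * (x_n - sigma), with small mu setting the slow timescale. The sketch below iterates it with illustrative parameter values; the specific chaotic regimes and crises studied in the talk may use different ones.

```python
# Iterating the chaotic Rulkov map (illustrative parameters).
import numpy as np

alpha, mu, sigma = 4.5, 0.001, -1.0   # fast gain, slow rate, slow target
x, y = -1.0, -3.0                     # fast voltage-like and slow variables
xs = np.empty(50_000)
for n in range(xs.size):
    # both updates use the current (x, y); tuple assignment handles this
    x, y = alpha / (1 + x * x) + y, y - mu * (x - sigma)
    xs[n] = x
print(f"x range: [{xs.min():.2f}, {xs.max():.2f}]")  # bursts between rest and spiking
```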