Mathematical and Computational Biosciences Collective Colloquium - Fall 2025
Seminars are typically held on Wednesdays from 1:00 - 2:00 PM as hybrid talks unless otherwise noted. The in-person presentation will take place in CKB 116 with a Zoom option for virtual attendees.
For questions about the seminar schedule, please contact James MacLaurin or Kristina Wicke.
Zoom link for seminars: https://njit-edu.zoom.us/j/96449729631?pwd=kx2veoFnO8lpLxDQ2LlpbLNBI2oaL...
September 17
Sage Chen, New York University (NYU)
Multiplicative feedback gating facilitates rapid learning and flexible computation in recurrent neural circuits
The mammalian forebrain is the seat of higher cognition, with architectural parallels to modern machine learning systems: the cortex resembles recurrent neural networks (RNNs), while the thalamus resembles feedforward neural networks (FNNs). How such architectural features endow the forebrain with its learning capacity is unknown. Here, we take inspiration from empirical thalamocortical findings and develop a multiplicative coupling mechanism between RNN-FNN architectures that collectively enhances their computational strengths and learning. The multiplicative interaction imposes a Hebbian weight amplification onto synaptic-neuronal coupling, enabling context-dependent gating and rapid switching. Through a wide range of benchmark experiments on working memory, decision making, control, and pattern classification, we demonstrate that multiplicative gating-driven synaptic plasticity achieves 2- to 100-fold speed improvements in supervised, reinforcement, and unsupervised learning settings, boosting the memory capacity, robustness, and generalization of RNNs. We further demonstrate the efficacy and biological plausibility of multiplicative gating in modeling four multiregional RNN-FNN circuits, including (1) a prefrontal cortex-mediodorsal thalamus network for context-dependent probabilistic decision making and context switching (Mukherjee et al. 2021 Nature; Wang et al. 2023 Nat. Commun.), (2) a cortico-thalamo-cortical network for working memory and attention (Panichello and Buschman, 2021 Nature), (3) a cerebellar-thalamo-cortical network for motor task switching (Pemberton et al., 2024 Nat. Commun.), and (4) an entorhinal cortex-hippocampus network for visuospatial navigation and sequence replay.
Our model predictions not only account for experimental findings reported independently across multiple species (rodent, monkey, human), brain structures (MD thalamus, pulvinar, motor thalamus, hippocampal formation), and tasks (cognitive, motor, navigation), but also provide experimentally testable hypotheses for neural perturbation studies.
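To make the multiplicative-gating idea concrete, here is a minimal numerical sketch (not from the talk; all sizes, weights, and the sigmoidal gate are illustrative assumptions): a feedforward "thalamic" drive z produces an elementwise gate that rescales the recurrent input, so switching z switches the effective recurrent dynamics without changing the recurrent weights themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 16          # recurrent (cortex-like) and feedforward (thalamus-like) sizes
W = rng.normal(0, 1 / np.sqrt(N), (N, N))   # fixed recurrent weights
U = rng.normal(0, 1 / np.sqrt(M), (N, M))   # feedforward-to-recurrent weights

def step(h, z):
    """One update of the recurrent state h, multiplicatively gated by the
    feedforward drive z. The gate g rescales each unit's recurrent input,
    so a different z yields different effective connectivity (context
    switching) while W itself is unchanged."""
    g = 1.0 / (1.0 + np.exp(-U @ z))   # sigmoidal gate in (0, 1)
    return np.tanh(g * (W @ h))

h = rng.normal(size=N)
z_context_a = np.ones(M)
z_context_b = -np.ones(M)
for _ in range(50):
    h = step(h, z_context_a)   # dynamics under context A
h_b = step(h, z_context_b)     # switching the gate changes the dynamics
```

This is a sketch of gating in general, not the specific Hebbian amplification rule of the talk, which additionally couples the gate to synaptic plasticity.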
September 24
David Liberles, Temple University
Patterns of Duplicate Gene Retention Over Different Timescales and With Different Selective Pressure
Gene duplication is an important process leading to gene content evolution in genomes. Three classes of models for gene duplicates will be described. The first model uses a Moran modeling framework to examine gene duplicates that are segregating in a population. Based upon the age of the duplicate and the frequency in the population, a test is performed to ask if the frequency is consistent with neutral processes or if there is evidence for selection on the duplicate itself.
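As a toy illustration of the first framework (purely illustrative; parameters and the neutrality test setup are assumptions, not the speaker's implementation): a neutral Moran process for a duplicate segregating in a population, against which observed (age, frequency) pairs could be compared to look for evidence of selection.

```python
import random

def moran_neutral(pop_size, init_copies, generations, seed=0):
    """Neutral Moran process: at each step one individual reproduces and
    one dies, both chosen uniformly at random. Returns the trajectory of
    the duplicate's copy count. Simulating many such trajectories gives a
    neutral null distribution of frequency given age."""
    rng = random.Random(seed)
    k = init_copies
    traj = [k]
    for _ in range(generations):
        if 0 < k < pop_size:                          # 0 and pop_size are absorbing
            born_dup = rng.random() < k / pop_size    # reproducer carries the duplicate?
            dead_dup = rng.random() < k / pop_size    # dying individual carries it?
            k += born_dup - dead_dup
        traj.append(k)
    return traj

traj = moran_neutral(pop_size=100, init_copies=1, generations=500)
```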
A second modeling framework examines the interplay between dosage balance and functional evolution through subfunctionalization. The model suggests that from whole genome duplication events, dosage balance presents a selective barrier that must be overcome, but ultimately leads to enhanced retention of duplicates. For smaller scale duplicates, the reverse is true.
Lastly, it has been observed that some genes are more likely to be retained after duplication than other genes from a whole genome duplication event. A formal model based upon organismal gene content is presented that provides a time-dependent expectation for duplicate gene retention when different combinations of processes are acting. This modeling framework has been applied to sets of whole genome duplication events in fish and plants, characterizing both the gene contents and preservation processes that have acted.
Together these modeling frameworks provide community tools and present a picture of the processes acting on gene duplicates at different stages.
October 1
Alexa Aucoin and Jie Yang, NJIT Mathematical & Computational Biosciences Collective [Postdoctoral Researchers]
Alexa Aucoin
Supervised machine learning methods for time-frequency analysis of electrophysiological data
The amygdala responds to a large variety of socially and emotionally salient environmental and interoceptive stimuli. The context in which these stimuli occur determines their social and emotional significance. In previous studies we found that context, such as the presence of a social partner, induces persistent brain states that can be recovered from the spontaneous, inter-trial firing rate of neurons. Indeed, the spontaneous firing rates of neurons in the amygdala are different during blocks of gentle grooming touches delivered by a trusted social partner, and during blocks of non-social airflow stimuli delivered by a computer-controlled air valve. Here, we examine the local field potentials (LFPs) recorded during periods of spontaneous activity to determine whether information about context can be extracted from these signals. We found that information about social vs. non-social context is present in the LFP during periods of spontaneous activity, as both classical and modern machine learning methods (SVM and CNN) can reliably decode context from spectrograms of spontaneous LFPs.
Unfortunately, while modern machine learning techniques like deep learning can accurately classify and decode neuronal data, even the simplest deep neural networks can be opaque. This lack of biological interpretability limits the scope and usefulness of deep neural networks for tasks like identifying neural correlates and generating testable hypotheses for future experiments. To address this challenge, we develop two novel shallow neural networks implementing data-driven time-frequency matched filters and compare them with the previously validated deep CNN model. In the context of our experimental data, we show that these shallow matched filter classifiers can be competitive with deeper networks while offering more accessible insights into the underlying neural dynamics.
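The appeal of a matched-filter classifier is that its learned templates are directly inspectable as time-frequency patterns. A minimal sketch of the idea on synthetic data (the data, sizes, and class structure here are invented for illustration; the actual networks in the talk are trained shallow neural networks, not simple class means):

```python
import numpy as np

rng = np.random.default_rng(1)

def matched_filter_classifier(train_specs, train_labels):
    """Build one time-frequency template per class (here, simply the mean
    training spectrogram) and classify a new spectrogram by which template
    it correlates with most strongly. The templates can be plotted and
    interpreted directly, unlike the internals of a deep CNN."""
    classes = sorted(set(train_labels))
    templates = {c: np.mean([s for s, l in zip(train_specs, train_labels) if l == c],
                            axis=0)
                 for c in classes}
    def predict(spec):
        scores = {c: np.sum(spec * t) for c, t in templates.items()}
        return max(scores, key=scores.get)
    return predict, templates

# Synthetic "spectrograms": class 0 carries low-frequency power, class 1 high.
def fake_spec(label):
    s = rng.normal(0, 1, (20, 30))     # 20 frequency bins x 30 time bins
    s[:10] += 2.0 if label == 0 else 0.0
    s[10:] += 2.0 if label == 1 else 0.0
    return s

X = [fake_spec(l) for l in [0, 1] * 50]
y = [0, 1] * 50
predict, templates = matched_filter_classifier(X, y)
acc = np.mean([predict(fake_spec(l)) == l for l in [0, 1] * 20])
```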
Jie Yang
Parameter estimation and phenotype prediction from practical and minimal data
My original background is in molecular biology and cell signaling, and I gradually transitioned to computational biology during my work and graduate studies. In my PhD research, I developed a computational pipeline to generate virtual twins of human induced pluripotent stem cell-derived cardiomyocytes (iPSC-CMs), an in vitro model that has been increasingly adopted in cardiac disease and therapeutics research. While this platform is powerful and flexible, it is also limited by variability in iPSC donor genetic background and cardiac differentiation methods, often producing cardiomyocytes with immature, fetal-like electrophysiological phenotypes. Using an in silico dataset of voltage and calcium transient simulations along with a genetic algorithm for parameter estimation, I found that a dataset combining electrical pacing, ion channel blockade, and changes in buffer ion concentrations sufficiently informed virtual twins to predict unseen ion channel block responses. Predictions of ion channel block response and arrhythmia susceptibility remained accurate when parameterizing on a normalized version of the dataset, suggesting that optical recordings may be used for parameterization where direct ionic current or voltage measurements are not readily available. Through this project, I became interested in parameter estimation in pharmacological and modulatory contexts. For my postdoctoral research, I aim to optimize models of neurophysiology incorporating modulator effects, as well as understand the relationship between parameter search frameworks and their performance on parameter constraint.
October 8
Jon Rubin, University of Pittsburgh
Mysteries of the deep: analyzing basal ganglia function and dysfunction
Making a decision that leads to an action is a fundamental behavioral function. The basal ganglia, a collection of neuronal populations deep in the core of the brain, are classically viewed as fundamental to this process, acting both as a gate that can block or stop action and as a site for reinforcement learning. In this talk, I will discuss three directions of analysis related to basal ganglia function and dysfunction. First, I will talk about our work to make sense of how the complicated pathways in the basal ganglia interact to set a decision policy, to achieve decision outcomes, and to learn from feedback. Second, I will present results explaining surprisingly diverse responses to “inhibitory” signaling to basal ganglia outputs and their implications for parkinsonian pathological activity. Finally, I will share our efforts to develop a model that reflects biological heterogeneity in basal ganglia output neurons and to use it to understand striking results about motor recovery in parkinsonian mice. I will aim to emphasize some of the methods we use along the way, rather than specialized details about basal ganglia.
October 15
Ashok Litwin-Kumar, Columbia University
Dynamics of nonlinear and multi-task recurrent neural networks
I will describe recent work we have done developing the theory of activity in recurrent neural networks. First, I will describe approaches we have developed for characterizing the geometry of this activity, specifically its dimensionality and timescales, and how these depend on the structure of the network. Next, I will describe a model we recently developed to describe how a single network can transition between states in which it produces different dynamics, each associated with a different subspace of neural activity space. I will relate the behavior of this model to observations of neural activity in animals performing different tasks.
October 22
Maurizio Porfiri, New York University (NYU)
Scaling Laws in Living Social Systems
Scaling laws are ubiquitous in mechanics, from material strength to turbulence. A scaling law describes the behavior of a system through a power law, connecting certain properties of the system with its size. Recent studies have identified surprising scaling relationships in living social systems, of which we presently lack a rigorous understanding. As a first step toward a systematic methodology for unveiling the mechanistic underpinnings of scaling laws in these complex systems, we tackle two seemingly different problems: scaling of the metabolic rate of insect colonies with colony size and scaling of firearm ownership prevalence with city size. The first problem exemplifies a typical setup in laboratory research, where the researcher designs an experiment and formulates a mathematical model with a hypothesis to test in mind. Grounded in the hypothesis of reverse social contagion, we put forward an experimentally validated compartmental model for energy savings in insect colonies. The second problem exemplifies the less structured scenario in which only ecological data, with all their shortcomings, are available to the researcher. Working with multidimensional data collected on United States cities, we demonstrate the possibility of informing plausible modeling hypotheses through causal discovery and, consequently, formulating network-theoretic models.
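The basic estimation step behind such studies can be sketched in a few lines (synthetic data; the exponent, prefactor, and noise model below are illustrative assumptions, not results from the talk): a power law y = c * x**beta becomes a straight line in log-log coordinates, so beta is recovered by linear regression.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y ~ c * x**beta by least squares in log-log space and return
    (beta, c). The exponent beta is the quantity of interest in scaling
    studies: beta < 1 indicates sublinear scaling (economies of scale),
    beta > 1 superlinear scaling."""
    beta, log_c = np.polyfit(np.log(x), np.log(y), 1)
    return beta, np.exp(log_c)

# Synthetic example: a quantity scaling sublinearly with system size.
rng = np.random.default_rng(2)
size = np.logspace(1, 4, 60)                               # sizes over 3 decades
rate = 3.0 * size**0.8 * np.exp(rng.normal(0, 0.05, 60))   # true beta = 0.8
beta, c = fit_power_law(size, rate)
```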
October 29
Tatiana Engel, Princeton University
Brain-wide organization of neural computations at single-cell resolution
Understanding how distributed neural activity gives rise to behavior requires a unified view of the spatial and temporal organization of brain-wide computation. To address this challenge, the International Brain Laboratory recorded cellular-resolution spiking activity across the entire mouse brain during a perceptual decision-making task. Using this unique dataset, we mapped neural activity from over 600,000 neurons across nearly 300 brain areas, revealing how sensory, motor, and cognitive variables are represented and integrated throughout the brain. We further examined intrinsic timescales of single neurons across these regions, uncovering a brain-wide organization in which median timescales were up to fivefold longer in the midbrain and hindbrain than in the forebrain. The distribution of timescales across neurons revealed a multiscale architecture, in which fast timescales drove regional differences, while slow timescales followed a universal power law, consistent with dynamics at the edge of instability or chaos. These results outline the large-scale spatial and temporal structure of neural dynamics, providing a foundation for understanding how distributed circuits coordinate perception, decision, and action.
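A common way to estimate an intrinsic timescale, sketched here on synthetic data (this is a generic convention from the timescale literature, not necessarily the estimator used in the study; a real analysis would fit the autocorrelation over many lags): assume the spike-count autocorrelation decays as exp(-lag/tau) and read tau off the decay.

```python
import numpy as np

def intrinsic_timescale(counts, dt):
    """Estimate an intrinsic timescale from the lag-1 autocorrelation of
    binned spike counts, assuming exponential decay
    acf(lag) = exp(-lag * dt / tau), so tau = -dt / log(acf(1))."""
    c = counts - counts.mean()
    rho1 = (c[:-1] * c[1:]).sum() / (c * c).sum()
    return -dt / np.log(rho1)

# Synthetic AR(1) count series with true timescale tau = 0.1 s, dt = 0.01 s.
rng = np.random.default_rng(3)
dt, tau = 0.01, 0.1
phi = np.exp(-dt / tau)                 # AR(1) coefficient matching tau
x = np.zeros(200_000)
for t in range(1, x.size):
    x[t] = phi * x[t - 1] + rng.normal(0, 1)
tau_hat = intrinsic_timescale(x, dt)
```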
November 5
Marcelo Gehara, Rutgers University - Newark
Title/Abstract Forthcoming
November 12
Takuya Ito, IBM T.J. Watson Research Center
Title/Abstract Forthcoming
December 3
Emma Zajdela, Princeton University
Title/Abstract Forthcoming
Last updated: October 21, 2025