Contributed talk session: CT01

Monday, July 17 at 2:30pm

CARD Subgroup Contributed Talks

  • Erin Zhao Indiana University-Purdue University Indianapolis
    "Mathematical modeling of arterial occlusion"
  • Major arterial occlusion can lead to serious health complications such as peripheral arterial disease and ischemic stroke. Pre-existing collateral vessels provide alternate routes for blood flow when a major artery is occluded, thereby supplying oxygen to tissue regions downstream of an occlusion. However, the extent to which collateral vessels are able to compensate for a major arterial occlusion is not fully understood. Mathematical modeling is used in this study to determine the role of collaterals in flow compensation following occlusion and to elucidate which vascular response mechanisms facilitate the recruitment and dilation of collateral vessels. A wall mechanics model is developed and applied to a rat hindlimb model of femoral arterial occlusion and a human brain model of middle cerebral arterial occlusion. The model incorporates both acute vascular responses (e.g., immediate vessel constriction and dilation) and chronic vascular responses (e.g., arteriogenesis and angiogenesis). In the hindlimb model, dilation of the collateral vessel and arteriogenesis were predicted to be the most significant factors leading to increased flow following occlusion. The model is adapted to a simplified geometry of the cerebral circulation to assess the role of leptomeningeal collaterals in compensating for a middle cerebral artery occlusion. This model will yield predictions of blood and tissue oxygenation in the posterior, anterior, and middle cerebral regions and will eventually be compared to data on collateral formation and infarct volume measured during acute ischemic stroke.
  • Keshav Patel University of Utah
    "A Spatially Averaged Model for Platelet Cohesion by Von Willebrand Factor and Fibrinogen"
  • High shear rate conditions in arterial blood flow result from several pathologies, such as cardiovascular disorders, cholesterol buildup, and stenosis. These conditions are correlated with an increased risk of thrombosis, mediated primarily by Von Willebrand Factor (vWF), a mechanically sensitive protein found in circulating blood and released by activated platelets. At high shear rates, vWF elongates from a globular state and reveals binding sites for platelet receptors, mediating both platelet aggregation and activation. Modeling vWF’s effect on aggregation with computational fluid dynamics has allowed researchers to understand relationships between physical and chemical processes in high detail. However, computational complexity limits our ability to examine various physiological conditions that could impact aggregation, like shear rate, vessel geometry, and injury size. In this talk, we will discuss steps toward building a spatially-averaged model of platelet aggregation to inform PDE models and efficiently determine essential parameters involved in aggregate formation. This purely dynamical systems framework incorporates an averaged flow through porous media, imparting a drag force that is balanced by crosslink bonds. These bonds are formed by vWF and fibrinogen and break in a force-dependent manner. Time-dependent simulations exhibit the development of a positive feedback loop, allowing for increased activation and binding to the aggregate. We will show how vWF significantly decreases the time required for platelet aggregates to develop at higher shear rates. Finally, through steady-state analysis, we will highlight model predictions regarding the biological contexts under which vWF allows for platelet aggregation.
  • Michael Watson University of New South Wales
    "Journey to the Centre of the Plaque: A PDE Model of Necrotic Core Localisation in Atherosclerosis"
  • Atherosclerotic plaques are fatty, cellular lesions that form in major arteries. Rupture of mature plaques leads to heart attack and stroke. Plaques are initiated by lipid particles (“bad cholesterol”) that escape the bloodstream and deposit in the artery wall. Specialised immune cells called macrophages are recruited to ingest and remove these lipids. However, when lipid-loaded macrophages die in the artery wall, the plaque accumulates a dangerous necrotic core of lipid and cellular debris. Observations suggest that the necrotic core usually localises in the central or deeper regions of the plaque. This phenomenon is poorly understood because core formation depends upon a complex interplay between the macrophages and lipids in the plaque. In this talk, I will use a novel spatial PDE model to investigate necrotic core localisation in the atherosclerotic plaque. Using steady state analysis and numerical simulations, I will demonstrate the formation of a necrotic core in the model and discuss the factors that determine its profile and position. By identifying the biophysical and immunological mechanisms that lead to realistic simulations of core localisation, I aim to improve current biological understanding of plaque formation, growth, and progression.
  • Solomon Feuerwerker University of Vermont Larner College of Medicine
    "Parsing the effects of differential programmed cell death pathways on plaque stability in atherosclerosis using an agent-based model"
  • Introduction: Atherosclerotic plaque rupture is a critical step in the development of acute vascular events. Plaques become unstable and prone to rupture when there is an increase in the necrotic composition of the plaque. This process involves a combination of systemic features, such as hyperlipidemia, that increase inflammation in the atheroma and accelerate necrotic core formation. While programmed cellular death pathways (PCDPs) play a critical role in the maintenance of homeostasis in living organisms, inflammation-propagating PCDPs, namely pyroptosis and necroptosis, have been implicated in the development of unstable plaques. Traditionally, these PCDPs have been primarily studied independently, but there is an increasing recognition of extensive crosstalk between pathways. We posit that modeling the differential role of PCDPs in an atherogenic environment, with an emphasis on the cell-death features of pyroptosis and necroptosis, can provide insight into the relative contribution of the respective PCDPs in the generation of unstable plaques. Methods: We created an agent-based model of an atheromatous plaque in a segment of arterial wall termed the Ruptured Unstable Plaque Programmed Cell Death Model (RUP-PCDM). The principal agents represented are endothelial cells, monocytes/macrophages, and vascular smooth muscle cells. Rules for the three most commonly studied PCDPs, namely apoptosis, pyroptosis, and necroptosis, were extracted from available literature and implemented in the model. The effect of oxidized Low Density Lipoproteins (oxLDL) was simulated as the inflammatory driver in plaque development. Plaque instability was represented by the number of dead cells present within the necrotic core of plaques. The RUP-PCDM was initially calibrated by reproducing known development of unstable plaques reported in the literature. 
Next, variations in crosstalk rule parameters, including paracrine/autocrine signaling via Interleukin-1 (IL-1), Interleukin-18 (IL-18) and Tumor Necrosis Factor-alpha (TNFα), signaling through Toll-like Receptors (TLRs) and regulation of Nuclear Factor kappa-B (NFκB), were explored by calibrating to the differential effects of inhibition of pyroptosis and necroptosis as reported in distinct experiments. One publication reported a 69% reduction in plaque size by interrupting pyroptosis via knockouts of NLRP3 (1), and a separate publication reported a 68% reduction in the necrotic core with inhibition of necroptosis (2). Finally, the efficacy of dual-PCDP-targeting therapies was evaluated. Results: PCDPs were successfully implemented in the RUP-PCDM, with reproduction of the evolution of unstable plaques as represented by the aggregation of dead cells in the simulated necrotic core. Calibration simulations to fit independent experiments studying pyroptosis and necroptosis identified a set of parameter combinations across paracrine/autocrine PCDP signaling, control of TLR messaging, and NFκB expression which regulate crosstalk between pyroptosis and necroptosis. Inhibition of both pyroptosis and necroptosis fully mitigated plaque progression. Conclusion: The RUP-PCDM accurately replicates the evolution of an unstable atheromatous plaque, demonstrating the respective effects of different PCDPs on this process. The integrative nature of the RUP-PCDM provided the ability to reconcile distinct experiments that studied pyroptosis and necroptosis in isolation and produced a series of testable hypotheses regarding the crosstalk between these PCDPs. While interruption of pyroptosis and necroptosis individually did reduce the development of plaque instability, the redundancy between these two pathways limited the effectiveness of such interventions. 
Interrupting both pathways mitigated the development of the unstable plaque, suggesting the potential benefit of dual-directed therapy. Future work will include expanding the representation of PCDPs to include autophagy and ferroptosis, as well as representing a larger portion of the pathogenic process of atherosclerotic plaque development. References: 1. Duewell, P., Kono, H., Rayner, K. et al. NLRP3 inflammasomes are required for atherogenesis and activated by cholesterol crystals. Nature 464, 1357–1361 (2010). 2. Karunakaran D, Geoffrion M, Wei L, Gan W, Richards L, Shangari P, DeKemp EM, Beanlands RA, Perisic L, Maegdefessel L, Hedin U, Sad S, Guo L, Kolodgie FD, Virmani R, Ruddy T, Rayner KJ. Targeting macrophage necroptosis for therapeutic and diagnostic interventions in atherosclerosis. Sci Adv. 2016 Jul 22;2(7):e1600224. doi: 10.1126/sciadv.1600224. PMID: 27532042; PMCID: PMC4985228.

CDEV Subgroup Contributed Talks

  • David Holloway British Columbia Institute of Technology
    "Size regulation of the lateral organ initiation zone and its role in determining cotyledon number in conifers"
  • Unlike most flowering plants, which have one or two cotyledons (embryonic leaves), many conifers form three or more cotyledons. These are arranged in a whorl, or ring, at a particular distance from the embryo tip. The number of cotyledons, nc, varies substantially within species, even in clonal cultures. nc variability reflects embryo size variability, with larger diameter embryos having higher nc. The radius of the whorl varies over three-fold for the naturally observed range of nc. In the model plant Arabidopsis, the initiation zone for leaf primordia occurs at a minimum between inhibitor zones of HD-ZIP III at the shoot tip and KANADI (KAN) encircling this farther from the tip. A similar mechanism is indicated in conifer embryos by the effects of overexpression of HD-ZIP III inhibitors on cotyledon formation. We developed a PDE model of HD-ZIP III / KAN spatial localization and used this to characterize the molecular regulation that could generate (a) the three-fold whorl radius variation (and associated nc variability) observed in conifer cotyledon development, and (b) the HD-ZIP III and KAN shifts induced experimentally in conifer embryos and in Arabidopsis. The model was used to find the sensitivity of mechanism components for positioning lateral organs (cotyledons or leaves) closer to or farther from the tip. Positional shifting is most readily driven by changes to the spatial extent of upstream regulator patterns and changes in HD-ZIP III / KAN mutual inhibition, and less efficiently driven by changes in upstream dosage or the activation kinetics of HD-ZIP III. Sharper expression boundaries can also be more resistant to shifting than shallower expression boundaries. The strong variability in whorl size reflected in conifer nc (commonly from 2 to 10) may indicate a freer variation in regulatory interactions, whereas flowering plants, with nc = 1 or 2, may require tighter control of such variation. 
The variability in conifer whorl size may have similarities to spatial scaling in fly embryos in which gene expression pattern variation compensates for embryo length variability. The model provides a framework for quantitative experiments on the positional control of lateral organ initiation. This could further our understanding of the factors that control the leaf arrangements, or phyllotaxy, characteristic of a species.
  • Merlin Pelz University of British Columbia
    "The Emergence of Spatial Patterns with Diffusion-Coupled Compartments with Activator-Inhibitor Kinetics in 1-D and 2-D"
  • Since Alan Turing’s pioneering publication on morphogenetic pattern formation obtained with reaction-diffusion (RD) systems, it has been the prevailing belief that two-component reaction-diffusion systems have to include a fast diffusing inhibiting component (inhibitor) and a much slower diffusing activating component (activator) in order to break symmetry from a uniform steady-state. This time-scale separation is often biologically unrealistic for cell signal transduction pathways. We modify the traditional RD paradigm by considering nonlinear reaction kinetics only inside compartments with reactive boundary conditions to the extra-compartmental space, which diffusively couples the compartments via two species. The construction of a nonlinear algebraic system for all existing steady-states enables us to derive a globally coupled matrix eigenvalue problem for the growth rates of eigenperturbations from the symmetric steady-state, on finite domains in 1-D and 2-D and a periodically extended version in 1-D. We show that the membrane reaction rate ratio of inhibitor rate to activator rate is a key bifurcation parameter leading to robust symmetry-breaking of the compartments. Illustrated with Gierer-Meinhardt, FitzHugh-Nagumo and Rauch-Millonas intra-compartmental reaction kinetics, our compartmental reaction-diffusion system does not require diffusion of inhibitor and activator on vastly different time scales. Bifurcation theoretic results for symmetric and asymmetric steady-state patterns obtained from our asymptotic theory are confirmed with full numerical PDE simulations. Our results reveal a possible simple mechanism of the ubiquitous biological cell specialization observed in nature.
  • Renee Dale Donald Danforth Plant Science Center
    "Competition for resources during semi-sequential growth of developmental units drive allometric patterns in the grass Setaria"
  • Resource allocation drives the above-ground distribution of mass in grass plants across discrete developmental units called phytomers. Although the number of phytomers varies in genetically-identical grasses, plants with relatively more phytomers may not exhibit an increase in total biomass or height. To understand what may be driving this, we tracked the growth of 30 S. italica plants from genotypes B100 and A10.1. We experimentally observed that plants from the genotype B100 had between 20 and 22 phytomers, while plants from the genotype A10.1 had between 7 and 9 phytomers. B100 plants with more phytomers (e.g., 22) did not grow taller or have more total leaf length, despite having more leaves than plants with fewer phytomers (e.g., 20). A10.1 plants with more phytomers (e.g., 9) did grow taller and had more total leaf length than those with fewer phytomers (e.g., 7). We developed a dynamical model to determine if these patterns are emergent from the underlying growth structure. The model is parameterized using the number of phytomers and related developmental time parameters: leaf emergence, stem and leaf elongation time, and time of panicle emergence. The model uses the semi-sequential nature of phytomer growth as its structure. The model predicts that the number of phytomers, the length of time each phytomer grows, and the shift to reproductive growth determine the allometric patterns. Together, model and data suggest that allometric patterns are driven by competition for resources between phytomers and the shift to reproductive growth in Setaria. These results are further developed using 5 additional Setaria genotypes, and through exploration of predicting developmental time parameters from growth curves obtained in high-throughput.
  • Samantha Ivings University of Sheffield
    "Connecting abstract maths to cell biology: A novel vertex model for predicting pluripotent stem cell lineage decisions, capturing cell symmetries through internal node dynamics"
  • Human pluripotent stem cells (hPSCs) are vastly promising for regenerative medicine, as they can self-renew indefinitely in vitro and become any cell type in the human body. Therefore, patients with damaged or lost tissue could be effectively treated by growing hPSCs into the required cell type for treatment. Unfortunately, how hPSCs make lineage decisions remains poorly understood. The process by which hPSCs acquire lineage is called differentiation, and a cell’s progress along this journey can be measured by its expression levels of genetic markers. At present, allowing cells to freely differentiate yields seemingly heterogeneous lineage patterning across unconstrained cultures. A ground-breaking study by Warmflash et al. (2014) showed that differentiating hPSCs spatially confined on discs yields reproducible lineage patterning across the cell culture, in the form of concentric rings. Later, Muncie et al. (2019) demonstrated reproducible patterning by differentiating hPSCs on a wider range of controlled geometries. How single hPSCs orchestrate these phenomena remains an interesting question. In this work, a novel cell-level modelling approach is proposed for explaining lineage patterning on controlled culture geometries. An undirected vertex model is introduced, where nodes represent single hPSCs. Cells are ‘neighbours’, and therefore share an edge, if their membranes are in contact. Nodes have internal dynamics in the form of an ordinary differential equation system, representing gene networks. These dynamics capture novel symmetry relations between nodes. Mimicking cell morphology in vitro, each node is represented by a Voronoi object which together tile the simulated geometry. Simulations of cell equilibria recapitulate experimental results, supporting that culture geometries may be designed to control lineage at low expense. The model is further developed by initialising a single cell with stochastic division until the culture is fully populated. 
We hope that tracking the trajectory of each cell’s dynamics will provide insight into how different lineage paths are explored before each cell fully commits during the robust process of embryogenesis.

ECOP Subgroup Contributed Talks

  • Brendon McGuinness McGill University
    "Optimization of resource consumption traits shape community structure and the strength of niche and neutral processes in competitive communities"
  • Resource competition theory has overlooked the feedbacks that emerge from organisms shaping their environment through resource consumption, which, in turn, shape plastic changes in traits in direct response to environmental variation. Community ecology has yet to integrate this feedback into predictions of community structure that include functional diversity and relative abundance distributions. Here we study how plasticity in resource consumption traits, defined by individual energy allocation constraints, shapes community structure. We adopt a model incorporating plasticity in classic consumer-resource models where consumption strategies become dynamic state variables through optimizing organism growth by gradient ascent, underpinned by investment constraints in physiological machinery for acquisition of resources. Our results predict how plasticity in resource consumption strategies results in trait dynamics that allow a species to avoid competitors while maximizing its efficiency on available resources. Interestingly, this plastic optimization in even just one species in a community allows all other non-plastic species to coexist, a case of positive facilitation arising from purely competitive interactions. Additionally, we build a simple model based on plasticity maximizing resource uptake while minimizing competition that is predictive of relative species abundances so long as a geometric characteristic that we define as community supply vector distance is above a certain threshold. We study two cases, one where species consumption strategies are predictive of species abundances (niche), and a second in which consumption strategies are not predictive of species abundances (neutral). 
In the first case, we find that initial consumption strategies are more predictive of relative abundance distributions than equilibrium consumption strategies, highlighting the importance of the consumption strategy species have when colonizing a new environment in determining community structure when these traits are dynamic. Our study suggests that plasticity constrained by individual energy allocation provides a mechanism of facilitation that promotes coexistence and determines the relative strength of niche and neutral processes as drivers of community structure.
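The gradient-ascent trait dynamics described in this abstract can be illustrated with a toy consumer-resource sketch. This is an assumed minimal version (one adaptive consumer, two resources, illustrative parameter values), not the study's actual model: the uptake strategy climbs the gradient of the per-capita growth rate, projected so that total allocation stays fixed, mimicking the energy-allocation constraint.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy sketch (not the talk's exact model): one consumer whose uptake
# strategy alpha adapts by gradient ascent on its per-capita growth
# rate, projected so total allocation sum(alpha) stays fixed.
# All parameter values below are illustrative assumptions.
p = 2                            # number of resources
w = np.array([1.0, 1.0])         # per-unit resource value
r, K = 1.0, np.array([1.0, 0.6]) # resource renewal rate and supply points
m, eps = 0.2, 0.5                # consumer mortality, adaptation speed

def rhs(t, y):
    N, R, alpha = y[0], y[1:1 + p], y[1 + p:]
    growth = alpha @ (w * R) - m          # per-capita growth rate
    dN = N * growth
    dR = r * (K - R) - N * alpha * R      # renewal minus consumption
    grad = w * R                          # d(growth)/d(alpha_i)
    dalpha = eps * (grad - grad.mean())   # project onto sum(alpha) = const
    return np.concatenate(([dN], dR, dalpha))

y0 = np.concatenate(([0.1], K, [0.5, 0.5]))  # start with an even strategy
sol = solve_ivp(rhs, (0, 200), y0, rtol=1e-8)
alpha_final = sol.y[1 + p:, -1]
```

Subtracting the mean of the gradient keeps `sum(alpha)` constant, so investing more in one resource's acquisition machinery necessarily divests from the other; the strategy shifts toward the better-supplied resource until marginal returns equalize.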
  • Farshad Shirani Georgia Institute of Technology
    "Competition, Phenotypic Adaptation, and the Evolution of a Species’ Range"
  • Geographic ranges of communities of species evolve in response to environmental, ecological, and evolutionary forces. Understanding the effects of these forces on species’ range dynamics is a major goal of spatial ecology. Previous mathematical models have jointly captured the dynamic changes in species’ population distributions and the selective evolution of fitness-related phenotypic traits in the presence of an environmental gradient. These models inevitably include some unrealistic assumptions, and biologically reasonable ranges of values for their parameters are not easy to specify. As a result, simulations of the seminal models of this type can lead to markedly different conclusions about the behavior of such populations, including the possibility of maladaptation setting stable range boundaries. In this talk, we present our work on harmonizing such results by developing and simulating a continuum model of range evolution in a community of species that interact competitively while diffusing over an environmental gradient. Our model extends existing models by incorporating both competition and freely changing intraspecific trait variance. We show that the spatial profile of species’ trait variance that is predicted by our model is consistent with experimental measurements available in the literature. Moreover, we show that our results reaffirm interspecific competition as an effective factor in limiting species’ ranges, even when trait variance is not artificially constrained.
  • Maud El-Hachem CSIRO (Commonwealth Scientific and Industrial Research Organisation)
    "Coexistence in two-species competition with delayed maturation"
  • Mortality caused by competition that occurs during maturation is explicitly modelled in some alternative formulations of the Lotka-Volterra competition model. We generalise this approach by using a distributed delay for maturation time. The distributed delay separates the mature from the immature individuals, to represent a species where competition is more important for immature individuals and where maturation time is long compared to lifetime. The resulting system of delay differential equations (DDEs) is transformed into a system of ordinary differential equations (ODEs) using the linear chain approximation. We show how the survival of a species depends on the rate of maturation being able to compensate for the rate of loss due to mortality of adults and immature individuals. A species fit for survival enters into competition with another species, leading to competitive exclusion, to stable coexistence of both species, or to unstable coexistence. We determine the stability conditions using the nullclines method and local stability analysis. The introduction of a distributed delay promotes coexistence and survival of the species compared to the limiting case of a discrete delay, potentially affecting management of relevant pests and threatened species.
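The linear chain approximation mentioned in this abstract can be sketched concretely. The following toy single-species model (an illustration with assumed parameter values, not the talk's two-species model) represents an Erlang-distributed maturation delay with mean tau as a chain of n immature stages, each transitioning at rate n/tau, so the delay differential equation becomes a system of ODEs:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear chain approximation: an Erlang(n, n/tau) maturation delay is
# modeled as n sequential immature stages. Parameters are illustrative.
n = 5          # number of chain stages (n -> infinity recovers a discrete delay)
tau = 2.0      # mean maturation time
b = 1.2        # per-capita birth rate of adults
mu_j = 0.1     # immature (juvenile) mortality rate
mu_a = 0.3     # adult mortality rate
a = n / tau    # transition rate between successive stages

def rhs(t, y):
    u, A = y[:n], y[n]               # immature stages and adults
    du = np.empty(n)
    du[0] = b * A - (a + mu_j) * u[0]
    for j in range(1, n):
        du[j] = a * u[j - 1] - (a + mu_j) * u[j]
    dA = a * u[-1] - mu_a * A        # maturation inflow minus adult deaths
    return np.append(du, dA)

# Start with adults only; the chain fills over roughly one delay time tau.
sol = solve_ivp(rhs, (0, 50), np.append(np.zeros(n), 1.0), rtol=1e-8)
```

With these rates the species persists because maturation through the chain (each stage survived with probability a/(a + mu_j)) compensates for adult mortality, the survival condition the abstract describes; as n grows the Erlang kernel sharpens toward the discrete-delay limiting case.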
  • Zhao (Wendy) Wang McGill University
    "Dynamics of a reduced gene regulation model with transport-driven state-dependent delay"
  • Time-delays arise naturally in biological systems involving transport processes. We study the dynamics of a scalar delay differential equation where the delay induced by transport depends on the state of the system, and is defined implicitly by a threshold condition. The model can be derived from an extended gene regulation model where we found that the inclusion of threshold state-dependent transcription and translation delay enriches the potential operon dynamics in contrast to models with constant delays. We systematically study the various cases when feedback and transport velocity are described by increasing or decreasing (or constant) Hill functions of the state variable. We also examine the stability and bifurcations of the steady states in a limiting case where the Hill function turns into a piecewise constant function. With constant transport velocity, fold bifurcations always bound the stability of the steady state even in the presence of Hopf bifurcations, while a unique steady state can lose stability through supercritical Hopf bifurcations. The dynamics with variable transport velocity are more interesting and complex due to the existence of higher-codimension bifurcations of the steady states as well as periodic orbits. The steady state can undergo codimension-2 bifurcations such as the fold-Hopf bifurcation and the Bogdanov–Takens bifurcation, the latter associated with three nearby codimension-1 bifurcations: Hopf, fold, and homoclinic bifurcations. Understanding the dynamics of the reduced scalar model may help to locate regions where interesting dynamics could occur for the full model.
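For readers unfamiliar with threshold-type state-dependent delays, a standard formulation (assumed here as an illustration; the talk's exact condition may differ) defines the transport delay tau(t) implicitly by requiring that material moving with state-dependent velocity v cover a fixed distance L:

```latex
\[
  \int_{t-\tau(t)}^{t} v\bigl(x(s)\bigr)\,\mathrm{d}s = L .
\]
```

Differentiating this condition with respect to t yields \(\tau'(t) = 1 - v\bigl(x(t)\bigr)/v\bigl(x(t-\tau(t))\bigr)\), an ODE for the delay that can be solved alongside the delay equation itself; with constant velocity it reduces to the fixed delay \(\tau = L/v\).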

IMMU Subgroup Contributed Talks

  • Benjamin Whipple University of Idaho
    "Regulation of CD8+ T cells may explain age dependent immune response to influenza infection"
  • Influenza viral infection is known to have more serious consequences on elderly populations. Previous modeling efforts for influenza infection have found differences in the immune response dynamics to influenza infection between young and aged mice. A better understanding of the immunological mechanisms by which aging leads to discrepant immune responses may inform treatment strategies. One possible explanation for these differences may be a difference between ages in the intensity of the activation of CD8+ T cell proliferation by the presence of virus. In order to further investigate this proposed mechanism and the difference in immune response dynamics, we consider several ordinary differential equation models describing the dynamics of CD8+ T cell and viral titer count. We apply these models to existing experimental data of viral titer and CD8+ T cell counts collected periodically over 19 days from mouse populations infected with influenza A/Puerto Rico/8/34 (H1N1). The models we consider are fit to our data by the differential evolution method for global optimization. After fitting the models, we use the Akaike information criterion with small sample corrections in order to identify the model which best represents the considered data. Our chosen model differs from previously considered models by the inclusion of viral regulation of CD8+ T cells. We perform identifiability analysis of the selected model by considering loss profiles across the parameter search range. We identify that relationships between model parameters present challenges for model identifiability. We find that when clearance rate of virus by T cells is assumed to differ between populations then our model predicts two key differences in immune response dynamics. First, it predicts delayed proliferation response for the younger mice. Second, it predicts higher CD8+ T cell regulation by virus for the younger mice.
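The small-sample-corrected Akaike information criterion used for model selection here has a standard closed form. A minimal sketch follows; the residual sums of squares and parameter counts are hypothetical placeholders (not the study's values), and Gaussian i.i.d. residuals are assumed so that AIC can be computed from the residual sum of squares:

```python
import numpy as np

def aicc(rss, n_obs, k):
    """AIC with small-sample correction (AICc), assuming i.i.d. Gaussian
    residuals so that, up to a constant, AIC = n*log(RSS/n) + 2k."""
    aic = n_obs * np.log(rss / n_obs) + 2 * k
    return aic + 2 * k * (k + 1) / (n_obs - k - 1)

# Hypothetical comparison: a model adding viral regulation of CD8+ T cells
# has one extra parameter; it is preferred only if its improved fit
# (lower RSS) outweighs the added complexity penalty.
scores = {
    "no_regulation":    aicc(rss=4.2, n_obs=20, k=5),
    "viral_regulation": aicc(rss=2.9, n_obs=20, k=6),
}
best = min(scores, key=scores.get)  # lower AICc is better
```

The correction term 2k(k+1)/(n-k-1) matters precisely in settings like this one, where the number of longitudinal observations is small relative to the number of fitted parameters.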
  • Bryan Shin University of Vermont Larner College of Medicine
    "Examining B-cell dynamics and responsiveness in different inflammatory milieus using an agent-based model"
  • Introduction: B-cells are essential components of the immune system that neutralize infectious agents through the generation of antigen-specific antibodies and through the phagocytic functions of naïve and memory B-cells. However, the B-cell response can become compromised by a variety of conditions that alter the overall inflammatory milieu, be that due to substantial, acute insults as seen in sepsis, or due to those that produce low-level, smoldering background inflammation such as diabetes, obesity, or advanced age. This B-cell dysfunction, mediated by the inflammatory cytokines Interleukin-6 (IL-6) and Tumor Necrosis Factor-alpha (TNF-α), increases the susceptibility of late-stage sepsis patients to nosocomial infections and increases the incidence or severity of recurrent infections, such as SARS-CoV-2, in those with chronic conditions. We propose that modeling B-cell dynamics can aid the investigation of their responses to different levels and patterns of systemic inflammation. Methods: The B-cell Immunity Agent-based Model (BCIABM) was developed by integrating knowledge regarding naïve B-cells, short-lived plasma cells, long-lived plasma cells, memory B-cells, and regulatory B-cells, along with their various differentiation pathways and cytokines/mediators. The BCIABM was calibrated to reflect physiologic behaviors in response to: 1) mild antigen stimuli expected to result in immune sensitization through the generation of effective immune memory, and 2) severe antigen challenges representing the acute substantial inflammation seen during sepsis, previously documented in studies on B-cell behavior in septic patients. Once calibrated, the BCIABM was used to simulate the B-cell response to repeat antigen stimuli during states of low, chronic background inflammation, implemented as low background levels of IL-6 and TNF-α often seen in patients with conditions such as diabetes, obesity, or advanced age. 
The levels of immune responsiveness were evaluated and validated by comparison to a Veterans Administration (VA) patient cohort with COVID-19 infection known to have a higher incidence of such comorbidities. Results: The BCIABM was successfully able to reproduce the expected appropriate development of immune memory to mild antigen exposure, as well as the immunoparalysis seen in septic patients. Simulation experiments then revealed significantly decreased B-cell responsiveness as levels of background chronic inflammation increased, reproducing the different COVID-19 infection data seen in a VA population. Conclusion: The BCIABM proved useful in dynamically representing known mechanisms of B-cell function and reproduced immune memory responses across a range of different antigen exposures and inflammatory statuses. These results elucidate previous studies demonstrating a similar negative correlation between the B-cell response and background inflammation by positing an established and conserved mechanism that explains B-cell dysfunction across a wide range of phenotypic presentations.
  • Quiyana M. Murphy Virginia Tech
    "Understanding Neutrophil Dynamics during COVID-19 Infection"
  • Infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) results in varied clinical outcomes, with virus-induced chronic inflammation and tissue injury being associated with enhanced disease pathogenesis. To determine the role of tissue damage in the recruitment and function of immune populations, a mathematical model of innate immunity following SARS-CoV-2 infection has been proposed. The model was fit to published longitudinal immune marker data from patients with mild and severe COVID-19 disease and key parameters were estimated for each clinical outcome. Analytical, bifurcation, and numerical investigations were conducted to determine the effect of parameters and initial conditions on long-term dynamics. The results were used to suggest changes needed to achieve immune resolution.

IMMU Subgroup Contributed Talks

  • Jason E. Shoemaker University of Pittsburgh
    "Sex-specific immunoregulation: Computational modeling approaches to determine why biological females may experience greater inflammation during influenza infection"
  • In humans, biological males and females often experience different outcomes during respiratory infections. Here, we are referring to differences in biological sex (XX and XY) and not gender, which includes behaviors and activities that are determined by society or culture in humans. During the 2009 H1N1 influenza pandemic, adult females were at greater risk than their age-matched male counterparts for hospitalization and death. Many factors may determine sex-specific outcomes, but there is considerable evidence that sex-specific immune regulation is a key driver of enhanced disease pathology. Severe influenza infections are characterized by aggressive inflammatory responses, and studies show that male and female inflammatory responses differ when infected with a common virus. Yet, it remains unknown how these inflammatory differences emerge. In complex systems, a change in a single component’s behavior impacts the temporal response of other system components. Computational modeling is a powerful tool for determining how changes in the behavior of immune system components lead to changes in the overall system response, and it can consider multiple hypotheses on how sex-specific immune responses emerge. Our research team develops knowledge-based, mechanistic models of the lung immune system and then employs state-of-the-art optimization and Bayesian inference approaches to rigorously determine how the lung immune system is differently regulated between cohorts. These models can be constructed to consider several factors impacting sex-specific regulation simultaneously, including the effects of hormones and sex-chromosome-dependent gene regulation. Here, we will discuss our most recent effort, in which we constructed a computational model of the mouse lung innate immune system and used it to determine how the immune system is differently regulated in male and female mice infected with a pandemic H1N1 virus.
Our results suggest that not only do the rates of key immune processes, particularly those associated with interferon induction, have to be different between males and females, but that the effectiveness of therapeutic intervention using anti-inflammatory compounds is also sex-specific. The model is now being expanded to include additional immune components and we are currently developing strategies for incorporating hormone regulation.
  • Elizabeth R Duke Fred Hutchinson Cancer Center
    "Intrahost mathematical modeling of CAR T cells for HIV cure"
  • The primary barrier to cure from human immunodeficiency virus (HIV) is a reservoir of long-lived, latently infected CD4+ T cells. This reservoir causes viral rebound when people with HIV stop taking daily antiretroviral therapy (ART). One approach to reducing viral rebound is to use T cells with HIV-specific chimeric antigen receptors (CAR T cells) to target and destroy reservoir cells after they activate. In a pilot study, four rhesus macaques (n = 4) infected with Simian-HIV (SHIV) were given a single infusion of CAR T cells during ART to induce post-rebound control after ART was interrupted. Macaques that received CAR T cells had a lower viral peak after analytical treatment interruption (ATI) than controls. To model this intervention, we first developed ordinary differential equation (ODE) models to recapitulate viral loads during primary infection and post-ATI rebound in the control animals. Then, using viral parameter values from the control model, we fit candidate ODE models to plasma viral loads and the CD4+ and CD8+ CAR T cell measurements from SHIV-infected macaques that received CAR T cells. Using the best-fitting version of the model, we found that the parameter modulating CAR T cell proliferation in response to SHIV correlated with significantly lower post-ATI viral peaks. We simulated the data-validated model for each macaque to find conditions in which the CAR T cell infusion achieved ART-free SHIV remission. Although gene and cell therapy strategies for HIV cure are in the initial stages, mathematical modeling might accelerate the success of these approaches.
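As a sketch of this style of model, the control arm can be captured by a standard target-cell-limited ODE system, extended with a CAR T cell compartment that proliferates in response to viral antigen and kills infected cells. All parameter values and the functional form of the CAR T terms below are hypothetical stand-ins for illustration, not the fitted model from this study:

```python
def simulate(days=30.0, dt=0.001, E0=0.0):
    # Hypothetical rates: target-cell supply/death, infectivity, infected-cell
    # death, virion production/clearance, CAR T killing/proliferation/decay.
    lam, d, beta, delta = 1e4, 0.01, 1e-7, 0.5
    p, c, k, omega, K, dE = 200.0, 5.0, 1e-4, 0.1, 1e4, 0.05
    T, I, V, E = 1e6, 0.0, 1.0, E0
    peak = 0.0
    for _ in range(int(days / dt)):  # forward Euler integration
        dT = lam - d * T - beta * T * V
        dI = beta * T * V - delta * I - k * E * I     # CAR T killing term
        dV = p * I - c * V
        dEdt = omega * E * V / (V + K) - dE * E       # antigen-driven expansion
        T += dt * dT; I += dt * dI; V += dt * dV; E += dt * dEdt
        peak = max(peak, V)
    return peak

peak_control = simulate(E0=0.0)     # no CAR T cells present at ATI
peak_car = simulate(E0=1e3)         # CAR T cells present at ATI
```

In this toy setting the extra infected-cell killing term lowers the post-ATI viral peak, mirroring the qualitative observation in the treated macaques.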
  • Vitaly V. Ganusov University of Tennessee
    "Using mathematical modeling to determine pathways of Mycobacterium tuberculosis dissemination in mice"
  • Tuberculosis (TB) remains a major disease of humans, claiming 1.6 million lives in 2021. TB is caused by the bacterium Mycobacterium tuberculosis (Mtb), which is transmitted by aerosol and initiates infection in the lung. Over time, Mtb often disseminates from the initial infection site to other parts of the lung and, in some cases, to extrapulmonary sites such as lymph nodes or spleen. In mice, infection with aerosolized Mtb also initially infects the lung, but over time Mtb is typically found in many extrapulmonary tissues such as lung-draining lymph nodes (LNs), spleen, or liver. The specific pathways of Mtb dissemination from the lung to other tissues, however, remain unclear. One study (Chackerian et al. 2002) measured dissemination of Mtb Erdman in two strains of mice (B6 and C3H) and suggested that Mtb first disseminates from the lung to the LNs and then to spleen and liver. We developed several alternative mathematical models describing how Mtb could disseminate from the lung to other tissues. Interestingly, we found that these data were insufficient to establish the Mtb dissemination pathway based on model fits; for example, the model in which Mtb spreads from the lung to LNs and then from LNs to spleen/liver, and the model in which it spreads from the lung to spleen/liver and then to LNs, described the data with similar quality. However, the second model predicted an extremely high rate of Mtb replication in the spleen/liver and a high dissemination rate to the LNs; estimating these rates in future experiments may help falsify the model. The results were similar for the two strains of mice (B6 and C3H). Interestingly, while Mtb causes stronger pathology in C3H mice, we also found that the rates of Mtb replication in the lung and other tissues were lower in C3H mice than in B6 mice.
Our best models indicate that after lung infection, most Mtb (75% or more) exiting the lung disseminate to lung-draining LNs, suggesting that control of Mtb replication in the LNs with an appropriate vaccine could be a strategy to prevent systemic dissemination.
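A minimal version of such a lung-first dissemination model routes a fixed fraction of bacteria egressing the lung to the lymph nodes versus spleen/liver. All rates and the routing fraction below are hypothetical illustrations, not the estimates from this study:

```python
# Linear compartment sketch of the lung-first dissemination pathway.
R_LUNG, D_OUT = 0.8, 0.1   # replication in, and egress from, the lung (per day)
R_LN, R_SP = 0.5, 0.5      # replication in LNs and in spleen/liver (per day)
F_LN = 0.75                # fraction of egressing Mtb routed to draining LNs

def simulate(days=20.0, dt=0.01, lung0=100.0):
    lung, ln, sp = lung0, 0.0, 0.0
    for _ in range(int(days / dt)):   # forward Euler
        out = D_OUT * lung
        lung += dt * (R_LUNG * lung - out)
        ln += dt * (R_LN * ln + F_LN * out)
        sp += dt * (R_SP * sp + (1 - F_LN) * out)
    return lung, ln, sp

lung, ln, sp = simulate()
```

With equal downstream replication rates, the LN burden stays exactly three times the spleen/liver burden, so observed tissue ratios directly constrain the routing fraction.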
  • Alexis Hoerter Purdue University
    "Agent Based Model Investigating Latent and Naïve In Vitro M. Tuberculosis Infection Dynamics"
  • Prior to COVID-19, tuberculosis (TB) was the leading cause of death due to a single infectious agent – Mycobacterium tuberculosis (Mtb). The hallmark of Mtb infection is the formation of granulomas – unique microenvironments orchestrated by the immune response to contain Mtb and localize host-pathogen interactions. The host immune status is an important determinant in the formation of granulomas during Mtb infection. Approximately 90% of individuals infected with Mtb harbor granulomas that control bacterial spread, resulting in asymptomatic disease known as latent TB infection (LTBI). In vitro granuloma models have helped to elucidate granuloma development, as they allow highly controllable, high-time-resolution investigations into granuloma formation. Specifically, an in vitro model that generates 3D granuloma-like structures through infection of human donor PBMCs with Mtb has shown that cells from LTBI donors better control Mtb growth compared to cells from naïve donors (those never exposed to Mtb). But identifying the mechanisms behind these differences is challenging using experimental data alone. Here, we present a complementary approach using our agent-based model of these in vitro granulomas to help elucidate differences between LTBI and naïve host cell responses. Our computational model mimics Mtb infection through interactions between virtual macrophages, CD4+ T cells, and Mtb. Mechanisms include Mtb growth, macrophage phagocytosis resulting in Mtb death or macrophage infection, macrophage and T cell activation, T cell proliferation, and cytokine/chemokine diffusion and degradation. The model is implemented in Repast Simphony and has been calibrated to published data from LTBI and naïve donor cells.
Model outputs are calibrated to fall within A) 0.5-1.5x the experimental intracellular bacterial fold change at 3, 4, 5, 7, and 8 days post infection, B) 0.6-1.8x the experimental total cell fold change at day 7 post infection, and C) the day of first granuloma formation (day 3 or 4 post infection for LTBI and day 5 or 6 post infection for naïve). We used Latin Hypercube Sampling (LHS) along with the Alternating Density Subtraction (ADS) method to perform iterative calibrations and identify a robust parameter region in which at least 75% of our parameter sets passed our criteria. We calibrated parameters for the LTBI and naïve datasets in parallel after the first LHS-ADS calibration iteration. Calibration was complete after 5 iterations, with 89% and 84% of parameter sets passing our criteria for the LTBI and naïve groups, respectively. Results show that, starting at day 2 post infection, LTBI-like simulations had a significantly higher number of activated TB-specific CD4+ T cells than the naïve-like simulations. This early activation of CD4+ T cells corresponded with an early increase in the number of total activated macrophages and activated infected macrophages in the LTBI-like simulations. Macrophage activation in the naïve group lagged approximately 3 days behind the LTBI group. Interestingly, the total number of infected macrophages was lower in the LTBI simulations; despite fewer total infected macrophages throughout the infection, LTBI-like simulations controlled bacteria better than the naïve-like simulations, with fewer intracellular and extracellular bacteria by day 8. Parameters that may have contributed to the quick activation of TB-specific CD4+ T cells and infected macrophages in LTBI-like simulations include a lower CD4+ T cell deactivation probability and lower cytokine thresholds for macrophage activation.
Our computational model, calibrated to LTBI and naïve experimental data, shows that the quick activation of TB-specific CD4+ T cells in LTBI-like simulations results in early and sustained activation of infected macrophages, leading to better bacterial control than in naïve-like simulations. Despite having fewer infected macrophages overall, the greater percentage of activated infected macrophages means that LTBI-like simulations control Mtb infection better than naïve-like simulations.
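The calibration filter described above (LHS sampling, then keeping parameter sets whose outputs fall within a tolerance band around the data) can be sketched as follows. The two-parameter toy "model" of intracellular fold change and the target value are hypothetical, standing in for the full agent-based model and the experimental data:

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, bounds):
    """Stratified LHS: one sample per equal-probability bin per dimension."""
    d = len(bounds)
    u = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1)
         + rng.random((d, n))) / n
    lo = np.array([b[0] for b in bounds])[:, None]
    hi = np.array([b[1] for b in bounds])[:, None]
    return (lo + u * (hi - lo)).T   # shape (n, d)

# Toy surrogate: intracellular fold change after t days, growth g minus killing k
def fold_change(g, k, t=7.0):
    return np.exp((g - k) * t)

target = 4.0   # hypothetical experimental fold change
samples = latin_hypercube(500, [(0.1, 0.6), (0.0, 0.4)])   # (g, k) bounds
passing = [(g, k) for g, k in samples
           if 0.5 * target <= fold_change(g, k) <= 1.5 * target]
```

Iterating this filter while shrinking the sampled region toward the passing sets gives the kind of refinement loop the ADS method formalizes.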

MEPI Subgroup Contributed Talks

  • Christopher Mitchell Tarleton State University
    "Using Bayesian Methods to Infer Parameters for ODE Epidemic Systems"
  • Classical models of disease outbreaks rely on systems of nonlinear ordinary differential equations. ODE models have been widely successful and are credited with saving millions of lives worldwide. However, ODE models involve parameters that are often poorly understood and difficult to infer from limited and noisy data. This is especially problematic for rare, novel, or neglected diseases with unreliable reporting mechanisms. While some parameters can be deduced from biological or social facts, many must be inferred from data. Traditional least-squares point estimates are fragile when applied to the noisy data common in disease modeling. Bayesian inference replaces fragile point estimates with posterior distributions that are more robust against data quality issues. Whereas point-estimate models produce a single outbreak forecast, Bayesian models generate an ensemble of forecasts by repeatedly sampling model parameters from their posterior distributions and numerically solving the resulting ODE. These multiple forecasts can be pooled and statistically analyzed at each time step (min, max, mean, etc.) to give insight into potential outbreak scenarios (best-case, worst-case, most likely, respectively). This project aims to create well-functioning ODE models using amortized Bayesian inference, implemented in the BayesFlow Python library. This tool was created in 2020 to help fight COVID-19 and other common diseases. This project will enhance the BayesFlow library to compensate for data quality issues and provide the improved models epidemiologists need to effectively fight neglected tropical diseases (NTDs).
  • Rodolfo Blanco-Rodriguez University of Idaho
    "Modeling the impact of contact network and viral evolution on lockdown and vaccination strategies"
  • The transmission of infectious diseases is heavily influenced by the network topology of the population through which the disease spreads. However, the effectiveness of control strategies in diverse and complex social structures is poorly understood. A key lesson from COVID-19 is that public health measures varied greatly from country to country: some countries implemented strongly restrictive lockdowns while others focused on mass vaccination campaigns, raising the question of which strategies are most effective for a given social structure. Additionally, during a pandemic, viral evolution forces us to reconsider our control strategies. To gain insight into this, we computationally analyzed the spread of two variants of a virus through three well-known network models: small-world networks (Watts-Strogatz), random networks (Erdős-Rényi), and scale-free networks (Barabási-Albert). We contrasted two lockdown policies in the different networks, one continuous and the other by intervals, and compared different vaccination strategies by varying the number of nodes vaccinated per day. Our results showed that scale-free networks reached the highest outbreak peaks, and the peak values of the infected population were independent of the infectivity of the virus variant. In addition, in the scale-free network, lockdown and vaccination strategies decreased the number of infected regardless of the strategy considered. Furthermore, for the small-world and random networks, we found that confinement in 15-day intervals can reduce the peak number of infected nodes. Interestingly, we also found that a 20-day vaccination campaign yields fewer infected cases than a 10-day campaign. These findings underscore the importance of considering social network topology to predict the course of epidemics and design more effective control measures.
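The basic simulation setup can be illustrated with hand-rolled Erdős-Rényi and Barabási-Albert graphs and a discrete-time SIR process on them. Network sizes, probabilities, and rates below are illustrative choices, not the values used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 300

def barabasi_albert(n, m=3):
    """Scale-free graph via preferential attachment (repeated-nodes trick)."""
    adj = [set() for _ in range(n)]
    targets, repeated = list(range(m)), []
    for v in range(m, n):
        for t in set(targets):
            adj[v].add(t); adj[t].add(v)
            repeated += [v, t]
        targets = [repeated[i] for i in rng.integers(0, len(repeated), m)]
    return adj

def erdos_renyi(n, p):
    """Random graph: each pair linked independently with probability p."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j); adj[j].add(i)
    return adj

def sir_on_network(adj, p_inf=0.2, p_rec=0.1, seeds=5, steps=100):
    n = len(adj)
    state = np.zeros(n, dtype=int)              # 0=S, 1=I, 2=R
    state[rng.choice(n, seeds, replace=False)] = 1
    peak = 0
    for _ in range(steps):
        infected = np.flatnonzero(state == 1)
        if infected.size == 0:
            break
        peak = max(peak, infected.size)
        for i in infected:
            for j in adj[i]:
                if state[j] == 0 and rng.random() < p_inf:
                    state[j] = 1
            if rng.random() < p_rec:
                state[i] = 2
    return peak, n - int(np.count_nonzero(state == 0))  # peak, ever infected

ba = barabasi_albert(N)
er = erdos_renyi(N, 6 / (N - 1))               # matched mean degree (~6)
peak_ba, total_ba = sir_on_network(ba)
peak_er, total_er = sir_on_network(er)
```

Lockdowns and vaccination can then be layered on by temporarily removing edges or moving susceptible nodes directly to the removed state at a fixed daily rate.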
  • Soyoung Kim National Institute for Mathematical Sciences
    "Optimal antiviral stockpile for influenza pandemic"
  • A stockpile of antiviral drugs is important for mitigating a novel influenza pandemic. Recently, intervention strategies against such a pandemic have improved significantly, affecting the required size and composition of the stockpile. Our goal is to estimate the optimal ratio of conventional to newer antiviral drugs. Epidemic parameters are estimated from daily case data on H1N1pdm09 in the Republic of Korea, and a deterministic ordinary differential equation model and stochastic simulations are used to predict the number of patients in a future pandemic. An antiviral stockpile containing neuraminidase inhibitors and a newer drug, a cap-dependent endonuclease inhibitor, is considered, seeking the optimal ratio of the two drugs under different epidemiological and economic assumptions.
  • Zitao He University of Waterloo
    "Predicting measles outbreaks from vaccine sentiments on social media"
  • Vaccination decisions for childhood diseases such as measles have been studied with coupled models of disease dynamics and individual behavior. Such models apply evolutionary game theory to describe people's vaccination strategies and how they are affected by social norms. Recent studies have incorporated bifurcation theory to identify early warning signals (EWS) of critical transitions, such as increasing variance and lag-1 autocorrelation. Deep learning models have also been used to classify different types of bifurcation from time-series data near a tipping point. Meanwhile, the rise of social media has made it easier for pro- and anti-vaccine sentiments to spread. Researchers have used high-frequency social media data to study the population dynamics of infectious diseases and have found signatures of critical slowing down in geocoded Tweets about the MMR vaccine and in measles-related Google searches before the 2014-15 Disneyland, California measles outbreak. This project aims to use deep learning models to predict potential disease outbreaks from social media data and to provide novel methods for outbreak prediction near thresholds. We improved existing coupled behavior-disease models by considering different social groups and investigating the effect of homophily on disease dynamics near tipping points. We show that a deep learning model, such as a CNN-LSTM framework, can capture EWS directly from real-world social media data and predict a potential future disease outbreak.
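The two EWS statistics mentioned (rising variance and rising lag-1 autocorrelation) are simple to compute over windows of a time series. The series below is a synthetic AR(1) process whose coefficient drifts toward 1 to mimic an approach to a tipping point; it is a stand-in for real social media sentiment data:

```python
import numpy as np

rng = np.random.default_rng(3)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a 1-D series."""
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()

# Synthetic series approaching a tipping point: AR(1) coefficient drifts up,
# producing the critical slowing down that EWS are designed to detect.
T = 2000
phi = np.linspace(0.1, 0.95, T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi[t] * x[t - 1] + rng.normal()

w = 300
early, late = x[:w], x[-w:]   # windows far from / near the transition
```

Both statistics rise in the late window, which is the signature a CNN-LSTM classifier would be trained to pick out directly from raw series.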

MFBM Subgroup Contributed Talks

  • Connor McShaffrey Indiana University Bloomington
    "Decomposing Viability Space"
  • When trying to model how an organism will fare in a particular environment, we need to be able to capture the potential dynamics of the system as well as the survival outcomes they lead to. A growing body of work has been approaching this problem by building dynamical systems models with imposed viability limits, which separate living and terminal states. Since the viability limits are not implicit in the equations that govern the dynamics, there is no guaranteed correspondence between the phase portrait and which initial conditions will remain viable. This means that the topology demands a richer set of analyses, which we refer to as characterizing viability space. Here we will set the groundwork for this methodology, including criteria for novel bifurcations, using a simple mass-action protocell model.
  • Holly Huber University of Southern California
    "Systematic Bayesian Posterior Analysis Facilitates Hypothesis Formation and Guides Investigation of Pancreatic Beta Cell Signaling"
  • Intracellular protein dynamics can be simulated with a system of ordinary differential equations, where parameters represent reaction rate constants and initial protein concentrations. Such mechanistic models formalize the biological hypotheses they’re based on. Bayesian inference fits the posterior distribution of model parameters to data; evidence of alternative hypotheses can manifest in the posterior and serve as a starting point for hypothesis refinement. However, existing approaches to search for such evidence are largely ungeneralizable and unsystematic, limiting their scalability. Here, we show that ranking marginal posteriors by information gained from experimental data provides a systematic and generalizable way to search for alternative hypothesis evidence. Rather than searching for evidence at random, one can search per the ranking. We subsequently use this approach to refine our understanding of pancreatic beta cell signaling dynamics, which regulate beta cell proliferation.
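One concrete way to rank marginal posteriors by information gained is the KL divergence from prior to posterior, which has a closed form when both marginals are Gaussian. The abstract does not specify the exact information measure, and the parameter names and numbers below are hypothetical:

```python
import numpy as np

def gaussian_kl(mu_post, sd_post, mu_prior, sd_prior):
    """KL(posterior || prior) for 1-D Gaussian marginals, in nats."""
    return (np.log(sd_prior / sd_post)
            + (sd_post**2 + (mu_post - mu_prior)**2) / (2 * sd_prior**2) - 0.5)

# Hypothetical marginal posteriors (mean, sd) with a N(0, 1) prior on each
# standardized rate constant:
params = {"k_on": (0.1, 0.9), "k_off": (1.5, 0.3), "k_cat": (0.0, 1.0)}
gains = {name: gaussian_kl(mu, sd, 0.0, 1.0) for name, (mu, sd) in params.items()}
ranking = sorted(gains, key=gains.get, reverse=True)
```

A parameter whose posterior equals its prior (like `k_cat` here) gains zero information; a sharply shifted, narrowed marginal (like `k_off`) rises to the top of the ranking and is the natural first place to look for evidence of an alternative hypothesis.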
  • Nathaniel Linden University of California San Diego
    "Multimodel modeling for blood glucose and insulin measurements in diabetes"
  • Uncertainty in a model formulation due to differing assumptions or unknown system mechanisms is often overlooked when applying mathematical models in biology and medicine. In diabetes diagnostics, mathematical models have long been used to make inferences about a patient’s metabolic health using available clinical data such as blood glucose measurements over time. These approaches often rely on a phenomenological model to approximate the physiological system, ignoring possible uncertainty in the model structure. However, there are usually several possible phenomenological models, each of which uses different formulations to represent the same biological processes. Given a family of phenomenological models, one typically chooses a single model based on a priori assumptions. In this work, we instead focus on leveraging the whole family of models to develop robust predictors in the face of uncertainty in the models describing the biological process. We explore several approaches to average the predictions from all available models, including Bayesian model averaging and probability distribution fusion. These methods allow us to construct robust predictors using the entire model family and reduce biases associated with choosing a single best model. As a test case, we chose the prediction of beta cell insulin regulation and associated diagnostic metrics from blood glucose measurements. Our results show that working with a family of models instead of a single model improves the certainty of modeling-based predictions, reduces biases associated with selecting one model, and explicitly accounts for model uncertainty.
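Bayesian model averaging in this setting reduces to weighting each model's prediction by its posterior model probability, proportional to its marginal likelihood under equal model priors. The predictions and log-evidences below are illustrative numbers, not results from the glucose-insulin models:

```python
import numpy as np

# Hypothetical predictions of one diagnostic metric from three candidate models
preds = np.array([1.8, 2.1, 2.6])
# Stand-in log marginal likelihoods (e.g., from Laplace approximations)
log_evidence = np.array([-10.2, -9.8, -12.5])

w = np.exp(log_evidence - log_evidence.max())   # subtract max for stability
w /= w.sum()                                    # posterior model probabilities
bma_pred = float(w @ preds)                     # evidence-weighted prediction
bma_var = float(w @ (preds - bma_pred) ** 2)    # between-model variance term
```

The between-model variance term is what a single-best-model analysis silently drops; reporting it makes the structural uncertainty explicit.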
  • Robert DeJaco National Institute of Standards and Technology
    "Reducing Bias and Quantifying Uncertainty in Fluorescence Produced By PCR"
  • We present a new approach for relating nucleic-acid content to fluorescence in a real-time Polymerase Chain Reaction (PCR) assay. By coupling a two-type stochastic branching process for PCR with a fluorescence analog of Beer’s Law, the approach reduces bias and quantifies uncertainty in fluorescence. As the two-type branching process distinguishes between complementary strands of DNA, it allows for a stoichiometric description of reactions between fluorescent probes and DNA and can capture the initial conditions encountered in assays targeting RNA. Analysis of the expected copy-number identifies additional dynamics occurring at short times (or, equivalently, low cycle numbers), while investigation of the variance reveals the contributions from liquid volume transfer, imperfect amplification, and strand-specific amplification (i.e., if one strand is synthesized more efficiently than its complement). Linking the branching process to fluorescence by the Beer’s Law analog allows for a more objective and a priori description of background fluorescence. It also enables uncertainty quantification (UQ) in fluorescence which, in turn, leads to analytical relationships between amplification efficiency (probability) and limit of detection. This work sets the stage for UQ-PCR, where both the input copy-number and its uncertainty are quantified from fluorescence measurements.
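In expectation, a two-type (sense/antisense strand) branching process evolves by a 2x2 matrix each cycle. The sketch below assumes a strand-symmetric amplification probability p, a simplification of the strand-specific case discussed above, and shows the strand imbalance at short times when an assay starts from a single strand (as for RNA targets):

```python
import numpy as np

def expected_strands(s0, a0, p, n):
    """Expected sense/antisense copy numbers after n PCR cycles; each strand
    templates one complement with probability p (strand-symmetric case)."""
    M = np.array([[1.0, p], [p, 1.0]])      # per-cycle expectation matrix
    return np.linalg.matrix_power(M, n) @ np.array([s0, a0], float)

# Single sense strand at cycle 0, amplification probability 0.9, 10 cycles
s, a = expected_strands(1.0, 0.0, 0.9, 10)
```

Projecting the initial condition onto the matrix's eigenvectors shows the expected total growing like (1+p)^n while the strand imbalance decays like (1-p)^n, which is the short-time (low-cycle) transient the analysis identifies.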

ONCO Subgroup Contributed Talks

  • Daniel Glazar Moffitt Cancer Center & Research Institute
    "A simulation-based sample size analysis of a joint model of longitudinal and survival data for patients with glioma"
  • Introduction: Patients with recurrent high-grade glioma (rHGG) have poor prognosis, with median progression-free survival (PFS) <6 months and median overall survival (OS) <12 months [1]. The Response Assessment in Neuro-Oncology (RANO) criteria define radiographic progression as a 25% increase in the sum of products of longest diameters of individual lesions (SPD) delineated from MRIs, relative to the minimum observed SPD [2]. However, there is wide heterogeneity in treatment response among these patients, with some experiencing disease progression within weeks and others surviving for years. Predicting which patients will progress early or late on different therapeutic regimens may aid the clinician in deciding which regimen to use. Thus, there is an urgent clinical need for a predictive biomarker of patient-specific PFS. To predict PFS and OS, baseline disease characteristics, such as molecular markers, have been investigated [3]. However, repeated measures of such biomarkers are infeasible due to the invasive procedures involved. To remedy this limitation, predictive mathematical models of patient-specific tumor dynamics in glioma based on non-invasive imaging data have been developed [4–8]. However, most of these do not evaluate PFS by RANO, and those that do only predict PFS as a binary outcome over discrete time horizons [6]. Considering a joint model of longitudinal tumor volume and PFS from a Bayesian perspective has the advantage of predicting PFS as a continuous outcome over continuous time horizons for individual patients while taking into account population-level response dynamics [9–11]. However, before applying such a model to clinical data, we note that due to the low incidence and accrual rates of early-phase clinical trials for patients with rHGG [12], there is a clinical need to determine the minimum sample size necessary to predict patient-specific time to progression (TTP) using longitudinal tumor volumes.
As such, we perform a simulation-based sample size analysis of a joint model of longitudinal tumor volume and PFS in an in silico clinical trial for patients with rHGG. Objectives: 1. To develop a joint model of longitudinal tumor volume and PFS for patients with rHGG. This joint model will use tumor volume dynamics as an accurate and precise predictive biomarker of PFS. 2. To perform a sample size analysis in an in silico clinical trial for patients with rHGG to determine the minimum number of patients needed to make accurate and precise individual Bayesian dynamic predictions of PFS based on longitudinal tumor volumes, in order to inform clinical trial design. Methods: 1. We developed a joint model of longitudinal tumor volume and TTP for patients with rHGG. Tumor volumes were modeled using a tumor growth inhibition (TGI) model with mixed effects. In it, tumors grow exponentially, reflecting the fact that patients inevitably progress before any inflection point in their tumor growth, while the tumor response rate declines exponentially due to the inevitable development of therapeutic resistance. TTP was then modeled discretely and defined as the time when tumor volume reached 40% above nadir, extrapolating from the bidimensional 25% threshold in RANO [2]. Due to technical limitations, the hazard function was approximated using a scaled skew-normal distribution curve. Population parameters of the joint model were estimated using quantile information reported in the literature [13] by maximizing the likelihood of joint order statistics [14]. 2. An in silico clinical trial was conducted to study the effect of sample size on the predictive performance of the joint model in dynamically predicting patient-specific PFS. We selected sample sizes of 40, 60, 80, ..., 200 as most clinically feasible given the low incidence of rHGG.
In silico training and test sets were generated by sampling from model parameter distributions and simulating longitudinal tumor volume and TTP every 6 weeks from treatment initiation, with baseline 2 weeks prior to treatment initiation to conform with rHGG clinical trial protocols. Population parameters were estimated using the stochastic approximation expectation-maximization (SAEM) algorithm [15] and then used to parameterize a prior distribution for dynamically predicting patient-specific TTP in the test patients across landmark times and time horizons [9–11]. Predictive performance was evaluated using the time-dependent Brier score (BS) and area under the receiver operating characteristic curve (AUC) [9,16]. Simulations were performed using the Monolix suite software [17]. Results: 1. In estimating population parameters, simulated median PFS was between 24 and 30 weeks, in agreement with clinical observations [6]. The majority of simulated patients had between 3 and 7 observations, also in agreement with clinical observations [6]. 2. For the largest sample size considered (N=200), we evaluated the model’s predictive performance at set time horizons of 6, 12, 18, and 24 weeks across all landmark times. The median AUC ranged from 0.55 to 0.63, with median BS ranging from 0.19 to 0.25. We then evaluated the model’s performance in predicting progression around the median PFS (between 24 and 30 weeks) across landmark times 0, 6, 12, and 18 weeks after treatment initiation. The median AUC ranged from 0.50 to 0.65, with median BS ranging from 0.21 to 0.25. 3. Across all sample sizes tested, there was a statistically significant, albeit small, correlation between sample size and AUC (Pearson’s r=0.20, p<1e-15) across all landmark times and time horizons. However, no statistical significance was reached for the correlation between sample size and BS (Pearson’s r=-0.0017, p=0.95).
When controlling for landmark time and time horizon, the median Pearson’s correlations between sample size and AUC or BS were r=0.28 and 0.07, respectively. Conclusions: We developed a joint model of longitudinal tumor volume and PFS for patients with rHGG and parameterized it using quantile information reported in the literature. The model was able to capture the dynamics and survival profiles of the patient population. The predictive performance of the model was robust across the sample sizes tested. However, the overall predictive performance was only marginally better than chance as measured by AUC and BS. In future studies, we will explore a larger range of sample sizes to investigate how many patients are necessary to meet various performance benchmarks as measured by AUC or BS. To improve predictive performance, we will include covariates, such as sex, molecular markers, and treatment arm, which are known to affect rHGG volume dynamics and survival endpoints. We will also consider competing risks in the form of a smooth, continuous hazard function, as many patients with rHGG experience progression due to clinical deterioration or new lesions even while exhibiting radiographic response. References: [1] Birzu C, French P, Caccese M, et al. Recurrent Glioblastoma: From Molecular Landscape to New Treatment Perspectives. Cancers. 2021; 13(1):47. [2] Wen PY, Chang SM, Van den Bent MJ, et al. Response Assessment in Neuro-Oncology Clinical Trials. J Clin Oncol. 2017; 35(21):2439-2449. doi:10.1200/JCO.2017.72.7511. [3] Phillips HS, Kharbanda S, Chen R, et al. Molecular subclasses of high-grade glioma predict prognosis, delineate a pattern of disease progression, and resemble stages in neurogenesis. Cancer Cell. 2006; 9(3):157-173. [4] Swanson KR, Bridge C, Murray JD, et al. Virtual and real brain tumors: using mathematical modeling to quantify glioma growth and invasion. Journal of the Neurological Sciences.
2003; 216(1):1-10. [5] Brüningk SC, Peacock J, Whelan CJ, et al. Intermittent radiotherapy as alternative treatment for recurrent high grade glioma: a modeling study based on longitudinal tumor measurements. Sci Rep. 2021; 11:20219. [6] Glazar DJ, Grass GD, Arrington JA, et al. Tumor Volume Dynamics as an Early Biomarker for Patient-Specific Evolution of Resistance and Progression in Recurrent High-Grade Glioma. J Clin Med. 2020; 9(7):2019. doi:10.3390/jcm9072019. [7] Hormuth DA, Al Feghali KA, Elliott AM, et al. Image-based personalization of computational models for predicting response of high-grade glioma to chemoradiation. Sci Rep. 2021; 11:8520. [8] Dean JA, Tanguturi SK, Cagney D, et al. Phase I study of a novel glioblastoma radiation therapy schedule exploiting cell-state plasticity. Neuro-Oncology. 2022. [9] Rizopoulos D. Dynamic predictions and prospective accuracy in joint models for longitudinal and time-to-event data. Biometrics. 2011; 67(3):819–29. [10] Mbogning C, Bleakley K, Lavielle M. Joint modelling of longitudinal and repeated time-to-event data using nonlinear mixed-effects models and the stochastic approximation expectation maximization algorithm. J Stat Comput Simul. 2015; 85(8):1512–28. [11] Desmée S, Mentré F, Veyrat-Follet C, et al. Nonlinear joint models for individual dynamic prediction of risk of death using Hamiltonian Monte Carlo: application to metastatic prostate cancer. BMC Med Res Methodol. 2017; 17:105. [12] Central Brain Tumour Registry of the United States. CBTRUS Statistical Report: Primary brain and central nervous system tumours diagnosed in the United States in 2004–2006. 2010. [13] Stensjøen AL, Solheim O, Kvistad KA, et al. Growth dynamics of untreated glioblastomas in vivo. Neuro-Oncology. 2015; 17(10):1402–11. [14] Reiss RD. Approximate Distributions of Order Statistics With Applications to Nonparametric Statistics. Springer New York. 2012. [15] Delyon B, Lavielle M, Moulines E.
Convergence of a stochastic approximation version of the EM algorithm. Ann Stat. 1999; 27:94–128. [16] Schoop R, Graf E, Schumacher M. Quantifying the predictive performance of prognostic models for censored survival data with time-dependent covariates. Biometrics. 2008; 64(2):603–10. [17] Monolix Suite 2023R1, Lixoft SAS, a Simulations Plus company.
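A common structure for such joint models of longitudinal tumor volume and survival (an illustrative sketch; the exact parameterization used in this work may differ) couples a nonlinear mixed-effects model for each patient's volume to a proportional-hazards model in which the current volume drives the risk of progression:

```latex
\frac{dV_i}{dt} = g\big(V_i(t);\, \theta_i\big), \qquad
h_i(t) = h_0(t)\, \exp\!\big\{\alpha\, V_i(t)\big\},
```

where $V_i(t)$ is the tumor volume of patient $i$, $\theta_i$ are individual parameters drawn from a population distribution, $h_0(t)$ is a baseline hazard for PFS, and the link coefficient $\alpha$ quantifies how volume modulates progression risk.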
  • Guillermo Lorenzo University of Pavia, Italy
    "Personalized MRI-informed predictions of prostate cancer growth during active surveillance"
Active surveillance (AS) is an established clinical management option for low- to intermediate-risk prostate cancer (PCa), which represents almost 70% of newly-diagnosed cases. During AS, patients have their tumor monitored via multiparametric magnetic resonance imaging (mpMRI), serum prostate-specific antigen (PSA), and biopsies. If any of these data reveal tumor progression towards an increased clinical risk, the patient is prescribed a curative treatment. However, clinical decision-making in AS is usually guided by observational, population-based protocols that do not account for the unique, heterogeneous nature of each patient’s tumor. This limitation complicates the personalization of monitoring plans and the early detection of tumor progression, which constitute two critical, unresolved problems in AS. To address these issues, we propose to forecast PCa growth using personalized simulations of an mpMRI-informed mechanistic model solved over the 3D anatomy of the patient's prostate. We describe PCa growth via the dynamics of tumor cell density with a diffusion operator, representing tumor cell mobility, and a logistic reaction term, which accounts for net tumor cell proliferation. Model calibration and validation rely on assessing the mismatch between model predictions of the tumor cell density map and corresponding mpMRI-based estimates. Here, we present a preliminary study of our predictive technology in a small cohort of newly-diagnosed PCa patients (n=11) who enrolled in AS and had three mpMRI scans over a period of 2.6 to 5.6 years. Our results show a median concordance correlation coefficient (CCC) and Dice score (DSC) of 0.59 and 0.79, respectively, for the spatial fit of tumor cell density during model calibration using two mpMRI datasets. Model validation at the date of the third mpMRI scan then resulted in a median CCC and DSC of 0.58 and 0.76, respectively. 
Additionally, the global CCCs for tumor volume and total tumor cell count were 0.87 and 0.95, respectively, during model calibration, and 0.95 and 0.88 at the forecasting horizon. Thus, while further improvement and testing in larger cohorts are required, we believe that these results are promising for the potential use of our methods to personalize AS protocols for PCa and predict tumor progression.
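A minimal form consistent with the model described above (diffusion for tumor cell mobility plus a logistic net-proliferation term; the authors' exact formulation may differ in detail) is:

```latex
\frac{\partial N}{\partial t} \;=\; \nabla \cdot \big( D\, \nabla N \big) \;+\; \rho\, N \left( 1 - \frac{N}{\theta} \right),
```

where $N(\mathbf{x}, t)$ is the tumor cell density over the 3D prostate anatomy, $D$ is the tumor cell mobility, $\rho$ is the net proliferation rate, and $\theta$ is the carrying capacity; calibration amounts to fitting $D$ and $\rho$ against the mpMRI-based density estimates.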
  • Jason T. George Texas A&M University
    "Physical Modeling of the T Cell-Peptide Interaction: Toward Predictive T Cell Immunotherapy"
Despite substantial research activity, reliable and computationally efficient prediction of therapeutically relevant T cell receptor (TCR)-antigen pairs remains elusive, owing to the limited availability of training data and the formidable dimensionality of TCR and antigen sequence space. Successful prediction, if possible, would open new fields of research, from a systematic understanding of the adaptive immune response to viral adaptation to the rapid and optimal identification of tumor antigen-specific T cells. This talk will detail our recent probabilistic modeling efforts that characterize a post-selection T cell repertoire’s ability to identify foreign antigenic signatures with a high degree of specificity. This modeling framework is then applied to construct a data-driven inferential statistical model trained on TCR and peptide primary sequences along with crystal structures of known strong-binding TCR-peptide pairs. When restricted to a common major histocompatibility complex allele variant, we demonstrate that this approach successfully identifies therapeutically relevant TCR-peptide pairs with a high degree of sensitivity and specificity. Lastly, the trained model is applied to TCRs derived from the peripheral blood of acute myeloid leukemia (AML) patients, with the ultimate goal of rapid in silico prediction of tumor antigen-specific T cells.
  • Jeffrey West Moffitt Cancer Center
    "Markov models predict minimal residual disease in Adult B-Lymphoblastic Leukemia"
Cancer stem cells (CSCs) are hypothesized to promote tumor progression through innate chemoresistance and self-renewal. While ostensible CSCs were first identified via CD34+/CD38- immunophenotyping in acute myeloid leukemia, the temporal variation of CD34 and CD38 expression in B-lymphoblastic leukemia (B-ALL) complicates the search for CSCs in this setting. We present a Markovian mathematical model that combines the concept of CSCs with the pattern of B-ALL subpopulations at diagnosis to demonstrate dynamic phenotypic alteration and predict minimal residual disease (MRD). Qualitative flow cytometry performed on diagnostic bone marrow was used to determine the proportions of CD34+/CD38+, CD34+/CD38-, CD34-/CD38+, and CD34-/CD38- cells in 44 patients with B-ALL. An iterative numerical search procedure was used to derive patient-specific Markov matrices describing the stochastic cell state transitions. These patients were then divided into MRD positive (n=9) and MRD negative (n=35) cohorts to compare transition matrix features. Among all patients with adequate (>3 years) follow-up, all MRD positive patients experienced relapse within 3 years, whereas only 31.4% of MRD negative patients (11/35) experienced relapsed B-ALL. A higher transition probability from CD34+CD38+ to CD34+CD38- was associated with positive MRD. In contrast, higher transition probabilities toward a CD34-CD38+ phenotype strongly favored negative MRD status, irrespective of the starting phenotype (p=0.00286, 0.00112, 0.000494, and 0.000915, respectively). Combining these parameters into a simple predictive model achieves a sensitivity of 44.4% and specificity of 97.1% for MRD in this setting. Thus, Markov modeling proves useful in assessing cell state dynamics in patients with B-ALL, especially for predicting MRD and a higher rate of disease relapse.
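The core of the approach above is a patient-specific transition matrix over the four CD34/CD38 immunophenotypes. The sketch below illustrates the mechanics with made-up matrix entries (not values fitted to patient data): proportions stay a probability vector and relax toward the chain's stationary distribution.

```python
import numpy as np

# Four immunophenotype states, as in the abstract.
states = ["CD34+CD38+", "CD34+CD38-", "CD34-CD38+", "CD34-CD38-"]

# Hypothetical per-step transition probabilities; each row sums to 1.
P = np.array([
    [0.70, 0.10, 0.15, 0.05],
    [0.20, 0.60, 0.10, 0.10],
    [0.05, 0.05, 0.80, 0.10],
    [0.10, 0.05, 0.15, 0.70],
])

def evolve(proportions, P, n_steps):
    """Propagate subpopulation proportions forward n_steps."""
    v = np.asarray(proportions, dtype=float)
    for _ in range(n_steps):
        v = v @ P
    return v

# Starting from a diagnosis-like composition; after many steps the
# proportions approximate the stationary distribution of the chain.
v0 = [0.40, 0.10, 0.35, 0.15]
v_final = evolve(v0, P, 50)
```

In the talk's setting, the matrix entries themselves are the inferred quantities: an iterative numerical search selects a patient-specific `P` consistent with the observed diagnostic proportions, and individual entries (e.g., the CD34+CD38+ to CD34+CD38- probability) serve as MRD predictors.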

ONCO Subgroup Contributed Talks

  • Chloe Colson University of Oxford
    "Investigating the impact of growth arrest mechanisms on tumour responses to combinations of radiotherapy and hyperthermia"
Developing targeted and effective treatment strategies for patients is at the forefront of cancer research today. Recently, there has been increased focus on combination therapies, which aim to exploit different anti-cancer effects while minimising detrimental side-effects. Hyperthermia (HT) is a promising candidate for enhancing the efficacy of radiotherapy (RT). However, incomplete understanding of their interactions is hindering their combined use in the clinic. In this work, we investigate tumour responses to fractionated RT and HT individually and in combination, focussing on how two different mechanisms for growth control may impact tumour sensitivity to these treatments. To do so, we extend an existing ordinary differential equation model of tumour growth, which distinguishes between growth arrest due to nutrient insufficiency and growth arrest due to competition for space, and which exhibits three growth regimes: nutrient limited (NL), space limited (SL) and bistable (BS), where both mechanisms for growth arrest coexist. We adopt a time-dependent description of RT that accounts for two different types of tumour cell damage (sub-lethal vs. lethal), damage repair and cell death following insufficient repair. We also model the time-dependent effects of HT, distinguishing between mild HT, which re-oxygenates the tumour via vasodilation and a reduction in the oxygen consumption rates of tumour cells, and high HT, which causes tumour and endothelial cell damage and an associated angiogenic response. We construct three virtual populations of tumours in the NL, SL and BS regimes and, for each population, we study tumour responses to conventional fractionation schedules for RT, HT, and combined RT and HT. We determine the average response in each regime and assess the biological processes that may explain positive and negative treatment outcomes. We also investigate the impact of the fraction dose and dosing frequency on tumour responses. 
This allows us to evaluate which treatment (RT, HT or RT+HT) and dosing regimen maximise the reduction in tumour burden. We find that mild HT cannot significantly enhance the tumour responses to RT. In particular, while the addition of mild HT leads to a modest improvement in the RT response of tumours in the SL regime, it typically worsens the RT response of tumours in NL and BS regimes. By contrast, high HT can act as a potent radiosensitiser for all tumours in each virtual cohort, provided that the HT-induced angiogenic response is sufficiently weak. When there is a strong HT-induced angiogenic response, high HT enhances the RT response of a few tumours in the NL and SL regimes, while it worsens the RT response of most tumours in the BS regime. Finally, in cases where combination treatments are advisable, higher doses of HT applied with lower doses of RT maximise treatment efficacy.
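For context, the classical linear-quadratic (LQ) model, which the time-dependent sub-lethal/lethal damage-repair description above generalizes, gives the surviving fraction of cells after a radiation dose $D$ as:

```latex
S(D) = \exp\!\left( -\alpha D - \beta D^2 \right),
```

with $\alpha$ capturing lethal (single-hit) damage and $\beta$ the interaction of pairs of sub-lethal lesions. A time-dependent treatment allows the repair of sub-lethal damage between fractions, and hence the dosing frequency, to enter the response explicitly.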
  • Louis Kunz Massachusetts General Hospital and Harvard Medical School
    "AMBER (Agent-based Modeling of Biophysical Evolution after Radiotherapy): a computational tumor model as the first step to simulate radiation response."
Intro: Cancer evolution and responses to radiotherapy treatments depend on a complex interplay among physical, chemical, and biological processes. To capture this interplay and optimize treatments, novel strategies are required. This work aims to develop a multiscale model capable of predicting tumor fates using a bottom-up approach. While existing tumor models address radiation delivery using the linear-quadratic response to radiation, mechanistic models connecting cell- with tumor-scale radiation effects remain scarce. Our model aims to incorporate radiosensitivity-modifying factors such as angiogenesis, different cell types, hypoxic and necrotic regions, and oxygen concentration. Additionally, our model is intended to include the radiation dose in different areas of the tumor, calculated through the TOPAS Monte Carlo toolkit. Methods: We developed a hybrid model including agent-based components and continuum-like aspects in a voxelized geometry. Our model uses tumor mechanical and vascular properties and simulates tumor growth and necrosis evolution following a set of mechanistic rules for (i) cell proliferation and migration, (ii) oxygenation, (iii) angiogenesis, and (iv) cell fates, as follows: (i) within each voxel, cells reproduce with division times following a Gamma distribution and diffuse to neighboring voxels, depending on the degree of crowding in each voxel. (ii) A pre-generated healthy vasculature provides oxygen to each voxel at the beginning of the simulations. To determine how much oxygen each cell takes up, we performed sub-voxel-scale simulations, depending on crowding and vessel density, to calibrate a Beta distribution representative of the cell-wise oxygen distribution within a voxel. (iii) If a cell is not supplied with enough oxygen, it releases VEGF, which triggers angiogenesis. Tumor-induced angiogenesis replaces healthy vasculature through a directed random walk that accounts for crowding and the VEGF gradient. 
The step size in the random walk also depends on the local pressure and VEGF gradient. When the step size reaches a lower-bound threshold, the vessel stops growing. The radius of each vessel is updated using Murray’s law with a linear correction for the pressure exerted by the cells. (iv) Finally, at each time step, a simple vitality function determines whether cancer cells are cycling, quiescent, or undergoing apoptosis/necrosis. Results: Our model is capable of mimicking the genesis and early development of a generic tumor, including a necrotic core. The order of magnitude of the highest microvascular density (h-MVD) in our simulations matches the range of 20-200 mm^-2 found in in vivo measurements by Forster et al. [1]. Our simulated tumors show chronic hypoxia due to the intra-voxel oxygen distribution, as well as acute hypoxia resulting from increased pressure due to tumor growth, which damages the vasculature. The density of cells in the tumor center and the tumor radius increase exponentially until reaching a plateau caused by insufficient oxygen supply. At this point, simulating angiogenesis is necessary to allow further tumor growth. If angiogenesis is turned off, the cells stop dividing and undergo necrosis due to the lack of oxygen supply, leading to shrinkage of the tumor. With angiogenesis, the oxygen concentration reaches a stable value, allowing the number of cancer cells and the radius of the tumor to keep increasing linearly. The pressure in the center of the tumor acts as a barrier to angiogenesis, creating a necrotic core. Conclusion: We present AMBER, a flexible and modular model simulating the evolution of a tumor. 
To match the observations described above with in vivo data corresponding to the anatomical region and cancer type of interest, one should tune the model parameters linked to angiogenesis and cell-specific characteristics, such as the VEGF production rate, cell doubling time, necrosis threshold, or necrosis probability. Our model is designed to account for radiation dose through the future addition of radiation bioeffect models. Its design will allow for a wide range of potential applications in radiotherapy, including external beam therapy as well as radiopharmaceutical therapies and brachytherapy. The model's adaptability will allow for seamless integration with existing models for radiation delivery, such as TOPAS and TOPAS-nBio, and with models for DNA damage and DNA repair. [1] Forster, Jake, Wendy Harriss-Phillips, Michael Douglass, and Eva Bezak. “A Review of the Development of Tumor Vasculature and Its Effects on the Tumor Microenvironment.” Hypoxia Volume 5 (April 2017): 21–32.
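Two of AMBER's stochastic ingredients described above, Gamma-distributed division times and a Beta-distributed cell-wise oxygen level within a voxel, can be sketched as follows. All parameter values here are illustrative placeholders, not the calibrated ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# (i) Cell division times from a Gamma distribution.
#     Shape k and scale theta (hours) give mean k * theta = 24 h here.
k, theta = 4.0, 6.0
division_times = rng.gamma(shape=k, scale=theta, size=10_000)

# (ii) Cell-wise oxygen within a voxel from a Beta distribution; in the
#     full model its parameters are calibrated against sub-voxel
#     simulations as a function of crowding and vessel density.
alpha, beta = 2.0, 5.0
oxygen = rng.beta(alpha, beta, size=10_000)

# Cells falling below a (hypothetical) hypoxia threshold would release
# VEGF and trigger angiogenesis in the full model.
hypoxic_fraction = float((oxygen < 0.1).mean())
mean_division_time = float(division_times.mean())
```

Drawing per-cell oxygen from a calibrated distribution, rather than assigning one oxygen value per voxel, is what lets the model reproduce chronic hypoxia inside voxels that are, on average, adequately perfused.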
  • Noemi Andor Moffitt Cancer Center
    "Modeling competition between subpopulations with variable DNA content in resource limited microenvironments"
Resource limitations shape the outcome of competitions between genetically heterogeneous pre-malignant cells. One example of such heterogeneity is the ploidy (DNA content) of pre-malignant cells. A whole-genome duplication (WGD) transforms a diploid cell into a tetraploid one and has been detected in 28-56% of human cancers. If a tetraploid subclone expands, it consistently does so early in tumor evolution, when cell density is still low and competition for nutrients is comparatively weak – an observation confirmed for several tumor types. WGD+ cells need more resources to synthesize increased amounts of DNA, RNA and proteins. To quantify resource limitations and how they relate to ploidy, we performed a pan-cancer analysis of WGD, PET/CT and MRI scans. Segmentation of >20 different organs from >900 PET/CT scans was performed with MOOSE. We observed a strong correlation between organ-wide, population-average estimates of oxygen and the average ploidy of cancers growing in the respective organ (Pearson R = 0.66; P = 0.001). In vitro experiments using near-diploid and near-tetraploid lineages derived from a breast cancer cell line supported the hypothesis that DNA content influences glucose- and oxygen-dependent proliferation, death and migration rates. To model how subpopulations with variable DNA content compete in the resource-limited environment of the human brain, we developed a stochastic state space model of the brain (S3MB). The model discretizes the brain into voxels, whereby the state of each voxel is defined by 8+ variables that are updated over time: stiffness, oxygen, phosphate, glucose, vasculature, dead cells, migrating cells and proliferating cells of various DNA content, and treatment conditions such as radiotherapy and chemotherapy. Fokker-Planck partial differential equations govern the distribution of resources and cells across voxels. We applied S3MB to sequencing and imaging data obtained from a primary GBM patient. 
We performed whole genome sequencing (WGS) of four surgical specimens collected during the 1st and 2nd surgeries of the GBM and used HATCHET to quantify its clonal composition and how it changed between the two surgeries. HATCHET identified two aneuploid subpopulations of ploidy 1.98 and 2.29, respectively. The low-ploidy clone was dominant at the time of the first surgery and became even more dominant upon recurrence. MRI images were available before and after each surgery and were registered to MNI space. The S3MB domain was initialized from 4 mm^3 voxels of the MNI space. T1 post-contrast and T2 FLAIR scans acquired after the 1st surgery informed tumor cell densities per voxel. Magnetic resonance elastography scans and PET/CT scans informed stiffness and glucose access per voxel. We performed a parameter search to recapitulate the GBM’s tumor cell density and ploidy composition before the 2nd surgery. Results suggest that the high-ploidy subpopulation had a higher glucose-dependent proliferation rate but a lower glucose-dependent death rate, resulting in spatial differences in the distribution of the two subpopulations. Understanding how genomics and microenvironments interact to shape cell fate decisions can help pave the way to therapeutic strategies that mimic prognostically favorable microenvironments.
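The voxelized transport described above can be sketched, for the density $u_k$ of cells with DNA content $k$ (an illustrative form, not the authors' exact equations), as:

```latex
\frac{\partial u_k}{\partial t} = \nabla \cdot \big( D_k \nabla u_k \big) + p_k(g, o)\, u_k - d_k(g, o)\, u_k,
```

where $D_k$ is a (possibly stiffness-dependent) motility and $p_k$ and $d_k$ are glucose- ($g$) and oxygen- ($o$) dependent proliferation and death rates, with analogous Fokker-Planck equations governing the resources themselves. Competition between ploidy classes then emerges from how $p_k$ and $d_k$ differ across $k$ in resource-poor voxels.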
  • Paul Macklin Indiana University
    "A new grammar for real-time and reproducible modeling of multicellular interactions in cancer"
Cancer is driven by complex and multiscale interactions among a community of malignant epithelial, stromal, and immune cells. Within these cancer ecosystems, cells exchange a vast array of chemical and physical signals to drive behavioral responses. Agent-based models (ABMs)--which simulate individual cells as software agents with independent states and “rules” that represent how biophysical cues drive their behaviors--are frequently used to simulate and explore these complex systems, but creating these models requires expertise in both biology and computational modeling. ABMs generally require modelers to (1) formulate biological hypotheses on how biophysical signals drive individual behavioral responses, (2) transform these biological hypotheses into mathematics, and (3) implement these mathematical statements in computational code in C++, Python, Java, or other languages. Development of robust models requires close collaboration between computational scientists and experimental and clinical scientists with biological domain expertise. To facilitate collaborative development and assessment of computational models through modeling loops, we have designed a new modeling approach that encourages interaction between teams of multidisciplinary scientists. We present a new modeling grammar based on interpretable hypotheses of how chemical and physical signals affect cell behavior (e.g., “In M0 macrophages, necrotic cell debris increases transformation to M1 macrophages. In malignant epithelial cells, doxorubicin increases apoptosis.”). These hypothesis statements are directly mapped onto mathematical response functions to automatically generate model code in real time. In our approach, new and prior model hypotheses are automatically merged and annotated for reproducibility. Moreover, we can combine knowledge-based rules with data-driven rules learned through high-throughput genomics, text mining, and other techniques. 
When combined with a graphical model editor, multidisciplinary teams can formulate, run, visualize, and refine complex cancer models in real time, accelerating the modeling loop from months to minutes. We will demonstrate the grammar with examples in cancer hypoxia, cancer-immune interactions, and response of cancer and immune cells to therapeutic compounds. We will give a live demonstration to develop, run, and refine a cancer model on-the-fly, and discuss implications of this approach for reproducible team science in multicellular cancer systems biology.
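One plausible way a grammar statement such as “In malignant epithelial cells, doxorubicin increases apoptosis” can map onto a mathematical response function is a saturating (Hill-type) curve; the sketch below is illustrative, and the parameter values are hypothetical rather than taken from the talk.

```python
def hill_response(signal, base_rate, max_rate, half_max, hill_power=2.0):
    """Behavior rate that rises from base_rate toward max_rate as the
    signal increases, with half-maximal effect at signal == half_max."""
    s = (signal / half_max) ** hill_power
    return base_rate + (max_rate - base_rate) * s / (1.0 + s)

# "doxorubicin increases apoptosis": no drug -> baseline apoptosis rate;
# saturating drug -> near-maximal rate (all rates hypothetical, per hour).
low = hill_response(0.0, base_rate=0.01, max_rate=0.5, half_max=0.3)
mid = hill_response(0.3, base_rate=0.01, max_rate=0.5, half_max=0.3)
high = hill_response(10.0, base_rate=0.01, max_rate=0.5, half_max=0.3)
```

Because each statement names a cell type, a signal, a direction, and a behavior, a parser can translate it into one such response function and emit simulation code automatically, which is what makes the grammar both human-readable and executable.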

OTHE Subgroup Contributed Talks

  • Caleb Mayer University of Michigan
    "Mathematical Modeling of Circadian Rhythms Across Populations with Consumer-Grade Wearable Data"
  • With the rise of wearable technology in recent years, large-scale collections of physiological and behavioral data have become readily available for analysis. Mathematical models can effectively process these increasingly accessible signals, such as heart rate and activity, into information about the circadian rhythm of an individual. One subset of these models predicts circadian phase (often via a limit-cycle oscillator framework) based on the light and/or activity pattern of an individual, while another recently developed method extracts heart rate phase and other information by accounting for circadian variation, the impact of activity, and effects due to other physiological processes. In this talk, we adapt these modeling approaches to examine circadian features across populations, including students, cancer patients, and individuals with COVID-19. We discuss large-scale differences across populations and implications of the models for reducing fatigue and tracking disease progression, using a combination of synthetic and real data in order to test algorithm performance. We develop a real-time anomaly detection framework for identifying abnormal changes such as fever in oscillatory physiological signals. Finally, we explore parameter differences across populations and the effects of parameter perturbations on the models.
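In phase-reduced form, the limit-cycle class of models mentioned above predicts circadian phase $\phi$ from a light and/or activity input $I(t)$ roughly as (a generic sketch, not a specific published parameterization):

```latex
\frac{d\phi}{dt} = \omega + B(\phi)\, I(t),
```

where $\omega$ is the intrinsic angular frequency (period near 24 h) and $B(\phi)$ is a phase-response curve describing whether a given stimulus advances or delays the clock depending on when it arrives.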
  • Edward J Hancock The University of Sydney
    "Modelling the Synchronization of the Membrane and Calcium Clocks in Lymphatic Muscle Cells"
Lymphoedema is a common dysfunction of the lymphatic system that results in fluid accumulating between cells. Fluid return through the lymphatic system is primarily provided by contractions of muscle cells in the walls of lymphatic vessels, driven by oscillations in the cell’s membrane potential (M-clock) and calcium ion concentration (C-clock). However, there is an incomplete understanding of the mechanisms involved in these oscillations, restricting the development of pharmacological treatments for dysfunction. Previously, we proposed a minimal model in which self-driven oscillations in the M-clock drove passive oscillations in the C-clock. Here, we extend this model to the case where the M-clock and C-clock oscillators are both self-driven and coupled together. The synchronized behaviour enables the model to match experimental M-clock waveforms for both wild-type and knock-out data, resolving issues with previous models. We also use phase-plane analysis to explain the dual-clock coupling. The model has the potential to help determine mechanisms and identify targets for pharmacological treatment of lymphoedema.
  • Haniyeh Fattahpour Georgia State University
    "Mathematical modeling of cell growth in a tissue engineering scaffold pore"
  • A tissue engineering scaffold is composed of pores lined with cells that allow nutrient-rich culture medium to pass through, thereby encouraging the proliferation of cells. Growth of the tissue is affected by a number of factors, including the flow rate and concentration of nutrients in the feed, the elasticity of the scaffold, and the properties of the cells. Many studies have examined these factors separately, but in this project, we aim to examine all of them together. In this work, we (i) develop a mathematical model that describes the dynamics and concentration of nutrients, scaffold elasticity, and cell proliferation; (ii) solve the model and simulate the cell proliferation process; (iii) develop a reverse algorithm that determines the initial scaffold pore configuration based on the desired geometry of the final tissue.
  • Yue Wu Stanford University
    "Classify dynamic responses to food through time-series glucose data"
Monitoring of health status is complex, and the common approach focuses on one-time measurements of clinical markers during a visit with a healthcare professional. This is not ideal, as physiological states have been shown to change dynamically in response to daily activities (e.g., eating and exercise). For type 2 diabetes and prediabetes patients, continuous glucose monitoring (CGM), together with monitoring of physical activity and sleep, provides important information for individualized intervention suggestions. Here, we used novel and noisy real-life CGM measurements from 43 participants to evaluate personal responses to 7 different types of carbohydrates and to 3 nutrients (fat, protein, and fiber) that tend to mitigate glucose spikes. We built a computational workflow to clean the data, remove errors from manual records and time-zone shifts during travel, and provide quality control for future data. We then analyzed the data through smoothing, dimensionality reduction, and clustering. We discovered consistent separating patterns among common carbohydrates, with rice and beans being the two extremes, and we found notable individual responses to the additional mitigators, which indicate the importance of personalized intervention in diabetes.
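The smoothing, dimensionality reduction, and clustering pipeline described above can be sketched on synthetic data. The glucose-curve shapes, noise level, and two-food contrast below are entirely made up for illustration; the real analysis used CGM recordings from the 43 participants.

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(0, 3, 37)                    # hours after a meal

def response(peak, t_peak):
    """Idealized post-meal glucose excursion above baseline (mg/dL)."""
    return peak * (t / t_peak) * np.exp(1 - t / t_peak)

# Two synthetic "foods": a sharp spike vs. a flatter response,
# each observed 20 times with measurement noise.
curves = np.vstack(
    [response(60, 0.75) + rng.normal(0, 4, t.size) for _ in range(20)] +
    [response(25, 1.50) + rng.normal(0, 4, t.size) for _ in range(20)]
)

# Smoothing: centered moving average along each curve.
kernel = np.ones(5) / 5
smoothed = np.apply_along_axis(
    lambda x: np.convolve(x, kernel, "same"), 1, curves)

# Dimensionality reduction: project onto the first principal component.
centered = smoothed - smoothed.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]

# Clustering: simple 1-D two-way split on the PC1 scores.
labels = (pc1 > pc1.mean()).astype(int)
```

With a strong contrast between the two response shapes, the PC1 scores separate the two foods cleanly, mirroring the rice-vs-beans separation reported in the abstract.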

Organizing committee
  • Laura Kubatko, chair
  • Adriana Dawes
  • Mary Ann Horn
  • Janet Best
  • Adrian Lam
  • Grzegorz Rempala
  • Will Gehring
Scientific organizing committee
  • Adriana Dawes
  • Mary Ann Horn
  • Jane Heffernan
  • Hayriye Gulbudak
  • Jeffrey West
SMB 2023 is being held on the campus of The Ohio State University. As visitors to campus, all SMB participants must follow The Ohio State University Policy on Non-Discrimination, Harassment, and Sexual Misconduct.