PLOS Computational Biology
ISSN / EISSN: 1553-734X / 1553-7358
Current Publisher: Public Library of Science (PLoS) (10.1371)
Total articles ≅ 7,109
Google Scholar h5-index: 79
Latest articles in this journal
PLOS Computational Biology, Volume 15; doi:10.1371/journal.pcbi.1007316
Abstract: Predicting future brain signals is highly sought after, yet difficult to achieve. To predict the future phase of cortical activity at localized ECoG and MEG recording sites, we exploit its predominant, large-scale, spatiotemporal dynamics. The dynamics are extracted from the brain signal through Fourier analysis and principal components analysis (PCA) alone, and cast in a data model that predicts future signal at each site and frequency of interest. The dominant eigenvectors of the PCA that map the large-scale patterns of past cortical phase to future ones take the form of smoothly propagating waves over the entire measurement array. In ECoG data from 3 subjects and MEG data from 20 subjects collected during a self-initiated motor task, mean phase prediction errors were as low as 0.5 radians at local sites, surpassing state-of-the-art within-time-series and event-related models. Prediction accuracy was highest in the delta to beta bands, depending on the subject; it was higher during episodes of high global power but was not strongly dependent on the time course of the task. Prediction did not require past data from the to-be-predicted site. Rather, the best accuracy depended on the availability of long-wavelength information in the model. The utility of large-scale, low-spatial-frequency traveling waves in predicting future phase activity at local sites allows estimation of the error introduced by failing to account for irreducible trajectories in the activity dynamics. Prediction is an important step in scientific progress, often leading to real-world applications. Prediction of future brain activity could lead to improvements in detecting driver and pilot error, or to real-time brain testing using transcranial magnetic stimulation. Previous studies have either supposed that the ‘noise’ level in the cortex is high, setting the prediction bar rather low, or used localized measurements to predict future activity, with modest success.
A long-held but controversial hypothesis is that the cortex is best characterized as a multi-scale dynamic structure, in which the flow of activity at one scale, say, the area responsible for motor control, is inextricably tied to activity at smaller and larger scales, for example within a cortical column and across the whole cortex. We test this hypothesis by analyzing large-scale traveling waves of cortical activity. Like waves arriving on a beach, the ongoing wave motion allows better prediction of future activity than monitoring the local rise and fall; in the best cases the future wave cycle is predicted with an average error as low as 20°. The prediction techniques developed for the present research rely on mathematics related to the quantification of large-scale weather patterns and the analysis of fluid dynamics.
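The prediction scheme described above can be sketched in miniature. The following is an illustrative toy, not the authors' pipeline: a noisy traveling plane wave is simulated on a small 1-D array (a hypothetical stand-in for ECoG/MEG phase data), PCA via SVD extracts the dominant spatial mode, and the mode's temporal phase advance is extrapolated to predict future phase at every site.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 1-D "array" of 16 recording sites carrying a traveling wave
# plus phase noise (hypothetical stand-in for real recordings).
n_sites, n_times = 16, 400
x = np.arange(n_sites)[:, None]          # site positions
t = np.arange(n_times)[None, :]          # time samples
true_phase = 0.4 * x + 0.1 * t           # plane wave: phase = k*x + w*t
z = np.exp(1j * (true_phase + 0.1 * rng.standard_normal((n_sites, n_times))))

# PCA (via SVD) of the complex phase field; the leading spatial mode
# captures the large-scale traveling wave.
U, s, Vh = np.linalg.svd(z, full_matrices=False)
mode = U[:, 0]                           # dominant spatial pattern
a = s[0] * Vh[0, :]                      # its temporal coefficients

# Estimate the mode's phase velocity from the past and extrapolate
# h steps ahead.
omega = np.angle(a[1:] / a[:-1]).mean()  # mean phase advance per step
h = 10
z_pred = mode * (a[-1] * np.exp(1j * omega * h))

# Circular error between predicted and true future phase at each site.
future = 0.4 * x[:, 0] + 0.1 * (n_times - 1 + h)
err = np.angle(np.exp(1j * (np.angle(z_pred) - future)))
mean_abs_err = float(np.abs(err).mean())
```

Note that the prediction at any one site uses only the array-wide mode, echoing the finding that past data from the to-be-predicted site are not required.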
PLOS Computational Biology, Volume 15; doi:10.1371/journal.pcbi.1007497
Abstract: Organisms must ensure that expression of genes is directed to the appropriate tissues at the correct times, while simultaneously ensuring that these gene regulatory systems are robust to perturbation. This idea is captured by a mathematical concept called r-robustness, which says that a system is robust to a perturbation in up to r − 1 randomly chosen parameters. r-robustness implies that the biological system has a small number of sensitive parameters, and that this number can be used as a robustness measure. In this work we use this idea to investigate the robustness of gene regulation using a sequence-level model of the Drosophila melanogaster gene even-skipped. We consider robustness with respect to mutations of the enhancer sequence and with respect to changes in transcription factor concentrations. We find that gene regulation is r-robust with respect to mutations in the enhancer sequence and identify a number of sensitive nucleotides. In both natural and in silico-predicted enhancers, the number of nucleotides that are sensitive to mutation correlates negatively with the length of the sequence, meaning that longer sequences are more robust. The exact degree of robustness obtained depends not only on the DNA sequence, but also on the local concentration of regulatory factors. We find that gene regulation can be remarkably sensitive to changes in transcription factor concentrations at the boundaries of expression features, while it is robust to perturbation elsewhere. Robustness assures that organisms can survive when faced with unpredictable environments or genetic mutations. In this work, we characterize the robustness of gene regulation using an experimentally validated model of the regulation of the Drosophila gene even-skipped. We use a mathematically precise definition of robustness that allows us to make quantitative comparisons of robustness between different genetic sequences or between different nuclei.
From this analysis, we found that genetic sequences that were not previously known to be important for gene regulation reduce sensitivity to genetic perturbation. In contrast, we found that gene regulation can be very sensitive to the concentrations of regulators. This extreme sensitivity was only observed at the boundaries of expression features, where switch-like behavior is desirable. This highlights the importance of considering context when assessing robustness.
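The idea that a small count of sensitive parameters can serve as a robustness measure can be illustrated with a toy model (hypothetical, not the even-skipped model): expression is a sigmoidal function of a weighted parameter sum with only two large weights, and we count the parameters whose individual perturbation moves the output beyond a tolerance.

```python
import numpy as np

# Toy regulatory model: 2 "sensitive" parameters with large weights,
# 18 with small weights (all values illustrative).
weights = np.array([4.0, 3.5] + [0.05] * 18)
p0 = np.ones(weights.size)
expr = lambda p: 1.0 / (1.0 + np.exp(-(weights @ p - 8.0)))

def sensitive_parameters(tol=0.1, delta=0.2):
    """Indices whose +/-20% single-parameter perturbation moves the
    expression level by more than tol."""
    base = expr(p0)
    hits = []
    for i in range(p0.size):
        for sign in (+1, -1):
            p = p0.copy()
            p[i] *= 1 + sign * delta
            if abs(expr(p) - base) > tol:
                hits.append(i)
                break
    return hits

sens = sensitive_parameters()
# r-robustness says the system tolerates perturbation of up to r-1
# randomly chosen parameters; the number of sensitive parameters is
# the robustness measure used in the abstract above.
r_measure = len(sens)
```

Because the sigmoid sits in its steep region here, only the two heavily weighted parameters register as sensitive; the same count-based measure is what allows quantitative comparisons between sequences or nuclei.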
PLOS Computational Biology, Volume 15; doi:10.1371/journal.pcbi.1007397
Abstract: Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer’s retina and radically influences an object’s retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object, since otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute, given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object’s retinal motion, improving the accuracy with which motion signals represented the object’s movement direction. Research has shown that the accuracy with which humans perceive object motion during self-motion improves in the presence of stereo cues. Using a neural modelling approach, we explore whether this finding can be explained through improved estimation of the retinal motion induced by self-motion.
Our results show that depth cues that provide information about scene structure may have a large effect on the specificity with which the neural mechanisms for motion perception represent the visual self-motion signal. This in turn enables effective removal of the retinal motion due to self-motion when the goal is to perceive object motion relative to the stationary world. These results reveal a hitherto unknown critical function of stereo tuning in the MT-MST complex, and shed important light on how the brain may recruit signals from upstream and downstream brain areas to simultaneously perceive self-motion and object motion.
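Why depth estimates matter for removing the self-motion component can be shown with a minimal geometric sketch (pinhole optics with illustrative values; this is not the MT/MST model): the translational flow at a retinal point scales with inverse depth, so subtracting a flow computed from the wrong depth leaves a residual error in the recovered object motion.

```python
import numpy as np

f = 1.0                                # focal length (illustrative)
T = np.array([0.0, 0.0, 1.0])          # observer translation (forward)
x, y = 0.2, 0.1                        # image position of the object
Z_true = 4.0                           # true depth of the object
obj_motion = np.array([0.05, -0.02])   # object's independent retinal motion

def self_flow(Z):
    # Translational optic-flow component at (x, y) for a point at depth Z.
    return np.array([x * T[2] - f * T[0], y * T[2] - f * T[1]]) / Z

# Retinal motion = self-motion flow (depth-dependent) + object motion.
retinal = self_flow(Z_true) + obj_motion

recovered_good = retinal - self_flow(Z_true)  # accurate depth (e.g. stereo)
recovered_bad = retinal - self_flow(2.0)      # depth misestimated by 2x

err_good = float(np.linalg.norm(recovered_good - obj_motion))
err_bad = float(np.linalg.norm(recovered_bad - obj_motion))
```

With the true depth the object motion is recovered exactly; halving the depth estimate doubles the subtracted flow and corrupts the judged direction, which is the failure mode stereo cues are proposed to prevent.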
PLOS Computational Biology, Volume 15; doi:10.1371/journal.pcbi.1007476
Abstract: In many sensory systems the neural signal is coded by the coordinated response of heterogeneous populations of neurons. What computational benefit does this diversity confer on information processing? We derive an efficient coding framework assuming that neurons have evolved to communicate signals optimally given natural stimulus statistics and metabolic constraints. Incorporating nonlinearities and realistic noise, we study optimal population coding of the same sensory variable using two measures: maximizing the mutual information between stimuli and responses, and minimizing the error incurred by the optimal linear decoder of responses. We apply our theory to a commonly observed splitting of sensory neurons into ON and OFF cells that signal stimulus increases or decreases, and to populations in which all neurons have monotonically increasing responses of the same type (ON). Depending on the optimality measure, we make different predictions about how to optimally split a population into ON and OFF cells, and how to allocate the firing thresholds of individual neurons given realistic stimulus distributions and noise; these predictions accord with certain biases observed experimentally. The brain processes external stimuli through special receptor cells and associated sensory circuits. In many sensory systems the population of neurons splits into ON and OFF cells, namely cells that signal an increase vs. a decrease of the sensory variable. This happens in brains from worm to man, and in the sensing of temperature, odor, light, and sound. Here we analyze the possible benefits of “pathway splitting” using information theory. We derive the most efficient split of a pathway into ON and OFF neurons and predict the response range of each neuron type as a function of noise and stimulus statistics. Our theory offers insight into this ubiquitous phenomenon of neural organization and suggests new experiments in diverse sensory systems.
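A minimal information-theoretic sketch of pathway splitting (noiseless binary neurons and a Gaussian stimulus; a deliberately simplified setting, not the paper's full model with noise and nonlinearities): two neurons partition the stimulus axis into the same three intervals whether they are an ON/ON or an ON/OFF pair, so they transmit the same information, but the ON/OFF pair does it with fewer expected spikes, a proxy for metabolic cost.

```python
import math

# Standard-normal CDF and discrete entropy in bits.
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
H = lambda ps: -sum(p * math.log2(p) for p in ps if p > 0)

t = 0.6745  # ~75th percentile of a standard Gaussian (illustrative)

# Noiseless binary coding: mutual information = entropy of the
# response pattern = entropy of the interval occupancy probabilities.
p_intervals = [Phi(-t), Phi(t) - Phi(-t), 1 - Phi(t)]
info_bits = H(p_intervals)  # same for ON/ON and ON/OFF at these thresholds

# Expected spike count per stimulus (metabolic proxy):
# ON/ON: thresholds -t and +t, each neuron fires above its threshold.
spikes_on_on = (1 - Phi(-t)) + (1 - Phi(t))
# ON/OFF: ON fires above +t, OFF fires below -t.
spikes_on_off = (1 - Phi(t)) + Phi(-t)
```

Here the ON/OFF pair halves the expected spike count at equal information, which hints at why metabolic constraints can favor splitting; with realistic noise the two arrangements differ in information as well, which is where the paper's predictions come from.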
PLOS Computational Biology, Volume 15; doi:10.1371/journal.pcbi.1007268
Abstract: The origin and functions of intermittent transitions among sleep stages, including short awakenings and arousals, constitute a challenge to the current homeostatic framework for sleep regulation, which focuses on factors modulating sleep over large time scales. Here we propose that the complex micro-architecture characterizing the sleep-wake cycle results from an underlying non-equilibrium critical dynamics, bridging collective behaviors across spatio-temporal scales. We investigate θ and δ wave dynamics in control rats and in rats with lesions of sleep-promoting neurons in the parafacial zone. We demonstrate that intermittent bursts in θ and δ rhythms exhibit a complex temporal organization, with long-range power-law correlations and a robust duality of power-law (θ-bursts, active phase) and exponential-like (δ-bursts, quiescent phase) duration distributions, typical features of non-equilibrium systems self-organizing at criticality. Crucially, this temporal organization relates to anti-correlated coupling between θ- and δ-bursts, and is independent of the dominant physiologic state and of the lesions, a solid indication of a basic principle in sleep dynamics. Sleep exhibits intermittent transitions among sleep stages and short awakenings, with continuous fluctuations within stages that trigger micro-states and brief arousals. Despite the established association between dominant brain rhythms and physiologic states, the nature and dynamics of sleep-wake and sleep-stage transitions remain poorly understood. Homeostatic models of sleep regulation at ultradian and circadian scales do not address empirical observations of spontaneous transitions in sleep micro-architecture, and do not account for the emergent complex structure of sleep stages and arousals, or for the related dynamics of bursts in cortical rhythms.
Empirical observations of intrinsic bursts in cortical activity, and of corresponding intermittent transitions in sleep micro-architecture, raise the hypothesis that non-equilibrium critical dynamics underlie sleep regulation at short time scales. We analyze θ and δ cortical rhythms in control rats and in rats with lesions in the parafacial zone, which plays a significant role in the regulation of slow-wave sleep. The results demonstrate that critical dynamics underlie cortical activation during sleep and wake, and lay the foundation for a new paradigm that considers sleep micro-architecture as the result of a non-equilibrium process and of self-organization among neuronal assemblies to maintain a critical state, in contrast to the homeostasis paradigm of sleep regulation at large time scales.
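The power-law vs. exponential duality of burst durations can be diagnosed from survival functions: an exponential tail gives log S(t) linear in t, a power-law tail gives log S(t) linear in log t. The sketch below uses synthetic durations (not the rat recordings) with illustrative exponents to show the distinction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic burst durations: "theta" from a Pareto (power law, tail
# exponent 1.5, xmin = 1), "delta" from an exponential (mean 2).
theta = (1 - rng.random(20000)) ** (-1 / 1.5)
delta = rng.exponential(scale=2.0, size=20000)

def survival(samples, grid):
    # Empirical survival function S(t) = P(duration > t).
    return np.array([(samples > g).mean() for g in grid])

grid = np.linspace(1.0, 8.0, 30)
S_theta = survival(theta, grid)
S_delta = survival(delta, grid)

# Linearity of each tail under the two candidate coordinate systems,
# scored by absolute correlation.
lin = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
theta_powerlaw_fit = lin(np.log(grid), np.log(S_theta))  # log-log
theta_exp_fit = lin(grid, np.log(S_theta))               # semi-log
delta_exp_fit = lin(grid, np.log(S_delta))
delta_powerlaw_fit = lin(np.log(grid), np.log(S_delta))
```

The power-law sample is more linear in log-log coordinates and the exponential sample in semi-log coordinates, the same signature used to classify active vs. quiescent phases in non-equilibrium systems at criticality.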
PLOS Computational Biology, Volume 15; doi:10.1371/journal.pcbi.1007443
Abstract: Human decisions can be habitual or goal-directed, also known as model-free (MF) or model-based (MB) control. Previous work suggests that the balance between the two decision systems is impaired in psychiatric disorders such as compulsion and addiction, via an overreliance on MF control. However, little is known about whether this balance can be altered through task training. Here, 20 healthy participants performed a well-established two-step task that differentiates MB from MF control, across five training sessions. We used computational modelling and functional near-infrared spectroscopy to assess changes in decision-making and brain hemodynamics over time. Mixed-effects modelling revealed no substantial overall changes in MF and MB behavior across training. Although our behavioral and brain findings show task-induced changes in learning rates, these parameters have no direct relation to either MF or MB control or to the balance between the two systems, and thus do not support the assumption of training effects on MF or MB strategies. Our findings indicate that training on the two-step paradigm in its current form does not support a shift in the balance between MF and MB control. We discuss these results with respect to implications for restoring the balance between MF and MB control in psychiatric conditions. Psychiatric conditions such as compulsion or addiction are associated with an overreliance on habitual, or model-free, decision-making. Goal-directed, or model-based, decision-making may protect against such overreliance. We therefore asked whether model-free control could be reduced, and model-based control strengthened, via task training. We used the well-characterized two-step task that differentiates model-based from model-free actions. Our results suggest that training on the current form of the two-step task does not support a shift in the balance between model-free and model-based strategies.
Factors such as devaluation, demotivation or automatization during training may play a role in the absence of a training effect. Future studies could adapt the two-step task so as to separate such factors from decision-making strategies.
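The MF/MB distinction in the two-step task is typically modelled as a weighted mixture of two valuation systems. The sketch below is a stripped-down illustration of that hybrid scheme with invented parameter values, not the study's fitted model: MB values come from the known transition structure, MF values from cached TD updates, and a weight w mixes them.

```python
import numpy as np

# Stage-2 state values, assumed already learned (illustrative).
V2 = np.array([0.8, 0.2])
# Common/rare transition structure: T[a, s'] = P(second-stage state s'
# given first-stage action a); action 0 usually reaches state 0.
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])

# Model-based values: expectation over the known transition model.
Q_mb = T @ V2

# Model-free values: cached from sampled experience by TD updates.
rng = np.random.default_rng(3)
alpha, Q_mf = 0.1, np.zeros(2)
for _ in range(2000):
    a = rng.integers(2)
    s2 = rng.choice(2, p=T[a])
    r = float(rng.random() < V2[s2])     # reward prob = state value
    Q_mf[a] += alpha * (r - Q_mf[a])

# Hybrid controller: the balance parameter w is what training was
# hypothesized (and found not) to shift.
w = 0.6
Q_hybrid = w * Q_mb + (1 - w) * Q_mf
```

In fitted versions of this model the weight w indexes the MF/MB balance per participant; the study's null result is that w-like quantities did not move across the five sessions.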
PLOS Computational Biology, Volume 15; doi:10.1371/journal.pcbi.1007488
Abstract: Modeling cell differentiation from omics data is an essential problem in systems biology research. Although many algorithms have been established to analyze scRNA-seq data, the problems of inferring the pseudo-time of cells and quantifying their potency have not yet been satisfactorily solved. Here, we propose the Landscape of Differentiation Dynamics (LDD) method, which calculates cell potentials and constructs their differentiation landscape by a continuous birth-death process from scRNA-seq data. From the viewpoint of stochastic dynamics, we exploited the features of the differentiation process and quantified the differentiation landscape based on the source-sink diffusion process. In comparison with other scRNA-seq methods on seven benchmark datasets, we found that LDD could accurately and efficiently build the evolution tree of cells with pseudo-time, and in particular quantify their differentiation landscape in terms of potency. This study provides not only a computational tool to quantify cell potency, or the Waddington potential landscape, based on scRNA-seq data, but also novel insights for understanding the cell differentiation process from a dynamic perspective. Quantifying the Waddington landscape of cell differentiation from high-throughput data is a challenging problem in systems biology and biophysics. Here, we propose a theoretical method named LDD (Landscape of Differentiation Dynamics), which builds cell potentials and constructs their differentiation landscape by a continuous birth-death process from scRNA-seq data. This method exploits the dynamical features of the differentiation process well, and thus quantifies the differentiation landscape accurately. We show that LDD can accurately and efficiently build the evolution tree of cells with pseudo-time, and in particular quantify their differentiation landscape in terms of potency.
Taken together, this study provides not only a computational tool to quantify cell potency based on scRNA-seq data, but also a theoretical approach to understand the cell differentiation process from a dynamic perspective.
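How a birth-death process yields a landscape can be illustrated in one dimension (a toy, not the LDD algorithm): detailed balance gives the stationary distribution of the chain, and a Waddington-style potential follows as U = -log(pi), so high-probability states sit in valleys.

```python
import numpy as np

# 1-D birth-death chain with state-dependent birth rates and constant
# death rate (all values illustrative).
n = 50
states = np.arange(n)
birth = 0.5 + 0.4 * np.sin(states / 8.0)
death = np.full(n, 0.5)

# Detailed balance for a birth-death chain:
#   pi[i+1] * death[i+1] = pi[i] * birth[i]
pi = np.ones(n)
for i in range(n - 1):
    pi[i + 1] = pi[i] * birth[i] / death[i + 1]
pi /= pi.sum()

# Potential landscape: valleys are stable (high-occupancy) states.
U = -np.log(pi)
valley = int(np.argmin(U))   # state where birth/death balance flips
```

Here the valley lands where the birth rate falls back below the death rate; in the LDD setting, analogous sinks of the fitted process correspond to low-potency, differentiated cell states.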
PLOS Computational Biology, Volume 15; doi:10.1371/journal.pcbi.1007426
Abstract: Selective sweeps, the genetic footprint of positive selection, have been extensively studied in the past decades, with dozens of methods developed to identify swept regions. However, these methods suffer from both false positive and false negative reports, and the candidates identified with different methods are often inconsistent with each other. We propose that a biological cause of this problem can be population subdivision, and a technical cause can be incomplete, or inaccurate, modeling of the dynamic process associated with sweeps. Here we used simulations to show how these effects interact and potentially cause bias. In particular, we show that sweeps may be misclassified as either hard or soft when the true time stage of a sweep and the one implied, or presupposed, by the model do not match. We call this “temporal misclassification”. Similarly, “spatial misclassification (softening)” can occur when hard sweeps, which are imported by migration into a new subpopulation, are falsely identified as soft. This can easily happen in the case of local adaptation, i.e. when the sweeping allele is not under positive selection in the new subpopulation, and the underlying model assumes panmixia instead of substructure. The claim that most sweeps in the evolutionary history of humans were soft may have to be reconsidered in the light of these findings. Identifying the traces of adaptive evolution is still difficult, in particular when populations are not in equilibrium. Using forward-in-time simulations, we studied adaptation by selective sweeps in populations that are divided into demes with limited migration among them. We applied different sweep tests, whose sensitivities are found to vary widely across demographic scenarios and temporal stages. First, the temporal stage of a sweep (ongoing vs. completed) significantly affects detection, especially when machine learning algorithms are used and training and test stages do not match.
Second, imported alleles from a neighboring deme with local adaptation can result in spurious sweep signals. In both cases, signals are often detected as “soft sweeps” (adaptation from standing variation) while in fact they are “hard sweeps” (adaptation from single mutation), originating in the same subpopulation in the former case and in some other subpopulation in the latter case. We call these phenomena “temporal” and “spatial softening”. Finally, under low migration, the time window in which a sweep can be detected becomes very narrow, and power tends to be low. Generally, however, haplotype-based methods seem to be less affected than frequency-spectrum-based tests.
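One family of haplotype-based statistics used to distinguish hard from soft sweeps is the H1/H12/H2 set (Garud et al.-style homozygosity measures; the toy frequencies below are invented, not simulated sweeps). A high H2/H1 ratio points to several frequent haplotypes (soft sweep), a low ratio to one dominant haplotype (hard sweep).

```python
import numpy as np

def h_stats(freqs):
    """H1, H12, and H2/H1 from sorted haplotype frequencies."""
    p = np.sort(np.asarray(freqs, dtype=float))[::-1]
    h1 = float((p ** 2).sum())                       # haplotype homozygosity
    h12 = float((p[0] + p[1]) ** 2 + (p[2:] ** 2).sum())  # pool top two
    h2 = h1 - float(p[0] ** 2)                       # drop the top haplotype
    return h1, h12, h2 / h1

hard = h_stats([0.85, 0.05, 0.05, 0.05])   # one sweeping haplotype
soft = h_stats([0.45, 0.40, 0.10, 0.05])   # two frequent haplotypes
```

"Temporal" and "spatial softening" are exactly cases where such statistics, or models built on them, read a hard sweep's signal as soft because the sweep's stage or the population structure violates the method's assumptions.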
PLOS Computational Biology, Volume 15; doi:10.1371/journal.pcbi.1006668
Abstract: The titre of virus in a dengue patient and the duration of this viraemia have a profound effect on whether or not a mosquito will become infected when it feeds on the patient, and this, in turn, is a key driver of the magnitude of a dengue outbreak. How to assess the heterogeneity of viral dynamics in dengue-infected patients, and how to treat it precisely, remain uncertain. Infection onset, patient physiology and immune response are thought to play major roles in the development of the viral load. Research has explored the interference and spontaneous generation of defective virus particles, but has not examined both the antibody response and defective particles during natural infection. We explore the intrinsic variability in the within-host dynamics of viraemias for a population of patients using the method of population of models (POMs). A dataset from 208 patients is used to initially calibrate 20,000 models for the infection kinetics for each of the four dengue virus serotypes. The calibrated POMs suggest that naturally generated defective particles may interfere with the viraemia, but that the generated defective virus particles are not sufficient to reduce high fever and viraemia duration. The effect of adding excess defective dengue virus interfering particles to patients as a therapeutic is evaluated using the calibrated POMs in a bang-bang (on-off, or two-step) optimal control setting. Bang-bang control is a class of binary feedback control that turns either ‘ON’ or ‘OFF’ at different time points, determined by the system feedback. Here, the bang-bang control estimates the mathematically optimal dose and duration of the intervention for each model in the POM set. Dengue virions with deletions or defects in their genomes can be recovered from dengue patients. These defective viruses can only replicate with the assistance of fully functional viruses, and they reduce the yield of the fully functional viruses. They are known as defective interfering (DI) particles.
By administering additional, defined DI particles to patients, it may be possible to reduce the titre and duration of their viraemia. This, in turn, may reduce the severity of the disease and the likelihood that the dengue virus will be passed from the patient to a mosquito vector. This study estimates the number of DI particles that would need to be administered, and over what period, to have a significant effect on patient viraemia and subsequent dengue fever severity.
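The on/off logic of bang-bang DI dosing can be sketched with toy dynamics (illustrative logistic viral growth and linear DI clearance; these equations and parameter values are invented, not the paper's calibrated POMs): infusion switches ON whenever the viral titre exceeds a threshold and OFF below it.

```python
# Toy within-host model: logistic virus growth minus DI-mediated
# clearance, with bang-bang DI infusion (all parameters illustrative).
r, K, c, decay = 0.9, 1e6, 2e-5, 0.5
dose, threshold, dt = 5e4, 1e4, 0.01

def simulate(controlled):
    V, D, peak = 100.0, 0.0, 0.0
    for _ in range(3000):                       # 30 time units, Euler steps
        u = dose if (controlled and V > threshold) else 0.0  # bang-bang law
        dV = r * V * (1 - V / K) - c * D * V    # virus: growth - interference
        dD = u - decay * D                      # DI particles: dosing - decay
        V = max(V + dt * dV, 0.0)
        D = max(D + dt * dD, 0.0)
        peak = max(peak, V)
    return peak

peak_untreated = simulate(False)
peak_treated = simulate(True)
```

In the optimal-control setting of the paper, the switching times and dose are chosen mathematically per model in the POM set; here the threshold feedback merely illustrates how a binary ON/OFF law can cap the peak titre well below the untreated level.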
PLOS Computational Biology, Volume 15; doi:10.1371/journal.pcbi.1007451
Abstract: Cancer is driven by genetic mutations that dysregulate pathways important for proper cell function. Therefore, discovering these cancer pathways and the order in which they are dysregulated is key to understanding and treating cancer. However, the heterogeneity of mutations between different individuals makes this challenging and requires that cancer progression be studied in a subtype-specific way. To address this challenge, we provide a mathematical model, called the Subtype-specific Pathway Linear Progression Model (SPM), that simultaneously captures cancer subtypes, pathways, and the order of dysregulation of the pathways within each subtype. Experiments with synthetic data indicate that SPM is more robust than an existing method to problem specifics, including noise. Moreover, experimental results on glioblastoma multiforme and colorectal adenocarcinoma show the consistency of SPM’s results with existing knowledge and its superiority to an existing method in certain cases. The implementation of our method is available at https://github.com/Dalton386/SPM. Different biological processes within a cell are performed through biological pathways. A biological pathway consists of a group of proteins and other molecules and the complex interactions between them. It is known that cancer arises due to the malfunction, also known as dysregulation, of one or more pathways. Interestingly, a dysregulation in a patient is often caused by a mutation in only one (and not more) molecule in the pathway. This phenomenon is known as mutual exclusivity of mutations and can be used to identify groups of genes forming (cancer) pathways. The same type of cancer in different patients can arise from different trajectories of dysregulation in possibly different pathways, resulting in cancer heterogeneity. Cancer heterogeneity implies that cancer treatment should be personalized according to each patient’s specific characteristics and mutations.
Therefore, grouping patients based on their pathway dysregulation trajectories into cancer subtypes can help identify different cancer mechanisms, inform subtype-specific treatment strategies and improve efficacy. In this paper, we provide a method that uses patients’ mutation information captured by DNA sequencing and identifies dysregulated pathways (i.e. molecules involved in each cancer pathway), cancer subtypes (i.e. groups of patients sharing a common pathway dysregulation trajectory) and subtype-specific pathway dysregulation orders (i.e. trajectories defining the different subtypes). The results on synthetic and real-world data indicate that the method can recover meaningful information about the progression of cancer in different groups of patients.
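The mutual-exclusivity signal described above can be scored with a simple coverage-minus-overlap objective (a Dendrix-style sketch on an invented patient-by-gene mutation matrix; SPM's exact objective may differ): a gene set whose mutations rarely co-occur in the same patient scores higher than one with overlapping hits.

```python
import numpy as np

# Rows = patients, columns = genes, entries = 1 if the gene is mutated
# in that patient (toy data).
M = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
])

def exclusivity_score(M, genes):
    sub = M[:, genes]
    covered = int((sub.sum(axis=1) > 0).sum())  # patients hit at least once
    total = int(sub.sum())                      # total mutations in the set
    return 2 * covered - total                  # penalizes co-occurring hits

exclusive_set = exclusivity_score(M, [0, 1, 2])   # near-exclusive candidate
overlapping_set = exclusivity_score(M, [2, 3])    # genes co-mutated in a patient
```

Gene sets maximizing such a score are candidate pathways; SPM additionally ties the recovered pathways to subtype-specific dysregulation orders.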