PLOS Computational Biology
ISSN / EISSN: 1553-734X / 1553-7358
Current Publisher: Public Library of Science (PLOS) (10.1371)
Total articles ≈ 7,616
Google Scholar h5-index: 79
Latest articles in this journal
PLOS Computational Biology, Volume 16; doi:10.1371/journal.pcbi.1008081
We rarely experience difficulty picking up objects, yet of all potential contact points on the surface, only a small proportion yield effective grasps. Here, we present extensive behavioral data alongside a normative model that correctly predicts human precision grasping of unfamiliar 3D objects. We tracked participants’ forefinger and thumb as they picked up objects composed of 10 wood and brass cubes, configured to tease apart the effects of shape, weight, orientation, and mass distribution. Grasps were highly systematic and consistent across repetitions and participants. We employed these data to construct a model that combines five cost functions related to force closure, torque, natural grasp axis, grasp aperture, and visibility. Even without free parameters, the model predicts individual grasps almost as well as different individuals predict one another’s, but fitting the weights reveals the relative importance of the different constraints. The model also accurately predicts human grasps on novel 3D-printed objects with more naturalistic geometries and is robust to perturbations in its key parameters. Together, the findings provide a unified account of how we successfully grasp objects of different 3D shape, orientation, mass, and mass distribution. A model based on extensive behavioral data unifies the varied and fragmented literature on human grasp selection by correctly predicting human grasps across a wide variety of conditions.
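The cost-combination idea described above can be illustrated with a toy computation: score a set of candidate grasps by a weighted sum of normalised cost terms and select the minimum. Only two of the five cost terms are sketched, and all names, weights, and parameter values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate grasps: each row holds thumb (x, y, z) and
# forefinger (x, y, z) contact points, in metres, on an imaginary object.
candidates = rng.uniform(-0.1, 0.1, size=(200, 6))

def normalize(c):
    """Rescale a cost vector to [0, 1] so different cost terms are comparable."""
    return (c - c.min()) / (c.max() - c.min() + 1e-12)

# Two stand-in cost terms; the full model also scores force closure,
# natural grasp axis, and visibility.
def torque_cost(g):
    """Distance of the grasp midpoint from the object centre (at the origin)."""
    return np.linalg.norm((g[:, :3] + g[:, 3:]) / 2, axis=1)

def aperture_cost(g):
    """Deviation of the grip aperture from a comfortable 5 cm."""
    return np.abs(np.linalg.norm(g[:, :3] - g[:, 3:], axis=1) - 0.05)

costs = np.stack([normalize(torque_cost(candidates)),
                  normalize(aperture_cost(candidates))])
weights = np.array([0.6, 0.4])          # per-constraint weights (made up)
best = int(np.argmin(weights @ costs))  # predicted grasp = lowest total cost
print(candidates[best])
```

Fitting `weights` to behavioral data, as the paper does for its five constraints, would reveal the relative importance of each cost term.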
PLOS Computational Biology, Volume 16; doi:10.1371/journal.pcbi.1007983
Many large-scale functional connectivity studies have emphasized the importance of communication through increased inter-region correlations during task states. In contrast, local circuit studies have demonstrated that task states primarily reduce correlations among pairs of neurons, likely enhancing their information coding by suppressing shared spontaneous activity. Here we sought to adjudicate between these conflicting perspectives, assessing whether co-active brain regions during task states tend to increase or decrease their correlations. We found that variability and correlations primarily decrease across a variety of cortical regions in two highly distinct data sets: non-human primate spiking data and human functional magnetic resonance imaging data. Moreover, this observed variability and correlation reduction was accompanied by an overall increase in dimensionality (reflecting less information redundancy) during task states, suggesting that decreased correlations increased information coding capacity. We further found in both spiking and neural mass computational models that task-evoked activity increased the stability around a stable attractor, globally quenching neural variability and correlations. Together, our results provide an integrative mechanistic account that encompasses measures of large-scale neural activity, variability, and correlations during resting and task states. Statistical estimates of correlated neural activity and variability are widely used to characterize neural systems during different states. However, there is a conceptual gap in how these measures are used and interpreted between the human neuroimaging and non-human primate electrophysiology literatures. For example, in the human neuroimaging literature, “functional connectivity” is often used to refer to correlated activity, while in the non-human primate electrophysiology literature, the equivalent term is “noise correlation”.
In an effort to unify these two perspectives under a single theoretical framework, we provide empirical evidence from human functional magnetic resonance imaging data and non-human primate spiking data that functional connectivity and noise correlations reveal similar statistical patterns during task states. In short, we found that task states primarily quench neural variability and correlations in both data sets. To provide a theoretically rigorous account capable of explaining this phenomenon across both data sets, we use mean-field dynamical systems modeling to demonstrate the deterministic relationship between task-evoked activity, neural variability, and correlations. Together, we provide an integrative account, showing that task-evoked activity quenches neural variability and correlations in large-scale neural systems.
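The two statistics at the heart of this entry, pairwise correlation and dimensionality, can be illustrated with a toy computation on simulated data: a "rest" condition with a strong shared fluctuation and a "task" condition in which that shared fluctuation is quenched. The generative model below is a deliberately simple assumption, not the fMRI or spiking pipeline used in the paper; dimensionality is estimated with the participation ratio, one common choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_pairwise_corr(x):
    """Mean off-diagonal correlation across units (rows = units, cols = time)."""
    c = np.corrcoef(x)
    return c[np.triu_indices_from(c, k=1)].mean()

def participation_ratio(x):
    """Dimensionality estimate: (sum of eigvals)^2 / sum of squared eigvals
    of the covariance matrix; larger = less redundant activity."""
    ev = np.linalg.eigvalsh(np.cov(x))
    return ev.sum() ** 2 / (ev ** 2).sum()

n_units, n_time = 50, 2000
shared = rng.standard_normal(n_time)

# "Rest": strong shared fluctuation -> high correlations, low dimensionality.
rest = 1.0 * shared + rng.standard_normal((n_units, n_time))
# "Task": shared fluctuation quenched -> lower correlations, higher dimensionality.
task = 0.2 * shared + rng.standard_normal((n_units, n_time))

print(mean_pairwise_corr(rest), mean_pairwise_corr(task))
print(participation_ratio(rest), participation_ratio(task))
```

The quenched condition shows both the correlation decrease and the dimensionality increase reported in the abstract.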
PLOS Computational Biology, Volume 16; doi:10.1371/journal.pcbi.1008041
Hypoxia-activated prodrugs (HAPs) present a conceptually elegant approach to not only overcome, but better yet, exploit intra-tumoural hypoxia. Despite being successful in vitro and in vivo, HAPs are yet to achieve successful results in clinical settings. It has been hypothesised that this lack of clinical success can, in part, be explained by insufficiently stringent clinical screening used to select which tumours are suitable for HAP treatments. Taking a mathematical modelling approach, we investigate how tumour properties and HAP-radiation scheduling influence treatment outcomes in simulated tumours. The following key results are demonstrated in silico: (i) HAP and ionising radiation (IR) monotherapies may attack tumours in dissimilar, and complementary, ways. (ii) HAP-IR scheduling may impact treatment efficacy. (iii) HAPs may function as IR treatment intensifiers. (iv) The spatio-temporal intra-tumoural oxygen landscape may impact HAP efficacy. Our in silico framework is based on an on-lattice, hybrid, multiscale cellular automaton spanning three spatial dimensions. The mathematical model for tumour spheroid growth is parameterised by multicellular tumour spheroid (MCTS) data. When cancer patients present with solid tumours, the tumours often contain regions that are oxygen-deprived or, in other words, hypoxic. Hypoxic tumour regions are more resistant to conventional anti-cancer therapies, such as chemotherapy and radiotherapy, and therefore tumour hypoxia may complicate treatments. Hypoxia-activated prodrugs constitute a conceptually elegant approach to not only overcome, but better yet, exploit tumour hypoxia. Hypoxia-activated prodrugs act as Trojan horses: theoretically harmless vehicles that are converted into warheads when they reach their targets, hypoxic tumour regions.
Despite being conceptually clever and successful in experimental settings, hypoxia-activated prodrugs are yet to achieve successful results in clinical trials. It has been hypothesised that this lack of clinical success can, in part, be explained by insufficiently stringent clinical screening used to select which tumours are suitable for hypoxia-activated prodrug treatments. In this article, we investigate how simulated tumours with different oxygen landscapes respond to anti-cancer treatments that include hypoxia-activated prodrugs, either alone or in combination with radiotherapy. Our simulation framework is based on a mathematical model that describes how individual cancer cells in a tumour divide and respond to treatments. We demonstrate that the efficacy of hypoxia-activated prodrugs depends on both the treatment scheduling and the oxygen landscapes of the simulated tumours.
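The "complementary attack" result (i) can be sketched on a toy oxygen lattice: prodrug kill rises as oxygen falls, radiation kill rises as oxygen rises. Every functional form and parameter value below is an illustrative assumption, not the MCTS-parameterised cellular automaton used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 2-D oxygen landscape: oxygen partial pressure (mmHg, made-up scale)
# falls off with distance from a vessel at the left edge of a 50x50 lattice.
x = np.linspace(0, 1, 50)
oxygen = 40 * np.exp(-3 * x)[None, :] * np.ones((50, 1))

def hap_kill_prob(pO2, k50=5.0, n=2):
    """Prodrug kill probability: activation rises steeply below ~k50 mmHg."""
    return k50 ** n / (k50 ** n + pO2 ** n)

def ir_kill_prob(pO2, omin=0.2, omax=0.6, k=3.0):
    """Radiation kill probability: more effective at high oxygen
    (a crude stand-in for the oxygen enhancement effect)."""
    return omin + (omax - omin) * pO2 / (pO2 + k)

survive_hap = rng.random(oxygen.shape) > hap_kill_prob(oxygen)
survive_ir = rng.random(oxygen.shape) > ir_kill_prob(oxygen)

# The prodrug clears the hypoxic right side, radiation the well-oxygenated
# left side; combined, they attack the tumour in complementary ways.
print(survive_hap.mean(), survive_ir.mean(), (survive_hap & survive_ir).mean())
```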
PLOS Computational Biology, Volume 16; doi:10.1371/journal.pcbi.1008080
Neural computation is determined by neurons’ dynamics and circuit connectivity. Uncertain and dynamic environments may require neural hardware to adapt to different computational tasks, each requiring different connectivity configurations. At the same time, connectivity is subject to a variety of constraints, placing limits on the possible computations a given neural circuit can perform. Here we examine the hypothesis that the organization of neural circuitry favors computational flexibility: that it makes many computational solutions available, given physiological constraints. From this hypothesis, we develop models of connectivity degree distributions based on constraints on a neuron’s total synaptic weight. To test these models, we examine reconstructions of the mushroom bodies from the first instar larva and adult Drosophila melanogaster. We perform a Bayesian model comparison for two constraint models and a random wiring null model. Overall, we find that flexibility under a homeostatically fixed total synaptic weight describes Kenyon cell connectivity better than other models, suggesting a principle shaping the apparently random structure of Kenyon cell wiring. Furthermore, we find evidence that larval Kenyon cells are more flexible earlier in development, suggesting a mechanism whereby neural circuits begin as flexible systems that develop into specialized computational circuits. High-throughput electron microscopic anatomical experiments have begun to yield detailed maps of neural circuit connectivity. Uncovering the principles that govern these circuit structures is a major challenge for systems neuroscience. Healthy neural circuits must be able to perform computational tasks while satisfying physiological constraints. Those constraints can restrict a neuron’s possible connectivity, and thus potentially restrict its computation. 
Here we examine simple models of constraints on total synaptic weights, and calculate the number of circuit configurations they allow: a simple measure of their computational flexibility. We propose probabilistic models of connectivity that weight the number of synaptic partners according to computational flexibility under a constraint and test them using recent wiring diagrams from a learning center, the mushroom body, in the fly brain. We compare constraints that fix or bound a neuron’s total connection strength to a simple random wiring null model. Of these models, the fixed total connection strength matched the overall connectivity best in mushroom bodies from both larval and adult flies. We also provide evidence suggesting that neural circuits are more flexible in early stages of development and lose this flexibility as they grow towards specialized function.
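The idea of counting circuit configurations under a fixed total synaptic weight can be made concrete in a discretised toy version: if an integer total weight W must be split among K partners, each receiving at least one unit, the number of configurations is the composition count C(W-1, K-1), which peaks at intermediate K. The discretisation and the resulting degree distribution are assumptions for illustration; the paper's constraint models and Bayesian comparison are more general.

```python
from math import comb

def n_configs_fixed_total(W, K):
    """Number of ways to split an integer total synaptic weight W among
    K partners, each receiving weight >= 1 (compositions of W into K parts)."""
    return comb(W - 1, K - 1) if 1 <= K <= W else 0

W = 20  # total synaptic weight budget, in arbitrary integer units
flex = {K: n_configs_fixed_total(W, K) for K in range(1, W + 1)}

# A degree distribution that weights each number of partners K by its
# flexibility (configuration count) under the fixed-total constraint:
total = sum(flex.values())
p = {K: v / total for K, v in flex.items()}

best_K = max(flex, key=flex.get)
print(best_K, flex[best_K])  # 10 92378: flexibility peaks at intermediate degree
```

Under this toy constraint, very low and very high degrees allow few configurations, so a flexibility-favouring wiring rule prefers intermediate numbers of synaptic partners.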
PLOS Computational Biology, Volume 16; doi:10.1371/journal.pcbi.1008076
We consider how a signalling system can act as an information hub by multiplexing information arising from multiple signals. We formally define multiplexing and mathematically characterise which systems can multiplex and how well they can do it. While the results of this paper are theoretical, to motivate the idea of multiplexing we provide experimental evidence that tentatively suggests that the NF-κB transcription factor can multiplex information about changes in multiple signals. We believe that our theoretical results may resolve the apparent paradox of how a system like NF-κB, which regulates cell fate and inflammatory signalling in response to diverse stimuli, can appear to have the low information carrying capacity suggested by recent studies on scalar signals. In carrying out our study, we introduce new methods for the analysis of large, nonlinear stochastic dynamic models, develop computational algorithms that facilitate the calculation of fundamental constructs of information theory such as Kullback–Leibler divergences and sensitivity matrices, and link these methods to new theory about multiplexing information. We show that many current models, such as those of the NF-κB system, cannot multiplex effectively, and provide models that overcome this limitation using post-transcriptional modifications. Cells use signalling systems to pass on information arising from their ever-changing environment to their processing units. These biochemical networks regulate the transmission of multiple signals within the noisy and complex cellular environment, controlling whether to turn on or off processes of cell defence, death, division, and others. The question of how they actually achieve this becomes particularly critical given that many diseases occur when signalling systems malfunction. In this paper, we develop methodology and computational tools for simulating, measuring and analysing the ability of signalling systems to transmit multi-dimensional signals.
We specifically focus on the capacity of signalling systems to simultaneously transmit multiple signals, such as temperature changes, presence and concentration of cytokines, viral and bacterial pathogens or drugs, through a single noisy, dynamic signalling system. We argue that a signalling system can act as an information hub, sending information in a multiplexed fashion rather similar to the way in which telecommunications networks send multiple signals over a shared medium by combining them into one.
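One of the information-theoretic constructs mentioned above, the Kullback–Leibler divergence, has a closed form when the response distributions are approximated as multivariate Gaussians. The sketch below uses made-up response statistics, not the paper's NF-κB models, to quantify how distinguishable a system's responses to two input combinations are.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """KL(N0 || N1) between multivariate Gaussians, in nats."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Hypothetical 2-D response statistics of a signalling system to two input
# combinations (e.g. cytokine A alone vs. A together with B).
mu0, cov0 = np.array([1.0, 0.5]), np.array([[0.3, 0.1], [0.1, 0.2]])
mu1, cov1 = np.array([1.4, 1.1]), np.array([[0.3, 0.1], [0.1, 0.2]])
print(kl_gaussian(mu0, cov0, mu1, cov1))  # ≈ 0.92 nats
```

A larger divergence between response distributions means the downstream machinery can better tell the two input combinations apart, which is the raw material for multiplexing.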
PLOS Computational Biology, Volume 16; doi:10.1371/journal.pcbi.1008037
Mass production and use of antibiotics has led to the rise of resistant bacteria, a problem possibly exacerbated by inappropriate and non-optimal application. Antibiotic treatment often follows fixed-dose regimens, with a standard dose of antibiotic administered equally spaced in time. But are such fixed-dose regimens optimal, or can alternative regimens be designed to increase efficacy? To date, few mathematical models have aimed to identify optimal treatments based on biological data of infections inside a living host. In addition, assumptions made to keep the mathematical models analytically tractable limit the search space of possible treatment regimens (e.g. to fixed-dose treatments). Here, we aimed to address these limitations by using experiments in a Galleria mellonella (insect) model of bacterial infection to create a fully parametrised mathematical model of a systemic Vibrio infection. We successfully validated this model with biological experiments, including treatments unseen by the mathematical model. Then, applying artificial intelligence, we used this model to determine optimal antibiotic dosage regimens that maximise host survival while minimising the total antibiotic used. As expected, host survival increased as the total quantity of antibiotic applied during the course of treatment increased. However, many of the optimal regimens consisted of a large initial ‘loading’ dose followed by incrementally decreasing doses (dose ‘tapering’). Moreover, application of the entire antibiotic in a single dose at the start of treatment was never optimal, except when the total quantity of antibiotic was very low. Importantly, the range of optimal regimens identified was broad enough to allow the antibiotic prescriber to choose a regimen based on additional criteria or preferences.
Our findings demonstrate the utility of an insect host to model antibiotic therapies in vivo, and the approach lays a foundation for future regimen optimisation for patient and societal benefits. Research into optimal antibiotic use to improve efficacy is far behind that of cancer care, where personalised treatment is common. The integration of mathematical models with biological observations offers hope to optimise antibiotic use, and in the present study an in vivo insect model of systemic Vibrio infection was used for the first time to determine critical model parameters for optimal antibiotic treatment. By this approach, the optimal regimens tended to consist of a large initial ‘loading’ dose followed by incrementally decreasing doses (dose ‘tapering’). The approach and findings of this study open a new avenue towards optimal application of our precious antibiotic arsenal and may lead to more effective treatment regimens for patients, thus reducing the health and economic burdens associated with bacterial infections. Importantly, it can be argued that until we understand how to use a single antibiotic optimally, it is unlikely we will identify optimal ways to use multiple antibiotics simultaneously.
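The regimen-search idea can be sketched with a toy pharmacokinetic/pharmacodynamic model: exponential bacterial growth, a saturating drug kill term, first-order drug elimination, and an exhaustive search over ways to split a fixed antibiotic budget across equally spaced dosing times. Every equation and parameter below is an illustrative assumption, not the parametrised Galleria mellonella model or the optimisation method used in the study.

```python
import itertools
import numpy as np

def final_load(doses, interval=8.0, dt=0.1, r=0.3, kmax=1.2, ec50=1.0, ke=0.25):
    """Bacterial load after a regimen: exponential growth at rate r,
    saturating drug kill, first-order drug elimination at rate ke."""
    b, c, t = 1.0, 0.0, 0.0                  # load, drug concentration, time
    times = [i * interval for i in range(len(doses))]
    steps = round(len(doses) * interval / dt)
    for _ in range(steps):
        if times and t >= times[0]:          # administer the next dose
            c += doses[0]
            doses, times = doses[1:], times[1:]
        kill = kmax * c / (ec50 + c)
        b *= np.exp((r - kill) * dt)
        c *= np.exp(-ke * dt)
        t += dt
    return b

# Exhaustively search all ways to split a fixed antibiotic budget of 8 units
# over 4 equally spaced dosing times; keep the regimen with the lowest load.
total, n_doses = 8, 4
regimens = [d for d in itertools.product(range(total + 1), repeat=n_doses)
            if sum(d) == total]
best = min(regimens, key=lambda d: final_load(list(d)))
print(best, final_load(list(best)))
```

With more dosing times or continuous dose levels the search space explodes, which is why the study turns to artificial-intelligence methods rather than enumeration.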
PLOS Computational Biology, Volume 16; doi:10.1371/journal.pcbi.1007686
The capability of cortical regions to flexibly sustain an “ignited” state of activity has been discussed in relation to conscious perception or hierarchical information processing. Here, we investigate how the intrinsic propensity of different regions to get ignited is determined by the specific topological organisation of the structural connectome. More specifically, we simulated the resting-state dynamics of mean-field whole-brain models and assessed how dynamic multi-stability and ignition differ between a reference model embedding a realistic human connectome, and alternative models based on a variety of randomised connectome ensembles. We found that the strength of global excitation needed to first trigger ignition in a subset of regions is substantially smaller for the model embedding the empirical human connectome. Furthermore, when increasing the strength of excitation, the propagation of ignition outside of this initial core (which is able to self-sustain its high activity) is far more gradual than for any of the randomised connectomes, allowing for graded control of the number of ignited regions. We explain both these assets in terms of the exceptional weighted core-shell organisation of the empirical connectome, speculating that this topology of human structural connectivity may be attuned to support enhanced ignition dynamics. The activity of the cortex in mammals constantly fluctuates in relation to cognitive tasks, but also during rest. The ability of brain regions to display ignition, a fast transition from low to high activity, is central for the emergence of conscious perception and decision making. Here, using a biophysically inspired model of cortical activity, we show how the structural organisation of the human cortex supports and constrains the rise of these ignition dynamics in spontaneous cortical activity. We found that the weighted core-shell organisation of the human connectome allows for a uniquely graded ignition.
This graded ignition implies smooth control of ignition in cortical areas, tuned by the global excitability. This smooth control cannot be replicated by surrogate connectomes, even though they conserve key local or global network properties. Indeed, ignition in the human cortex is first triggered in the strongest and most densely interconnected cortical areas (the “ignition core”), emerging at a lower global excitability value than in any surrogate connectome. Finally, we suggest developmental and evolutionary constraints on the mesoscale organisation that support these enhanced ignition dynamics in cortical activity.
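The core-first, graded ignition described above can be reproduced qualitatively in a toy rate model on a hand-built core-shell network. The connectome, sigmoid, and all parameter values below are illustrative assumptions, not the mean-field whole-brain model or empirical connectome used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def steady_rates(W, g, steps=3000, dt=0.05):
    """Settle a toy rate model r' = -r + f(g * W @ r + I) to its fixed point."""
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - 3.0)))  # sigmoid with threshold
    r = np.zeros(len(W))
    for _ in range(steps):
        r += dt * (-r + f(g * W @ r + 0.5))
    return r

# Toy "core-shell" connectome: 5 strongly inter-connected core regions
# embedded among 15 weakly connected shell regions (an illustrative
# stand-in for the weighted core of the empirical connectome).
n_core, n = 5, 20
W = rng.uniform(0.0, 0.2, size=(n, n))
W[:n_core, :n_core] = rng.uniform(0.8, 1.0, size=(n_core, n_core))
np.fill_diagonal(W, 0.0)

# Sweep global excitability: the core ignites first, the shell only later,
# giving graded control over the number of ignited regions.
for g in (0.5, 1.5, 3.0):
    r = steady_rates(W, g)
    print(f"g={g}: {np.sum(r > 0.7)} regions ignited")
```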
PLOS Computational Biology, Volume 16; doi:10.1371/journal.pcbi.1008104
High levels of heterozygosity present a unique genome assembly challenge and can adversely impact downstream analyses, yet are common in sequencing datasets obtained from non-model organisms. Here we show that by re-assembling a heterozygous dataset with varied parameters and different assembly algorithms, we are able to generate assemblies whose protein annotations are statistically enriched for specific gene ontology categories. While total assembly length was not significantly affected by the assembly methodologies tested, the assemblies generated varied widely in fragmentation level, and we show local assembly collapse or expansion underlying the enrichment or depletion of specific protein functional groups. We show that these statistically significant deviations in gene ontology groups can occur in seemingly high-quality assemblies, and result from difficult-to-detect local sequence expansions or contractions. Given the unpredictable interplay between assembly algorithm, parameters, and the heterozygosity of the biological sequence data, we highlight the need for better measures of assembly quality than the N50 value, including methods for assessing local expansion and collapse. In the genomic era, genomes must be reconstructed from fragments using computational methods, or assemblers. How do we know that a new genome assembly is correct? This is important because errors in assembly can lead to downstream problems in gene prediction, and these inaccurate results can contaminate databases, affecting later comparative studies. A particular challenge occurs when a diploid organism inherits two highly divergent genome copies from its parents. While it is widely appreciated that this type of data is difficult for assemblers to handle properly, here we show that the process is prone to more errors than previously appreciated.
Specifically, we document examples of regional expansion and collapse, affecting downstream gene prediction accuracy, but without changing the overall genome assembly size or other metrics of accuracy. Our results suggest that assembly evaluation methods should be altered to identify whether regional expansions and collapses are present in the genome assembly.
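One common way to look for the regional collapses and expansions discussed here is a read-depth scan: collapsed haplotypes attract roughly twice the expected coverage, while falsely duplicated (expanded) regions attract roughly half. The windowing heuristic and all thresholds below are assumptions for illustration, not the evaluation method used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def flag_copy_anomalies(depth, window=100, lo=0.6, hi=1.6):
    """Flag windows whose mean read depth deviates from the assembly-wide
    median: ~2x suggests a collapsed region, ~0.5x an expanded one."""
    n = len(depth) // window
    means = depth[:n * window].reshape(n, window).mean(axis=1)
    med = np.median(means)
    return np.where(means > hi * med)[0], np.where(means < lo * med)[0]

# Simulated depth track over a 10 kb contig: ~30x baseline, one collapsed
# region at ~60x (positions 2000-2499), one expanded region at ~15x
# (positions 7000-7499).
depth = rng.poisson(30, size=10_000).astype(float)
depth[2000:2500] = rng.poisson(60, size=500)
depth[7000:7500] = rng.poisson(15, size=500)

collapsed, expanded = flag_copy_anomalies(depth)
print(collapsed, expanded)
```

Unlike the N50 value, a scan of this kind is sensitive to exactly the local copy-number errors that leave overall assembly size unchanged.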
PLOS Computational Biology, Volume 16; doi:10.1371/journal.pcbi.1008032
PLOS Computational Biology, Volume 16; doi:10.1371/journal.pcbi.1008075
We previously proposed, on theoretical grounds, that the cerebellum must regulate the dimensionality of its neuronal activity during motor learning and control to cope with the low firing frequency of inferior olive neurons, which form one of the two major inputs to the cerebellar cortex. Such dimensionality regulation is possible via modulation of the electrical coupling through gap junctions between inferior olive neurons by inhibitory GABAergic synapses. In addition, we previously showed in simulations that intermediate coupling strengths induce chaotic firing of inferior olive neurons and increase their information carrying capacity. However, there is no in vivo experimental data supporting these two theoretical predictions. Here, we computed the levels of synchrony, dimensionality, and chaos of the inferior olive code by analyzing in vivo recordings of Purkinje cell complex spike activity in three different coupling conditions: carbenoxolone (a gap junction blocker), control, and picrotoxin (a GABA-A receptor antagonist). To examine the effect of electrical coupling on dimensionality and chaotic dynamics, we first determined the physiological range of effective coupling strengths between inferior olive neurons in the three conditions using a combination of a biophysical network model of the inferior olive and a novel Bayesian model averaging approach. We found that effective coupling co-varied with synchrony and was inversely related to the dimensionality of inferior olive firing dynamics, as measured via a principal component analysis of the spike trains in each condition. Furthermore, for both the model and the data, we found an inverted U-shaped relationship between coupling strengths and complexity entropy, a measure of chaos for spiking neural data.
These results are consistent with our hypothesis that electrical coupling regulates the dimensionality and the complexity of firing in inferior olive neurons in order to optimize both motor learning and control of high-dimensional motor systems by the cerebellum. Computational theory suggests that the cerebellum must decrease the dimensionality of its neuronal activity to learn and control high-dimensional motor systems effectively, while being constrained by the low firing frequency of inferior olive neurons, one of the two major sources of input signals to the cerebellum. We previously proposed that the cerebellum adaptively controls the dimensionality of inferior olive firing by adjusting the level of synchrony, and that such control is made possible by modulating the electrical coupling strength between inferior olive neurons. Here, we developed a novel method that uses a biophysical model of the inferior olive to accurately estimate the effective coupling strengths between inferior olive neurons from in vivo recordings of spike activity in three different coupling conditions. We found that high coupling strengths induce synchronous firing and decrease the dimensionality of inferior olive firing dynamics. In contrast, intermediate coupling strengths lead to chaotic firing and increase the dimensionality of the firing dynamics. Thus, electrical coupling is a feasible mechanism to control the dimensionality and chaotic firing of inferior olive neurons. In sum, our results provide insights into possible mechanisms underlying cerebellar function and, in general, a biologically plausible framework to control the dimensionality of neural coding.
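One widely used measure in the "complexity entropy" family is the normalised permutation entropy of a spike-rate or inter-spike-interval series; the paper's exact measure may differ, so the sketch below is offered only as an illustration of the idea of quantifying regularity versus chaos in spiking data.

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def permutation_entropy(x, order=3):
    """Normalised permutation entropy of a 1-D series:
    0 for perfectly regular dynamics, 1 for fully irregular ones."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        counts[tuple(np.argsort(x[i:i + order]))] += 1
    n = len(x) - order + 1
    h = -sum((c / n) * log(c / n) for c in counts.values() if c)
    return h / log(factorial(order))

rng = np.random.default_rng(5)
regular = np.sin(np.linspace(0, 40 * np.pi, 4000))  # periodic, synchrony-like
irregular = rng.standard_normal(4000)               # noise-like, chaos-like
print(permutation_entropy(regular), permutation_entropy(irregular))
```

An inverted U-shaped dependence of such a measure on coupling strength is exactly the kind of relationship the abstract reports between coupling and complexity entropy.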