Results: 37

(searched for: doi:10.1088/0305-4470/39/50/005)
, Jeffrey A. Hostetler, Bradley M. Stith, Julien Martin
Published: 21 June 2021
Scientific Reports, Volume 11, pp 1-12; https://doi.org/10.1038/s41598-021-92437-z

Abstract:
Imperfect detection is an important problem when counting wildlife, but new technologies such as unmanned aerial systems (UAS) can help overcome this obstacle. We used data collected by a UAS and a Bayesian closed capture-mark-recapture model to estimate abundance and distribution while accounting for imperfect detection of aggregated Florida manatees (Trichechus manatus latirostris) at thermal refuges, to assess use of current and new warm-water sources in winter. Our UAS hovered for 10 min and recorded 4K video over sites in Collier County, FL. Open-source software was used to create recapture histories for 10- and 6-min time periods. Mean estimates of probability of detection for 1-min intervals at each canal varied by survey and ranged between 0.05 and 0.92. Overall detection probability for sites varied between 0.62 and 1.00 across surveys and lengths of video (6 and 10 min). Abundance varied by survey and location, and estimates indicated that distribution changed over time, with use of the novel warm-water source increasing over time. The highest cumulative estimate occurred in the coldest winter, 2018 (N = 158, CI 141–190). The methods used here reduced survey costs, increased safety, and produced rigorous abundance estimates at aggregation sites previously too difficult to monitor.
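The paper fits a Bayesian closed capture-mark-recapture model; as a minimal classical counterpart, the two-occasion Lincoln-Petersen (Chapman) estimator conveys the same idea of inferring abundance from resightings of marked individuals. The resight counts below are hypothetical, not from the study.

```python
# Minimal Lincoln-Petersen abundance sketch for a two-occasion
# closed-population study (Chapman's bias-corrected form); a classical
# stand-in for the paper's Bayesian model. All counts are hypothetical.

def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected abundance estimator.

    n1: animals marked on occasion 1
    n2: animals sighted on occasion 2
    m2: marked animals among those sighted on occasion 2
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical resight data from two video frames:
n_hat = lincoln_petersen(n1=40, n2=35, m2=20)
print(round(n_hat, 1))
```

The Bayesian model in the paper additionally pools information across 1-min intervals and propagates detection uncertainty, which this closed-form estimator cannot do.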
Robert Hartwig, , Joseph Qiu
The Geneva Risk and Insurance Review, Volume 45, pp 134-170; https://doi.org/10.1057/s10713-020-00055-y

The publisher has not yet granted permission to display this abstract.
, Philipp Hennig, Hans‐Peter Wieser, Mark Bangert
Published: 1 August 2020
Medical Physics, Volume 47, pp 5260-5273; https://doi.org/10.1002/mp.14414

Abstract:
Purpose: Radiotherapy, especially with charged particles, is sensitive to executional and preparational uncertainties that propagate to uncertainty in dose and plan quality indicators, e.g., dose-volume histograms (DVHs). Current approaches to quantify and mitigate such uncertainties rely on explicitly computed error scenarios and are thus subject to statistical uncertainty and limitations regarding the underlying uncertainty model. Here we present an alternative, analytical method to approximate moments, in particular the expectation value and (co)variance, of the probability distribution of DVH points, and evaluate its accuracy on patient data. Methods: We use Analytical Probabilistic Modeling (APM) to derive moments of the probability distribution over individual DVH points based on the probability distribution over dose. By using the computed moments to parameterize distinct probability distributions over DVH points (here normal or beta distributions), not only the moments but also percentiles, i.e., α-DVHs, are computed. The model is subsequently evaluated on three patient cases (intracranial, paraspinal, prostate) in 30-fraction and single-fraction scenarios by assuming the dose to follow a multivariate normal distribution, whose moments are computed in closed form with APM. The results are compared to a benchmark based on discrete random sampling. Results: The evaluation of the new probabilistic model on the three patient cases against a sampling benchmark proves its correctness under perfect assumptions as well as good agreement in realistic conditions. More precisely, ca. 90% of all computed expected DVH points and their standard deviations agree within 1% volume with their empirical counterparts from sampling computations, for both fractionated and single-fraction treatments.
When computing α-DVHs, the assumption of a beta distribution achieved better agreement with empirical percentiles than the assumption of a normal distribution: while in both cases probabilities locally showed large deviations (up to ±0.2), the respective α-DVHs for α = {0.05, 0.5, 0.95} showed only small deviations in volume (up to ±5% volume for a normal distribution, and up to 2% for a beta distribution). A previously published model from the literature, included for comparison, exhibited substantially larger deviations. Conclusions: With APM we could derive a mathematically exact description of moments of probability distributions over DVH points given a probability distribution over dose. The model generalizes previous attempts and performs well for both choices of probability distribution, i.e., normal or beta, over DVH points.
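The parameterization step described above fits a beta distribution to each DVH point from its computed mean and variance. A method-of-moments sketch of that step (with hypothetical moment values, not ones from the paper):

```python
# Method-of-moments fit of a beta distribution to a DVH point, as in the
# paper's parameterization step. The moment values are hypothetical.

def beta_from_moments(mean, var):
    """Return shape parameters (a, b) of a beta distribution with the
    given mean and variance; requires 0 < var < mean * (1 - mean)."""
    assert 0 < mean < 1 and 0 < var < mean * (1 - mean)
    common = mean * (1 - mean) / var - 1
    return mean * common, (1 - mean) * common

a, b = beta_from_moments(mean=0.8, var=0.01)

# Sanity check: recover the moments from (a, b).
m = a / (a + b)
v = a * b / ((a + b) ** 2 * (a + b + 1))
print(a, b, m, v)
```

Percentiles of the fitted beta (the α-DVHs) would then come from its quantile function, e.g. `scipy.stats.beta.ppf`.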
Shin-Chieng Ngo, Allon G. Percus, , Kristina Lerman
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, Volume 476; https://doi.org/10.1098/rspa.2019.0772

Abstract:
Network topologies can be highly non-trivial, due to the complex underlying behaviours that form them. While past research has shown that some processes on networks may be characterized by local statistics describing nodes and their neighbours, such as degree assortativity, these quantities fail to capture important sources of variation in network structure. We define a property called transsortativity that describes correlations among a node’s neighbours. Transsortativity can be systematically varied, independently of the network’s degree distribution and assortativity. Moreover, it can significantly impact the spread of contagions as well as the perceptions of neighbours, known as the majority illusion. Our work improves our ability to create and analyse more realistic models of complex networks.
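Transsortativity generalizes degree assortativity, the Pearson correlation of degrees at the two endpoints of an edge. A sketch of the plain degree-assortativity computation on a toy graph (a star, which is maximally disassortative):

```python
# Degree assortativity: Pearson correlation of the degrees found at the
# two endpoints of each edge. This is the local statistic that the
# paper's transsortativity (correlation among a node's neighbours)
# generalizes. The toy graph is illustrative only.
from collections import defaultdict
from math import sqrt

def assortativity(edges):
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Each undirected edge contributes both (du, dv) and (dv, du).
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# A star graph: the hub connects only to leaves, so high degree always
# pairs with low degree and the correlation is -1.
r = assortativity([(0, 1), (0, 2), (0, 3)])
print(r)
```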
, Cody Schroeder, Pat Jackson
Journal of Agricultural, Biological and Environmental Statistics, Volume 25, pp 133-147; https://doi.org/10.1007/s13253-020-00384-5

Abstract:
Methods for estimating juvenile survival of wildlife populations often rely on intensive data collection efforts to capture and uniquely mark individual juveniles and observe them through time. Capturing juveniles in a time frame sufficient to estimate survival can be challenging due to narrow and stochastic windows of opportunity. For many animals, juvenile survival depends on postnatal parental care (e.g., lactating mammals). When a marked adult gives birth to, and provides care for, juvenile animals, investigators can use the adult mark to locate and count unmarked juveniles. Our objective was to leverage the dependency between juveniles and adults and develop a framework for estimating reproductive rates, juvenile survival, and detection probability using repeated observations of marked adult animals with known fates, but imperfect detection probability, and unmarked juveniles with unknown fates. Our methods assume population closure for adults and that no juvenile births or adoptions take place after monitoring has begun. We conducted simulations to evaluate the methods and then developed a field study to examine them using real data from a population of mule deer in a remote area of central Nevada. Using simulations, we found that our methods recovered the true values used to generate the data well. Estimates of juvenile survival rates from our field study were 0.96 (95% CRI 0.83–0.99) for approximately 32-day periods between late June and late August. The methods we describe show promise for many applications and study systems with similar data types, and they can be easily extended to unmanned aerial platforms and cameras that are already commercially available for the types of images we used. Supplementary materials accompanying this paper appear online.
Masato Hisakado, Shintaro Mori
Published: 5 July 2019
Agent-Based Social Systems pp 65-79; https://doi.org/10.1007/978-981-10-7194-2_5

The publisher has not yet granted permission to display this abstract.
Masato Hisakado, Shintaro Mori
Published: 5 July 2019
Agent-Based Social Systems pp 119-139; https://doi.org/10.1007/978-981-10-7194-2_8

The publisher has not yet granted permission to display this abstract.
, Ben James, Leon Lagnado,
Published: 13 June 2019
Abstract:
The inherent noise of neural systems makes it difficult to construct models that accurately capture experimental measurements of their activity. While much research has been done on how to efficiently model neural activity with descriptive models such as linear-nonlinear (LN) models, Bayesian inference for mechanistic models has received considerably less attention. One reason for this is that these models typically lead to intractable likelihoods and thus make parameter inference difficult. Here, we develop an approximate Bayesian inference scheme for a fully stochastic, biophysically inspired model of glutamate release at the ribbon synapse, a highly specialized synapse found in different sensory systems. The model translates known structural features of the ribbon synapse into a set of stochastically coupled equations. We approximate the posterior distributions by updating a parametric prior distribution via Bayesian updating rules and show that model parameters can be efficiently estimated for synthetic and experimental data from in vivo two-photon experiments in the zebrafish retina. We also find that the model captures complex properties of synaptic release, such as temporal precision, and outperforms a standard GLM. Our framework provides a viable path forward for linking mechanistic models of neural activity to measured data.
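The core difficulty named above, an intractable likelihood, is what simulation-based inference sidesteps. The paper updates a parametric prior; a simpler relative of the same idea is rejection ABC, sketched here on a toy binomial release-count model (not the ribbon-synapse model, and with all numbers hypothetical).

```python
# Generic rejection-ABC sketch: draw a parameter from the prior,
# simulate data, and keep the parameter if the simulation lands close
# to the observation. Toy binomial model, hypothetical settings; the
# paper's scheme updates a parametric prior instead.
import random

rng = random.Random(0)
n_events, p_true = 100, 0.3
# "Observed" data: number of release events out of n_events trials.
observed = sum(rng.random() < p_true for _ in range(n_events))

accepted = []
for _ in range(20000):
    p = rng.random()                                  # uniform prior on p
    simulated = sum(rng.random() < p for _ in range(n_events))
    if abs(simulated - observed) <= 2:                # tolerance band
        accepted.append(p)

posterior_mean = sum(accepted) / len(accepted)
print(posterior_mean)
```

Tightening the tolerance trades acceptance rate for posterior accuracy; parametric updating schemes like the paper's avoid that trade-off.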
Physical Review E, Volume 99; https://doi.org/10.1103/physreve.99.052307

Abstract:
In elections, the vote shares or turnout rates show a strong spatial correlation. The logarithmic decay with distance suggests that a two-dimensional (2D) noisy diffusive equation describes the system. Based on the study of U.S. presidential elections data, it was determined that the fluctuations of vote shares also exhibit a strong and long-range spatial correlation. Previously, it was considered difficult to induce strong and long-range spatial correlation of the vote shares without breaking the empirically observed narrow distribution. We demonstrate that a voter model on networks shows such a behavior. In the model, there are many voters in a node who are affected by the agents in the node and by the agents in the linked nodes. A multivariate Wright-Fisher diffusion equation for the joint probability density of the vote shares is derived. The stationary distribution is a multivariate generalization of the beta distribution. In addition, we also estimate the equilibrium values and the covariance matrix of the vote shares and obtain a correspondence with a multivariate normal distribution. This approach largely simplifies the calibration of the parameters in the modeling of elections.
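The model above couples many nodes through a multivariate Wright-Fisher diffusion. A one-node sketch conveys the mechanism: binomial resampling of a vote share with drift toward an external mean, whose stationary law is a beta distribution. All parameter values are hypothetical, and this omits the network coupling entirely.

```python
# Single-node sketch of Wright-Fisher-type vote-share dynamics:
# each step the share is pulled toward an external mean and then
# resampled binomially. Parameters are hypothetical; the paper's model
# couples many such nodes over a network.
import random

random.seed(0)
N, u, target = 100, 0.1, 0.5    # voters per node, mixing rate, external mean
x = 0.9                          # initial vote share
shares = []
for _ in range(20000):
    p = (1 - u) * x + u * target                     # drift toward the mean
    k = sum(random.random() < p for _ in range(N))   # binomial resampling
    x = k / N
    shares.append(x)

mean_share = sum(shares) / len(shares)
print(mean_share)
```

The long-run average share settles near the external mean, while the resampling noise keeps a stationary spread, the one-dimensional analogue of the narrow beta-distributed vote shares in the paper.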
Journal of Applied Statistics, Volume 46, pp 364-384; https://doi.org/10.1080/02664763.2018.1491535

Abstract:
A network cluster is defined as a set of nodes with ‘strong’ within group ties and ‘weak’ between group ties. Most clustering methods focus on finding groups of ‘densely connected’ nodes, where the dyad (or tie between two nodes) serves as the building block for forming clusters. However, since the unweighted dyad cannot distinguish strong relationships from weak ones, it then seems reasonable to consider an alternative building block, i.e. one involving more than two nodes. In the simplest case, one can consider the triad (or three nodes), where the fully connected triad represents the basic unit of transitivity in an undirected network. In this effort we propose a clustering framework for finding highly transitive subgraphs in an undirected/unweighted network, where the fully connected triad (or triangle configuration) is used as the building block for forming clusters. We apply our methodology to four real networks with encouraging results. Monte Carlo simulation results suggest that, on average, the proposed method yields good clustering performance on synthetic benchmark graphs, relative to other popular methods.
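The building block proposed above is the fully connected triad. Enumerating those triads in an undirected graph is straightforward: a triangle is an edge plus a common neighbour of its endpoints.

```python
# Enumerate fully connected triads (triangles) in an undirected graph,
# the clustering building block proposed in the paper. Toy edge list
# for illustration.

def triangles(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    found = set()
    for u, v in edges:
        for w in adj[u] & adj[v]:    # common neighbours close the triad
            found.add(tuple(sorted((u, v, w))))
    return sorted(found)

# Two triangles sharing the edge (1, 2):
tris = triangles([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)])
print(tris)
```

A triangle-based clustering method would then merge triads that share edges, rather than growing clusters dyad by dyad.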
, Fumiaki Sano,
Journal of the Physical Society of Japan, Volume 87; https://doi.org/10.7566/jpsj.87.024002

Abstract:
This study discusses choice behavior using a voting model in which voters can obtain information from a finite number of previous r voters. Voters vote for a candidate with a probability proportional to the previous vote ratio, which is visible to the voters. We obtain the Pitman sampling formula as the equilibrium distribution of r votes. We present the model as a process of posting on a bulletin board system, 2ch.net, where users can choose one of many threads to create a post. We explore how this choice depends on the last r posts and the distribution of these last r posts across threads. We conclude that the posting process is described by our voting model with analog herders for a small r, which might correspond to the time horizon of users’ responses.
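The equilibrium distribution named above, the Pitman sampling formula, is generated by the two-parameter Chinese restaurant process: each new voter joins an existing thread in proportion to its (discounted) vote count or starts a new one. A sketch with hypothetical parameter values:

```python
# Sequential sampler for the two-parameter Pitman sampling formula
# (Chinese restaurant process with discount d and strength theta):
# voters choose threads in proportion to past votes, as in the paper's
# bulletin-board interpretation. Parameter values are hypothetical.
import random

def pitman_yor_partition(n, d=0.5, theta=1.0, seed=1):
    rng = random.Random(seed)
    sizes = []                        # votes per thread
    for i in range(n):                # i = number of votes cast so far
        # Existing thread j is chosen w.p. (sizes[j] - d) / (i + theta),
        # a new thread w.p. (theta + d * len(sizes)) / (i + theta).
        r = rng.uniform(0, i + theta)
        for j, s in enumerate(sizes):
            r -= s - d
            if r < 0:
                sizes[j] += 1
                break
        else:
            sizes.append(1)
    return sizes

sizes = pitman_yor_partition(200)
print(sizes)
```

With discount d > 0 the number of occupied threads keeps growing (roughly like n^d), matching the heavy-tailed thread-size distributions the formula describes.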
Physical Review E, Volume 94; https://doi.org/10.1103/physreve.94.052301

Abstract:
We study a simple model for social learning agents in a restless multiarmed bandit. There are N agents, and the bandit has M good arms that change to bad with probability q_c/N. If the agents do not know a good arm, they look for it by random search (with success probability q_I) or copy the information of other agents' good arms (with success probability q_O), with probabilities 1 − p or p, respectively. The distribution of the agents over the M good arms obeys the Yule distribution with power-law exponent 1 + γ in the limit N, M → ∞, where γ = 1 + (1 − p)q_I/(p q_O). The system shows a phase transition at p_c = q_I/(q_I + q_O). For p < p_c (p > p_c), the variance of N_1 per agent is finite (diverges as ∝ N^{2−γ} with N). There is a threshold value N_s for the system size that scales as ln N_s ∝ 1/(γ − 1). The expected value of the number of agents with a good arm, N_1, increases with p for N > N_s. For p > p_c and N …
, Jeffrey C. Jolley, Gregory S. Silver, Henry Yuen, Timothy A. Whitesel
Transactions of the American Fisheries Society, Volume 145, pp 1006-1017; https://doi.org/10.1080/00028487.2016.1185034

Abstract:
Some lamprey species are in decline, and assessments of local abundance could benefit research and conservation. In wadeable streams, larval lampreys are collected by using specialized backpack electrofishing techniques, although catchability has not been sufficiently evaluated. We assessed removal models for estimating the local abundance of larval lampreys in experimental net-pen enclosures within a wadeable stream. Known numbers of larvae were seeded at densities of 4–130 larvae/m² into 1-m² enclosures that were lined with fine sand and placed into Cedar Creek, Washington (Columbia River basin). Depletion sampling in each enclosure (n = 69) was conducted by three to five electrofishing passes, and abundance was estimated by six removal models that assumed different catchability functions. Catchability averaged 0.28. For the standard removal model, which assumed that catchability varied independently by enclosure but not by pass, the 95% highest posterior density credible intervals (95% HPD-CIs) were very large relative to the abundance estimates. Models assuming that catchability was either equal or a random factor among all enclosures and passes generally produced accurate (mean bias = −0.04) estimates of abundance, and 95% HPD-CIs were much smaller. Based on our data set, the expected bias of abundance estimates for 80% of simulations was less than 20% if five passes were completed from at least four randomly selected quadrats and if catchability was assumed to be a random factor. Additional sampling may be needed at low lamprey densities (especially <4 larvae/m²). Our results suggest that local abundance of larval lampreys in wadeable streams can be effectively estimated by depletion sampling at multiple 1-m² quadrats and by use of a hierarchical removal model. Received June 12, 2015; accepted April 20, 2016; published online August 5, 2016
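The depletion logic above (each pass removes a constant fraction of the remaining animals) has a simple closed-form special case: the classic two-pass removal (Seber) estimator. It is a stand-in for intuition only, not the hierarchical Bayesian removal models evaluated in the paper, and the catch numbers are hypothetical.

```python
# Classic two-pass removal (Seber) estimator under constant catchability:
# a simple counterpart to the paper's hierarchical removal models.
# Catch numbers are hypothetical.

def two_pass_removal(c1, c2):
    """First- and second-pass catches; returns (abundance estimate,
    catchability estimate). Requires a declining catch, c1 > c2."""
    assert c1 > c2, "removal estimator needs a declining catch"
    n_hat = c1 * c1 / (c1 - c2)
    p_hat = (c1 - c2) / c1
    return n_hat, p_hat

n_hat, p_hat = two_pass_removal(c1=40, c2=20)
print(n_hat, p_hat)
```

The hierarchical models in the paper generalize this by letting catchability vary by enclosure or pass and by sharing information across quadrats.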
Physica A: Statistical Mechanics and its Applications, Volume 450, pp 570-584; https://doi.org/10.1016/j.physa.2015.12.090

The publisher has not yet granted permission to display this abstract.
Masafumi Hino, Yosuke Irie, Masato Hisakado, Taiki Takahashi, Shintaro Mori
Journal of the Physical Society of Japan, Volume 85; https://doi.org/10.7566/jpsj.85.034002

Abstract:
We propose a method of detecting a phase transition in a generalized Pólya urn in an information cascade experiment. The method is based on the asymptotic behavior of the correlation C(t) between the first subject's choice and the (t+1)-th subject's choice, the limit value of which, \(c \equiv \lim_{t \to \infty }C(t)\), is the order parameter of the phase transition. To verify the method, we perform a voting experiment using two-choice questions. An urn X is chosen at random from two urns A and B, which contain red and blue balls in different configurations. Subjects sequentially guess whether X is A or B using information about the prior subjects' choices and the color of a ball randomly drawn from X. The color tells the subject which urn X is with probability q. We set \(q \in \{ 5/9,6/9,7/9,8/9\} \) by controlling the configurations of red and blue balls in A and B. The (average) lengths of the subject sequences are 63, 63, 54.0, and 60.5 for \(q \in \{ 5/9,6/9,7/9,8/9\} \), respectively. We describe the sequential voting process by a nonlinear Pólya urn model. The model suggests the possibility of a phase transition when q changes. We show that c > 0 (= 0) for \(q = 5/9,6/9\) \((7/9,8/9)\) and detect the phase transition using the proposed method.
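The quantity C(t) used above can be estimated from repeated sequences by a plain sample correlation. As an illustration on a model where the answer is known, a standard (linear) Pólya urn with one red and one blue ball has C(t) = 1/3 for every t by exchangeability; this is not the experiment's nonlinear urn, just a check of the estimation procedure.

```python
# Monte Carlo estimate of C(t) = Cov[X(1), X(t+1)] / Var[X(1)] for a
# standard Polya urn starting with one red and one blue ball. For this
# linear urn C(t) = 1/3 exactly for all t; the paper's order parameter
# is the t -> infinity limit of the same quantity for a nonlinear urn.
import random

def draw_sequence(length, rng):
    red, blue = 1, 1
    seq = []
    for _ in range(length):
        x = 1 if rng.uniform(0, red + blue) < red else 0
        red += x
        blue += 1 - x
        seq.append(x)
    return seq

rng = random.Random(42)
trials = [draw_sequence(11, rng) for _ in range(5000)]
t = 10
x1 = [s[0] for s in trials]
xt = [s[t] for s in trials]
m1, mt = sum(x1) / len(x1), sum(xt) / len(xt)
cov = sum((a - m1) * (b - mt) for a, b in zip(x1, xt)) / len(x1)
var1 = sum((a - m1) ** 2 for a in x1) / len(x1)
c_t = cov / var1
print(c_t)
```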
Published: 10 November 2015
Physical Review E, Volume 92; https://doi.org/10.1103/physreve.92.052112

Abstract:
We describe a universality class for the transitions of a generalized Pólya urn by studying the asymptotic behavior of the normalized correlation function C(t) using finite-size scaling analysis. X(1), X(2), … are the successive additions of a red (blue) ball [X(t) = 1 (0)] at stage t, and C(t) ≡ Cov[X(1), X(t+1)]/Var[X(1)]. Furthermore, z(t) = Σ_{s=1}^{t} X(s)/t represents the successive proportions of red balls in an urn to which, at the (t+1)-th stage, a red ball is added [X(t+1) = 1] with probability q[z(t)] = (tanh{J[2z(t) − 1] + h} + 1)/2, J ≥ 0, and a blue ball is added [X(t+1) = 0] with probability 1 − q[z(t)]. A boundary [J_c(h), h] exists in the (J, h) plane between a region with one stable fixed point and another region with two stable fixed points for q(z). C(t) ∼ c + c′·t^{l−1}, with c = 0 (> 0) for J < J_c (J > J_c), where l is the (larger) value of the slope(s) of q(z) at the stable fixed point(s). On the boundary J = J_c(h), C(t) ≃ c + c′·(ln t)^{−α′}, with c = 0 (c > 0) and α′ = 1/2 (1) for h = 0 (h ≠ 0). The system shows a continuous phase transition for h = 0, and C(t) behaves as C(t) ≃ (ln t)^{−α′} g[(1 − l) ln t] with a universal function g(x) and a length scale 1/(1 − l) with respect to ln t. The scaling relation β = ν_∥·α′ holds with β = 1/2 and ν_∥ = 1.
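The fixed points that organize the transition are the solutions of q(z) = z. Below the boundary J_c(h) there is a single stable fixed point, which plain fixed-point iteration locates; the values of J and h below are chosen for illustration only.

```python
# Locate the stable fixed point z* of q(z) = (tanh(J(2z - 1) + h) + 1)/2
# by fixed-point iteration. For J below the boundary J_c(h) the map has
# a single stable fixed point; J and h here are illustrative values.
from math import tanh

def q(z, J, h):
    return (tanh(J * (2 * z - 1) + h) + 1) / 2

J, h = 0.5, 0.0    # weak interaction, no bias: one fixed point at z = 1/2
z = 0.9
for _ in range(200):
    z = q(z, J, h)
print(z)
```

At this fixed point the slope is q'(1/2) = J < 1, so the iteration contracts; for J above J_c the symmetric point destabilizes and two stable fixed points appear.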
, Rebecca E. Whitlock, Tommi A. Perälä, Paul A. Blomstedt, , Margarita María Rincón, Anna K. Kuparinen, Henni P. Pulkkinen,
ICES Journal of Marine Science, Volume 72, pp 2209-2222; https://doi.org/10.1093/icesjms/fsv117

Abstract:
This study presents a state-space modelling framework for the purposes of stock assessment. The stochastic population dynamics build on the notion of correlated survival and capture events among individuals. The correlation is thought to arise as a combination of schooling behaviour, a spatially patchy environment, and common but unobserved environmental factors affecting all the individuals. The population dynamics model isolates the key biological processes, so that they are not condensed into one parameter but are kept separate. This approach is chosen to aid the inclusion of biological knowledge from sources other than the assessment data at hand. The model can be tailored to each case by choosing appropriate models for the biological processes. Uncertainty about the model parameters and about the appropriate model structures is then described using prior distributions. Different combinations of, for example, age, size, phenotype, life stage, species, and spatial location can be used to structure the population. To update the prior knowledge, the model can be fitted to data by defining appropriate observation models. Much like the biological parameters, the observation models must also be tailored to fit each individual case.
, Julien Martin, Holly H. Edwards
Published: 1 July 2013
Ecology, Volume 94, pp 1472-1478; https://doi.org/10.1890/12-1365.1

Abstract:
The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
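The hurdle component described above mixes a point mass at zero (unoccupied sites) with a zero-truncated count distribution at occupied sites. A sketch of that abundance layer with a zero-truncated Poisson, using hypothetical parameter values (the detection layer of the paper's model is omitted):

```python
# Hurdle model of abundance: probability (1 - psi) of an unoccupied
# site, and a zero-truncated Poisson for occupied sites. Parameter
# values are hypothetical; the paper pairs this with a beta-binomial
# detection layer.
from math import exp, factorial

def hurdle_poisson_pmf(k, psi, lam):
    """psi: probability the site is occupied; lam: Poisson mean."""
    if k == 0:
        return 1 - psi
    # zero-truncated Poisson for k >= 1
    return psi * (lam ** k * exp(-lam) / factorial(k)) / (1 - exp(-lam))

psi, lam = 0.3, 4.0
total = sum(hurdle_poisson_pmf(k, psi, lam) for k in range(60))
print(total)
```

The hurdle inflates zeros beyond what a plain Poisson or negative binomial allows, which is exactly the rarity phenomenon the paper describes.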
, Michael H. Faber
Structure and Infrastructure Engineering, Volume 8, pp 497-506; https://doi.org/10.1080/15732479.2010.539068

Abstract:
In the present article, a methodology for the consideration of common cause effects in portfolio analysis is shown. A hierarchical approach is proposed which is especially useful for the modelling of large portfolios: it allows the performance of each single asset in the portfolio to be evaluated conditionally independently. In many cases, the performance of assets can be described using two states, namely failure and survival. For such assets, the analysis can be performed almost independently of the size of the portfolio by using a binomial model. Particular focus is placed on the different sources of common cause effects; their influence on the loss distribution in portfolios is shown and analysed. The results indicate a distinct influence of such effects, especially in portfolios where the consequences are described by a nonlinear function. Common cause effects do not only arise from common load situations in the same geographical regions but also from common models used for the design of infrastructure facilities. In a general sense, dependencies in a portfolio can be considered as representing the system characteristics of the portfolio loss exceedance function. The identification of the main indicators influencing the loss exceedance curve for portfolios can help to optimise decisions for loss reduction, to understand system mechanisms, and to identify and implement effective risk-reducing measures. The present article aims to model, assess and discuss the effects influencing the loss exceedance function for a portfolio of assets with dependencies. A hierarchical approach is presented which facilitates the consideration and determination of the dependency structure in portfolios. The different effects on the portfolio loss exceedance function are investigated and discussed, and illustrative examples are given.
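The hierarchical idea above, conditional independence given a common cause, makes the portfolio loss count a mixture of binomials over the common-cause states. A sketch with two hypothetical states (benign vs. stressed) and made-up probabilities:

```python
# Hierarchical portfolio sketch: conditional on the common-cause state,
# the n two-state assets fail independently, so the loss count is
# binomial; marginalizing over states gives a mixture of binomials.
# State probabilities and conditional failure rates are hypothetical.
from math import comb

def loss_distribution(n, states):
    """states: list of (state probability, conditional failure prob)."""
    dist = [0.0] * (n + 1)
    for w, p in states:
        for k in range(n + 1):
            dist[k] += w * comb(n, k) * p ** k * (1 - p) ** (n - k)
    return dist

# Benign vs. stressed common-cause state for a 20-asset portfolio:
dist = loss_distribution(20, [(0.9, 0.02), (0.1, 0.30)])
exceed_5 = sum(dist[6:])    # P(more than 5 failures)
print(exceed_5)
```

The rare stressed state fattens the loss tail far beyond what an independent-failure model with the same average failure rate would give, which is the common-cause effect on the loss exceedance curve.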
Methods in Ecology and Evolution, Volume 2, pp 595-601; https://doi.org/10.1111/j.2041-210x.2011.00113.x

Abstract:
1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. 
Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
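The overestimation mechanism above comes from extra-binomial variance: with intra-group correlation ρ among individual detections, the beta-binomial count variance is inflated by the factor 1 + (n − 1)ρ relative to a binomial. A sketch with hypothetical n, p, and ρ:

```python
# Variance inflation under a beta-binomial detection model: with
# correlation rho among the n individual detections, the count variance
# grows by the factor 1 + (n - 1) * rho over the binomial variance.
# Values of n, p, rho are hypothetical.

def binom_var(n, p):
    return n * p * (1 - p)

def beta_binom_var(n, p, rho):
    return n * p * (1 - p) * (1 + (n - 1) * rho)

n, p, rho = 50, 0.4, 0.1
v_b = binom_var(n, p)
v_bb = beta_binom_var(n, p, rho)
print(v_b, v_bb)
```

Even a modest ρ inflates the variance severalfold at realistic group sizes, which is why a binomial mixture model misreads the extra spread as higher abundance.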
, K. Kitsukawa, M. Hisakado
Published: 1 March 2011
Quantitative Finance, Volume 11, pp 391-405; https://doi.org/10.1080/14697680903419685

Abstract:
This paper generalizes Moody's correlated binomial default distribution for homogeneous (exchangeable) credit portfolios, which was introduced by Witt, to the case of inhomogeneous portfolios. We consider two cases of inhomogeneous portfolios. In the first case, we treat a portfolio whose assets have uniform default correlation and non-uniform default probabilities. We obtain the default probability distribution and study the effect of inhomogeneity. The second case corresponds to a portfolio with inhomogeneous default correlation. Assets are categorized into several different sectors and the inter-sector and intra-sector correlations are not the same. We construct the joint default probabilities and obtain the default probability distribution. We show that as the number of assets in each sector decreases, inter-sector correlation becomes more important than intra-sector correlation. We study the maximum values of the inter-sector default correlation. Our generalization method can be applied to any correlated binomial default distribution model that has explicit relations to the conditional default probabilities or conditional default correlations, e.g. Credit Risk+, implied default distributions. We also compare some popular CDO pricing models from the viewpoint of the range of the implied tranche correlation.
Journal of the Physical Society of Japan, Volume 79; https://doi.org/10.1143/jpsj.79.034001

Abstract:
We introduce a voting model and discuss the scale invariance in the mixing of candidates. Candidates are classified into two categories $\mu\in \{0,1\}$ and are called `binary' candidates. There are in total $N=N_{0}+N_{1}$ candidates, and voters vote for them one by one. The probability that a candidate gets a vote is proportional to the number of votes it has already received. The initial number of votes (`seed') of a candidate $\mu$ is set to $s_{\mu}$. After an infinite count of voting, the probability function of the share of votes of candidate $\mu$ obeys a gamma distribution with shape exponent $s_{\mu}$ in the thermodynamic limit $Z_{0}=N_{1}s_{1}+N_{0}s_{0}\to \infty$. Between the cumulative functions $\{x_{\mu}\}$ of binary candidates, the power-law relation $1-x_{1} \sim (1-x_{0})^{\alpha}$ with critical exponent $\alpha=s_{1}/s_{0}$ holds in the region $1-x_{0},1-x_{1}\ll 1$. In the double scaling limit $(s_{1},s_{0})\to (0,0)$ and $Z_{0} \to \infty$ with $s_{1}/s_{0}=\alpha$ fixed, the relation $1-x_{1}=(1-x_{0})^{\alpha}$ holds exactly over the entire range $0\le x_{0},x_{1} \le 1$. We study data on horse races obtained from the Japan Racing Association for the period 1986 to 2006 and confirm the scale invariance.
Sandra Andersson, , Jenny-Ann Malmberg, Hans-Gustaf Ljunggren,
Published: 2 July 2009
Blood, Volume 114, pp 95-104; https://doi.org/10.1182/blood-2008-10-184549

Abstract:
Inhibitory killer cell immunoglobulin-like receptors (KIRs) preserve tolerance to self and shape the functional response of human natural killer (NK) cells. Here, we have evaluated the influence of selection processes in the formation of inhibitory KIR repertoires in a cohort of 44 donors homozygous for the group A KIR haplotype. Coexpression of multiple KIRs was more frequent than expected by the product rule that describes random association of independent events. In line with this observation, the probability of KIR acquisition increased with the cellular expression of KIRs. Three types of KIR repertoires were distinguished that differed in frequencies of KIR- and NKG2A-positive cells but showed no dependency on the number of self-HLA class I ligands. Furthermore, the distribution of self- and nonself-KIRs at the cell surface reflected a random combination of receptors rather than a selection process conferred by cognate HLA class I molecules. Finally, NKG2A was found to buffer overall functional responses in KIR repertoires characterized by low-KIR expression frequencies. The results provide new insights into the formation of inhibitory KIR repertoires on human NK cells and support a model in which variegated KIR repertoires are generated through sequential and random acquisition of KIRs in the absence of selection.
Physical Review E, Volume 77; https://doi.org/10.1103/physreve.77.056103

Abstract:
Fluctuating fluxes on a complex network lead to load fluctuations at the vertices, which may cause them to become overloaded and to induce a cascading failure. A characterization of the one-point load fluctuations is presented, revealing their dependence on the nature of the flux fluctuations and on the underlying network structure. Based on these findings, an alternate robustness layout of the network is proposed. Taking load correlations between the vertices into account, an analytical prediction of the probability for the network to remain fully efficient is confirmed by simulations. Compared to previously proposed mean-flux layouts, the alternate layout comes with significantly less investment costs in the high-confidence limit.