Results in Journal ACM SIGMETRICS Performance Evaluation Review: 3,396

Andrea Marin, Carey Williamson
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 53-61; https://doi.org/10.1145/3466826.3466845

Abstract:
Craps is a simple dice game that is popular in casinos around the world. While the rules for Craps, and its mathematical analysis, are reasonably straightforward, this paper instead focuses on the best ways to cheat at Craps by using loaded (biased) dice. We use both analytical modeling and simulation modeling to study this intriguing dice game. Our modeling results show that biasing a die away from the value 1 or towards the value 5 leads to the best (and least detectable) cheating strategies, and that modest bias on two loaded dice can increase the winning probability above 50%. Our Monte Carlo simulation results provide validation for our analytical model, and also facilitate the quantitative evaluation of other scenarios, such as heterogeneous or correlated dice.
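As a hedged illustration of the kind of Monte Carlo experiment described above, the sketch below estimates the pass-line win probability under standard Craps rules, with a die biased away from face 1 and towards face 5; the weight values are illustrative assumptions, not the paper's fitted bias levels.

```python
import random

def roll(weights):
    """Roll one six-sided die with the given face weights (faces 1..6)."""
    return random.choices(range(1, 7), weights=weights)[0]

def pass_line_win(w1, w2):
    """Play one pass-line round of Craps; return True on a win."""
    come_out = roll(w1) + roll(w2)
    if come_out in (7, 11):            # natural: immediate win
        return True
    if come_out in (2, 3, 12):         # craps: immediate loss
        return False
    point = come_out                   # point established
    while True:
        total = roll(w1) + roll(w2)
        if total == point:             # make the point: win
            return True
        if total == 7:                 # seven out: loss
            return False

fair = [1, 1, 1, 1, 1, 1]
# Illustrative bias away from face 1, towards face 5 (assumed magnitudes):
loaded = [0.8, 1, 1, 1, 1.2, 1]

n = 200_000
for name, dice in (("fair  ", fair), ("loaded", loaded)):
    wins = sum(pass_line_win(dice, dice) for _ in range(n))
    print(f"{name}: estimated pass-line win probability = {wins / n:.4f}")
```

With fair dice this reproduces the classical pass-line win probability of about 0.493; modest bias pushes the estimate upward, in line with the abstract's claim.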
Nikolas Wehner, Michael Seufert, Joshua Schuler, Sarah Wassermann, Pedro Casas, Tobias Hossfeld
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 37-40; https://doi.org/10.1145/3466826.3466840

Abstract:
This paper addresses the problem of Quality of Experience (QoE) monitoring for web browsing. In particular, the inference of common Web QoE metrics such as Speed Index (SI) is investigated. Based on a large dataset collected with open web-measurement platforms on different device types, a unique feature set is designed and used to estimate the RUMSI (an efficient approximation to SI) with machine-learning-based regression and classification approaches. Results indicate that it is possible to estimate the RUMSI accurately, and that recurrent neural networks in particular are highly suitable for the task, as they capture the network dynamics more precisely.
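To make the recurrent-regression setup concrete, here is a minimal PyTorch sketch that regresses a SpeedIndex-like score from a per-time-slot feature sequence; the architecture, dimensions, and synthetic data are illustrative assumptions, not the paper's feature set or model.

```python
import torch
import torch.nn as nn

class RumsiRNN(nn.Module):
    """Toy GRU regressor: sequence of traffic features -> one QoE score."""
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, time, features)
        _, h = self.gru(x)           # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)

model = RumsiRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(256, 20, 8)          # synthetic feature sequences
y = X.mean(dim=(1, 2)) * 3 + 1       # synthetic regression target

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", loss.item())
```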
Shunsuke Higuchi, Junji Takemasa, Yuki Koizumi, Atsushi Tagami, Toru Hasegawa
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 45-48; https://doi.org/10.1145/3466826.3466842

Abstract:
This paper revisits longest prefix matching in IP packet forwarding, because an emerging data structure, the learned index, has recently been presented. A learned index uses machine learning to associate key-value pairs in a key-value store. The fundamental idea of applying a learned index to a Forwarding Information Base (FIB) is to simplify the complex longest prefix matching operation into a nearest address search operation. The size of the proposed FIB is less than half that of an existing trie-based FIB, while its computation speed is nearly equal to that of the trie-based FIB. Moreover, the computation speed of the proposal is independent of the length of IP prefixes, unlike trie-based FIBs.
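A minimal sketch of the nearest-address-search idea, under simplifying assumptions (a toy FIB with nested prefixes only): a stand-in linear "model" predicts a position in the sorted array of prefix base addresses and a bounded local search corrects it, which is the core mechanic of a learned index. The entries and the model are illustrative, not the paper's construction.

```python
# Toy FIB with nested prefixes (illustrative entries).
def prefix_range(base, plen):
    size = 1 << (32 - plen)
    return base, base + size - 1

fib = sorted([
    (0x0A000000, 8,  "hop-A"),   # 10.0.0.0/8
    (0x0A0A0000, 16, "hop-B"),   # 10.10.0.0/16
    (0xC0A80000, 16, "hop-C"),   # 192.168.0.0/16
])
keys = [base for base, _, _ in fib]

# Stand-in "learned" model: a linear key -> position approximation,
# corrected by a bounded local search.
lo_k, hi_k = keys[0], keys[-1]
def predict_pos(addr):
    frac = (addr - lo_k) / max(hi_k - lo_k, 1)
    return min(max(int(frac * (len(keys) - 1)), 0), len(keys) - 1)

def lookup(addr):
    """LPM via nearest address search (valid for nested prefixes only)."""
    pos = predict_pos(addr)
    while pos + 1 < len(keys) and keys[pos + 1] <= addr:
        pos += 1
    while pos >= 0 and keys[pos] > addr:
        pos -= 1
    # Walk left from the nearest base address; with nested prefixes the
    # first containing entry is the longest match.
    for i in range(pos, -1, -1):
        base, plen, hop = fib[i]
        start, end = prefix_range(base, plen)
        if start <= addr <= end:
            return hop
    return None

print(lookup(0x0A0A0101))  # 10.10.1.1 -> hop-B
print(lookup(0x0A010101))  # 10.1.1.1  -> hop-A
print(lookup(0x08080808))  # 8.8.8.8   -> None
```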
Jefferson E. Simoes, Eduardo Ferreira, Daniel S. Menasché, Carlos A. V. Campos
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 8-11; https://doi.org/10.1145/3466826.3466831

Abstract:
Cryptocurrencies typically aim at preserving the privacy of their users. Different cryptocurrencies preserve privacy at various levels, some of them requiring users to rely on strategies to raise the privacy level to their needs. Among those strategies, we focus on two of them: merge avoidance and mixing services. Such strategies may be adopted on top of virtually any blockchain-based cryptocurrency. In this paper, we show that whereas optimal merge avoidance leads to an NP-hard optimization problem, incentive-compatible mixing services are subject to a certain class of impossibility results. Together, our results contribute to the body of work on fundamental limits of privacy mechanisms in blockchain-based cryptocurrencies.
Giulio Masetti, Silvano Chiaradonna, Felicita Di Giandomenico, William H. Sanders, Brett Feddersen
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 62-67; https://doi.org/10.1145/3466826.3466846

Abstract:
Möbius is well known as a modeling and evaluation environment for performance and dependability indicators. It has been conceived in a modular and flexible fashion, so as to be easily expanded to incorporate new features, formalisms, and tools. The need to model systems characterized by large populations of heterogeneous interacting components, which are nowadays increasingly common in a variety of application contexts, provided the opportunity to focus on a new operator that efficiently manages non-anonymous replication, as required by such systems. This tool paper presents the implementation of a new replication operator, called Advanced Rep, in Möbius. The efficiency of Advanced Rep is evaluated against a recently developed alternative solution.
Dena Markudova, Martino Trevisan, Paolo Garza, Michela Meo, Maurizio M. Munafò, Giovanna Carofiglio
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 41-44; https://doi.org/10.1145/3466826.3466841

Abstract:
With the spread of broadband Internet, Real-Time Communication (RTC) platforms have become increasingly popular and have transformed the way people communicate. Thus, it is fundamental that the network adopts traffic management policies that ensure appropriate Quality of Experience to users of RTC applications. A key step for this is the identification of the applications behind RTC traffic, which in turn allows the network to allocate adequate resources and make decisions based on the specific application's requirements. In this paper, we introduce a machine learning-based system for identifying the traffic of RTC applications. It builds on the domains contacted before starting a call and leverages techniques from Natural Language Processing (NLP) to build meaningful features. Our system works in real-time and is robust to the peculiarities of the RTP implementations of different applications, since it uses only control traffic. Experimental results show that our approach classifies 5 well-known meeting applications with an F1 score of 0.89.
Rowel Gundlach, Martijn Gijsbers, David Koops, Jacques Resing
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 16-19; https://doi.org/10.1145/3466826.3466833

Abstract:
We study the distribution of confirmation times of Bitcoin transactions, conditional on the size of the current memory pool. We argue that the time until a Bitcoin transaction is confirmed resembles the time to ruin in a corresponding Cramér-Lundberg process. This well-studied model gives mathematical insights into the mempool behaviour over time. Specifically, for situations where one chooses a fee such that the total size of incoming transactions with a higher fee is close to the total size of transactions leaving the mempool (heavy traffic), a diffusion approximation leads to an inverse Gaussian distribution for the confirmation times. The results of this paper are particularly interesting for users who want to make a Bitcoin transaction during heavy-traffic situations, as evaluation of the well-known inverse Gaussian distribution is computationally straightforward.
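Since the abstract notes that evaluating the inverse Gaussian distribution is computationally straightforward, here is a short sketch using scipy.stats.invgauss. The mean and shape values below are illustrative placeholders; under the heavy-traffic approximation they would be derived from the mempool size and the net transaction in/outflow rates.

```python
from scipy.stats import invgauss

# Illustrative IG(mean=mu, shape=lam) parameters, e.g. in units of blocks.
mu, lam = 2.5, 8.0

# scipy parameterizes this distribution as invgauss(mu/lam, scale=lam).
dist = invgauss(mu / lam, scale=lam)

for t in (1, 3, 6, 12):
    print(f"P(confirmed within {t:2d} units) = {dist.cdf(t):.3f}")
```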
Vinicius C. Oliveira, Julia Almeida Valadares, Jose Eduardo A. Sousa, Alex Borges Vieira, Heder Soares Bernardino, Saulo Moraes Villela, Glauber Dias Goncalves
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 12-15; https://doi.org/10.1145/3466826.3466832

Abstract:
Ethereum has emerged as one of the most important cryptocurrencies in terms of the number of transactions. Given the recent growth of Ethereum, the cryptocurrency community and researchers are interested in understanding the behavior of Ethereum transactions. In this work, we investigate a key aspect of Ethereum: the prediction of a transaction's confirmation or failure based on its features. This is a challenging issue due to the small, but still relevant, fraction of failures in millions of recorded transactions and the complexity of the distributed mechanism that executes transactions in Ethereum. To conduct this investigation, we train machine learning models for this prediction, taking into consideration carefully balanced sets of confirmed and failed transactions. The results show high-performance models for the classification of transactions, with the best values of F1-score and area under the ROC curve approximately equal to 0.67 and 0.87, respectively. We also identified the gas used as the most relevant feature for the prediction.
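A hedged sketch of this kind of training-and-evaluation pipeline, on a balanced but entirely synthetic stand-in dataset; the feature names are illustrative (the abstract identifies gas used as the most relevant feature), and the numbers printed here say nothing about the paper's results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)
n = 10_000
X = np.column_stack([
    rng.lognormal(10, 1, n),   # gas used (synthetic stand-in)
    rng.lognormal(1, 1, n),    # gas price (synthetic stand-in)
    rng.integers(0, 500, n),   # pending-queue position (synthetic stand-in)
])
y = rng.integers(0, 2, n)      # balanced labels: 1 = confirmed, 0 = failed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("F1     :", f1_score(y_te, clf.predict(X_te)))
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("feature importances:", clf.feature_importances_)
```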
Luca Vassio, Zhi-Li Zhang, Danilo Giordano, Abhishek Chandra
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 28-28; https://doi.org/10.1145/3466826.3466837

Abstract:
We are pleased to welcome you to the 2nd Workshop on AI in Networks and Distributed Systems. This year we have expanded the scope of the workshop to include applications of Machine Learning and AI not merely in Networking, but also in Distributed Systems. The scale and complexity of today's networks and distributed systems make their design, analysis, optimization, and management a daunting task. Hence smart and scalable approaches leveraging machine learning solutions are increasingly called for to take full advantage of these systems and infrastructures.
William Knottenbelt, Katinka Wolter
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 2-2; https://doi.org/10.1145/3466826.3466828

Abstract:
This volume presents the proceedings of the 2nd Symposium on Cryptocurrency Analysis (SOCCA 2020), originally scheduled to be held in Milan, Italy, on November 6, 2020. The COVID-19 pandemic has necessitated, in common with many other conferences, that SOCCA be held entirely virtually.
Gastón García González, Pedro Casas, Alicia Fernández, Gabriel Gómez
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 49-52; https://doi.org/10.1145/3466826.3466843

Abstract:
Despite the many attempts and approaches for anomaly detection explored over the years, the automatic detection of rare events in data communication networks remains a complex problem. In this paper we introduce Net-GAN, a novel approach to network anomaly detection in time-series, using recurrent neural networks (RNNs) and generative adversarial networks (GAN). Different from the state of the art, which traditionally focuses on univariate measurements, Net-GAN detects anomalies in multivariate time-series, exploiting temporal dependencies through RNNs. Net-GAN discovers the underlying distribution of the baseline, multivariate data, without making any assumptions on its nature, offering a powerful approach to detect anomalies in complex, difficult-to-model network monitoring data. We further exploit the concepts behind generative models to conceive Net-VAE, a complementary approach to Net-GAN for network anomaly detection, based on variational auto-encoders (VAE). We evaluate Net-GAN and Net-VAE in different monitoring scenarios, including anomaly detection in IoT sensor data, and intrusion detection in network measurements. Generative models represent a promising approach for network anomaly detection, especially when considering the complexity and ever-growing number of time-series to monitor in operational networks.
Özge Celenk, Thomas Bauschert, Marcus Eckert
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 33-36; https://doi.org/10.1145/3466826.3466839

Abstract:
Quality of Experience (QoE) monitoring of video streaming traffic is a crucial task for service providers. Nowadays it is challenging due to the increased usage of end-to-end encryption. In order to overcome this issue, machine learning (ML) approaches for QoE monitoring have gained popularity in recent years. This work proposes a framework which includes a machine learning pipeline that can be used for detecting key QoE-related events, such as buffering events and video resolution changes, for ongoing YouTube video streaming sessions in real-time. For this purpose, an ML model has been trained using YouTube streaming traffic collected from Android devices. The trained ML model is then deployed in the framework's pipeline to make online predictions. The ML model uses statistical traffic information observed from the network layer for learning and predicting the video QoE-related events. It reaches 88% overall testing accuracy for predicting the video events. Although our work is yet at an early stage, the application of the ML model for online detection and prediction of video events yields quite promising results.
Ivo Stoepker, Rowel Gundlach, Stella Kapodistria
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 20-23; https://doi.org/10.1145/3466826.3466834

Abstract:
Bitcoin payments require a random amount of time to get confirmed (i.e. to be grouped by the miners into a block and to be added to the Bitcoin blockchain). In [8, 11], the authors propose modelling the Bitcoin confirmation time by the so-called time to ruin of the Cramér-Lundberg (CL) model. This provides off-the-shelf results directly aimed at predicting the confirmation time. However, analyses suggest that the data may not fully conform with the CL model assumptions. In this manuscript, we show by means of a robustness analysis that the time to ruin of a CL model is nearly insensitive to small changes in the model assumptions, and illustrate that the proposed heuristic model can be used to accurately predict the confirmation times even when the data deviate (to a small degree) from the model assumptions.
Jose Eduardo A. Sousa, Vinicius C. Oliveira, Julia Almeida Valadares, Alex Borges Vieira, Heder S. Bernardino, Saulo Moraes Villela, Glauber Dias Goncalves
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 24-27; https://doi.org/10.1145/3466826.3466835

Abstract:
Ethereum is currently one of the most popular cryptocurrencies, and it has been facing security threats and attacks. As a consequence, Ethereum users may experience long periods to validate transactions. Despite ongoing maintenance of the Ethereum mechanisms, there are still indications that it remains susceptible to several kinds of attacks. In this work, we analyze the Ethereum network behavior during an under-priced DoS attack, where malicious users try to perform denial-of-service attacks that exploit flaws in the fee mechanism of this cryptocurrency. We propose the application of machine learning techniques and ensemble methods to detect this attack, using the available transaction attributes. The proposals achieve notable performance; the Decision Tree models attain AUC-ROC, F-score, and recall larger than 0.94, 0.82, and 0.98, respectively.
Felipe Ribas Coutinho, Victor Pires, Claudio Miceli, Daniel S. Menasche
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 4-7; https://doi.org/10.1145/3466826.3466830

Abstract:
Blockchains and cryptocurrencies disrupted the conversion of energy into a medium of exchange. Numerous applications for blockchains and cryptocurrencies are now envisioned, for purposes ranging from inventory control to banking. Naturally, in order to mine in an economically viable way, regions where energy is plentiful and cheap, e.g., close to hydroelectric plants, are sought. The possibility of converting energy into cash, however, also opens up opportunities for a new kind of cyber attack aimed at illegally mining cryptocurrencies by stealing energy. In this work, we indicate, using data from January and February of 2018 from our university, that such a threat is real, and present a projection of the gains derived from these attacks.
Ingo Weber
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 3-3; https://doi.org/10.1145/3466826.3466829

Abstract:
Blockchain is a novel distributed ledger technology. Through its features and smart contract capabilities, a wide range of application areas opened up for blockchain-based innovation [5]. In order to analyse how concrete blockchain systems as well as blockchain applications are used, data must be extracted from these systems. Due to various complexities inherent in blockchain, the question of how to interpret such data is non-trivial. Such interpretation should often be shared among parties, e.g., if they collaborate via a blockchain. To this end, we devised an approach to codify the interpretation of blockchain data, to extract data from blockchains accordingly, and to output it in suitable formats [1, 2]. This work will be the main topic of the keynote. In addition, application developers and users of blockchain applications may want to estimate the cost of using or operating a blockchain application. In the keynote, I will also discuss our cost estimation method [3, 4]. This method was designed for the Ethereum blockchain platform, where cost also relates to transaction complexity, and therefore also to system throughput.
Rajib Hossen, Mohammad A. Islam
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 29-32; https://doi.org/10.1145/3466826.3466838

Abstract:
Offloading resource-hungry tasks from mobile devices to an edge server has been explored recently to improve task completion time as well as save battery energy. The low-latency computing resources of edge servers are a perfect companion to realize such task offloading. However, edge servers may suffer from unreliable performance due to rapid workload variation and reliance on intermittent renewable energy. Further, batteries in mobile devices make online optimum offloading decisions challenging, since they intertwine offloading decisions across different tasks. In this paper, we propose a deep Q-learning based task offloading solution, DeepTO, for online task offloading. DeepTO learns edge server performance in a model-free manner and takes the future battery needs of the mobile device into account. Using a simulation-based evaluation, we show that DeepTO can perform close to the optimum solution that has complete future knowledge.
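DeepTO itself is a deep Q-learning method; as a much-simplified illustration of the underlying value iteration, here is a tabular Q-learning sketch of the offload-vs-local decision under invented toy dynamics. All rewards, transitions, and parameters here are assumptions for illustration, not the paper's model.

```python
import random

ACTIONS = (0, 1)                  # 0 = run locally, 1 = offload to edge
BATTERY_LEVELS = range(0, 11)     # state: coarse battery level
Q = {(b, a): 0.0 for b in BATTERY_LEVELS for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(battery, action):
    """Toy environment: offloading is usually fast but the edge is flaky."""
    if action == 1:
        latency = random.choice((1.0, 1.0, 5.0))   # occasional slow edge
        battery_next = battery                      # radio cost ignored here
    else:
        latency = 3.0
        battery_next = max(battery - 1, 0)          # local compute drains battery
    reward = -latency - (5.0 if battery_next == 0 else 0.0)
    return battery_next, reward

battery = 10
for _ in range(50_000):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(battery, x)])
    nxt, r = step(battery, a)
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(battery, a)] += alpha * (r + gamma * best_next - Q[(battery, a)])
    battery = nxt if nxt > 0 else 10                # "recharge" and continue

print({b: max(ACTIONS, key=lambda x: Q[(b, x)]) for b in BATTERY_LEVELS})
```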
Niloofar Bayat, Richard Ma, Vishal Misra, Dan Rubenstein
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 130-135; https://doi.org/10.1145/3453953.3453983

Abstract:
An objective of network neutrality is to design regulations for the Internet and ensure that it remains a public, open platform where innovations can thrive. While there is broad agreement that preserving the content quality of service falls under the purview of net neutrality, the role of differential pricing, especially the practice of zero-rating, remains controversial. In this paper, we model zero-rating between Internet service providers (ISPs) and content providers (CPs) and show that if zero-rating is permitted, competitiveness in the market is reduced: low-income CPs often lose utility and high-income CPs often gain utility.
Yingdong Lu, Mark S. Squillante, Tonghoon Suk
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 122-127; https://doi.org/10.1145/3453953.3453981

Abstract:
We consider a fluid model of n x n input-queued switches with associated fluid-flow costs and derive an optimal scheduling control policy for an infinite-horizon discounted control problem with a general linear objective function of the fluid cost. Our optimal policy coincides with the cμ-rule in certain parameter domains but, more generally, takes the form of the solution to a flow maximization problem. Computational experiments demonstrate the benefits of our optimal scheduling policy over variants of max-weight scheduling and the cμ-rule.
Daniela Hurtado-Lange, Siva Theja Maguluri
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 33-34; https://doi.org/10.1145/3453953.3453959

Abstract:
Stochastic Processing Networks, which model wired and wireless networks as well as other queueing systems, have been studied in the heavy-traffic limit under the so-called Complete Resource Pooling (CRP) condition. When the CRP condition is not satisfied, heavy-traffic results are known only in the special cases of the input-queued switch and bandwidth-sharing networks. In this paper, we consider a very general queueing system called the 'generalized switch' that includes wireless networks under fading, data center networks, the input-queued switch, etc. The primary contribution of this paper is to present the exact value of the steady-state mean of certain linear combinations of queue lengths in the heavy-traffic limit under the MaxWeight scheduling algorithm. We use the Drift method, and we also present a negative result that it is not possible to obtain the remaining linear combinations (and consequently all the individual mean queue lengths) using this method. We do this by presenting an alternate view of the Drift method in terms of an (under-determined) system of linear equations. Finally, we use this system of equations to obtain upper and lower bounds on all linear combinations of queue lengths.
Yuezhou Liu, Yuanyuan Li, Qian Ma, Stratis Ioannidis, Edmund Yeh
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 89-90; https://doi.org/10.1145/3453953.3453973

Abstract:
We study fair content allocation strategies in caching networks through a utility-driven framework, where each request achieves a utility of its caching gain rate. The resulting problem is NP-hard. Submodularity allows us to devise a deterministic allocation strategy with an optimality guarantee factor arbitrarily close to 1 - 1/e. When the fairness parameter satisfies 0 < α ≤ 1, we further propose a randomized strategy that attains an improved optimality guarantee, (1 - 1/e)^(1-α), in expectation. Through extensive simulations over synthetic and real-world network topologies, we evaluate the performance of our proposed strategies and discuss the effect of fairness.
Niels Christensen, Mark Glavind, Stefan Schmid, Jiří Srba
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 14-26; https://doi.org/10.1145/3453953.3453957

Abstract:
Emerging software-defined and programmable networking technologies enable more adaptive communication infrastructures. However, leveraging these flexibilities and operating networks more adaptively is challenging, as the underlying infrastructure remains a complex distributed system that is subject to delays, and as consistency properties need to be preserved transiently, even during network reconfiguration. Motivated by these challenges, we propose Latte, an automated approach to minimize the latency of network update schedules by avoiding unnecessary waiting times and exploiting concurrency, while at the same time provably ensuring a wide range of fundamental consistency properties like waypoint enforcement. To enable automated reasoning about the performance and consistency of software-defined networks during an update, we introduce the model of timed-arc colored Petri nets: an extension of Petri nets which allows us to account for time aspects in asynchronous networks, including characteristic timing behaviors, modeled as timed and colored tokens. This novel formalism may be of independent interest. Latte relies on an efficient translation of specific network update problems into timed-arc colored Petri nets. We show that the constructed nets can be analyzed efficiently via their unfolding into existing timed-arc Petri nets. We integrate Latte into the state-of-the-art model checking tool TAPAAL, and find that in many cases, we are able to reduce the latency of network updates by 90% or more.
Rahul Vaze, Jayakrishnan Nair
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 61-62; https://doi.org/10.1145/3453953.3453967

Abstract:
Speed scaling for a network of servers represented by a directed acyclic graph is considered. Jobs arrive at a source server, with a specified destination server, and are defined to be complete once they are processed by all servers on any feasible path between the source and the corresponding destination. Each server has variable speed, with power consumption function P, a convex increasing function of the speed. The objective is to minimize the sum of the flow time (summed across jobs) and the energy consumed by all the servers, which depends on how jobs are routed, as well as how server speeds are set. Algorithms are derived for both the worst case and stochastic job arrivals setting, whose competitive ratio depends only on the power functions and path diversity in the network, but is independent of the workload.
Mohammad A. Hoque, Ashwin Rao, Sasu Tarkoma
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 6-11; https://doi.org/10.1145/3453953.3453955

Abstract:
Modern mobile systems are optimized for energy-efficient computation and communications, and these optimizations affect the way they use the network, and thus the performance of the applications. Therefore, understanding network and application performance is essential for debugging, improving user experience, and performance comparison. In recent years, several tools have emerged that analyze the network performance of mobile applications in situ with the help of the VPN service. However, there is a limited understanding of how these measurement tools and system optimizations affect the network and application performance. This paper first demonstrates that mobile systems employ energy-aware system hardware tuning, which affects network latency and throughput. We next show that VPN-based tools, such as Lumen, PrivacyGuard, and Video Optimizer, yield ambiguous network performance measurements and degrade the application performance. Our findings suggest that sound Internet traffic measurement on Android devices requires a good understanding of the device, networks, measurement tools, and applications.
Kristen Gardner, Jazeem Abdul Jaleel, Alexander Wickeham, Sherwin Doroudi
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 37-38; https://doi.org/10.1145/3453953.3453961

Abstract:
In large-scale computer systems, deciding how to dispatch arriving jobs to servers is a primary factor affecting system performance. Consequently, there is a wealth of literature on designing, analyzing, and evaluating the performance of load balancing policies. For analytical tractability, most existing work on dispatching in large-scale systems makes a key assumption: that the servers are homogeneous, meaning that they all have the same speeds, capabilities, and available resources. But this assumption is not accurate in practice. Modern computer systems are instead heterogeneous: server farms may consist of multiple generations of hardware, servers with varied resources, or even virtual machines running in a cloud environment. Given the ubiquity of heterogeneity in today's systems, it is critically important to develop load balancing policies that perform well in heterogeneous environments. In this paper, we focus on systems in which server speeds are heterogeneous.
Guocong Quan, Atilla Eryilmaz, Jian Tan, Ness Shroff
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 77-78; https://doi.org/10.1145/3453953.3453970

Abstract:
In practice, prefetching data strategically has been used to improve caching performance. The idea is that data items can either be cached upon request (traditional approach) or prefetched into the cache before the requests actually occur. The caching and prefetching operations compete for the limited cache space, whose size is typically much smaller than the number of data items. A key challenge is to design an optimal prefetching and caching policy, assuming that the future requests can be predicted to a certain extent. This is a non-trivial challenge even under the idealized assumption that future requests are precisely known.
Jianyu Niu, Ziyu Wang, Fangyu Gai, Chen Feng
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 59-60; https://doi.org/10.1145/3453953.3453966

Abstract:
Bitcoin-NG is among the first scalable blockchain protocols; it decouples blockchain operation into two planes: leader election and transaction serialization. Its decoupling idea has inspired a new generation of blockchain protocols. However, the existing incentive analysis of Bitcoin-NG has several limitations. First, the impact of network capacity is ignored. Second, an integrated incentive analysis that jointly considers both key blocks and microblocks is still missing. In this paper, we aim to address these two limitations. First, we propose a new incentive analysis that takes the network capacity into account, showing that Bitcoin-NG can still maintain incentive compatibility against the microblock mining attack even under limited network capacity. Second, we leverage a Markov decision process to jointly analyze the incentives of both key blocks and microblocks, showing that the selfish mining revenue of Bitcoin-NG is a little higher than that in Bitcoin only when the selfish miner controls more than 35% of the mining power. We hope that our in-depth incentive analysis for Bitcoin-NG can shed some light on the mechanism design and incentive analysis of next-generation blockchain protocols.
Sounak Kar, Robin Rehrmann, Arpan Mukhopadhyay, Bastian Alt, Florin Ciucu, Heinz Koeppl, Carsten Binnig, Amr Rizk
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 128-129; https://doi.org/10.1145/3453953.3453982

Abstract:
We analyze a data-processing system with n clients producing jobs which are processed in batches by m parallel servers; the system throughput critically depends on the batch size and a corresponding sub-additive speedup function that arises due to overhead amortization. In practice, throughput optimization relies on numerical searches for the optimal batch size, which is computationally cumbersome. In this paper, we model this system in terms of a closed queueing network assuming certain forms of service speedup; a standard Markovian analysis yields the optimal throughput in ω(n⁴) time. Our main contribution is a mean-field model that has a unique, globally attractive stationary point, derivable in closed form. This point characterizes the asymptotic throughput as a function of the batch size that can be calculated in O(1) time. Numerical settings from a large commercial system reveal that this asymptotic optimum is accurate in practical finite regimes.
Benjamin Berg, Rein Vesilo, Mor Harchol-Balter
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 35-36; https://doi.org/10.1145/3453953.3453960

Abstract:
Modern data centers serve workloads which can exploit parallelism. When a job parallelizes across multiple servers it completes more quickly. However, it is unclear how to share a limited number of servers between many parallelizable jobs. In this paper we consider a typical scenario where a data center composed of N servers will be tasked with completing a set of M parallelizable jobs. Typically, M is much smaller than N. In our scenario, each job consists of some amount of inherent work which we refer to as a job's size. We assume that job sizes are known up front to the system, and each job can utilize any number of servers at any moment in time. These assumptions are reasonable for many parallelizable workloads such as training neural networks using TensorFlow [2]. Our goal in this paper is to allocate servers to jobs so as to minimize the mean slowdown across all jobs, where the slowdown of a job is the job's completion time divided by its running time if given exclusive access to all N servers. Slowdown measures how a job was interfered with by other jobs in the system, and is often the metric of interest in the theoretical parallel scheduling literature (where it is also called stretch), as well as the HPC community (where it is called expansion factor).
Maryam Elahi, Andrea Marin, Sabina Rossi, Carey Williamson
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 97-98; https://doi.org/10.1145/3453953.3453975

Abstract:
In this paper, we study a variant of PS+PS multilevel scheduling, which we call the PS+IS queue. Specifically, we use Processor Sharing (PS) at both queues, but with linear frequency scaling on the second queue, so that the latter behaves like an Infinite Server (IS) queue. The goals of the system are low response times for small jobs in the first queue, and reduced power consumption for large jobs in the second queue. The novelty of our model includes the frequency scaling at the second queue, and the batch arrival process at the second queue induced by the busy period structure of the first queue, which has strictly higher priority. We derive a numerical solution for the PS+IS queueing system in steady state, and then study its properties under workloads obtained from fitting of TCP flow traces. The simulation results confirm the effectiveness of the proposed approach.
Guin Gilman, Samuel S. Ogden, Tian Guo, Robert J. Walls
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 81-88; https://doi.org/10.1145/3453953.3453972

Abstract:
In this work, we empirically derive the scheduler's behavior under concurrent workloads for NVIDIA's Pascal, Volta, and Turing microarchitectures. In contrast to past studies that suggest the scheduler uses a round-robin policy to assign thread blocks to streaming multiprocessors (SMs), we instead find that the scheduler chooses the next SM based on the SM's local resource availability. We show how this scheduling policy can lead to significant, and seemingly counter-intuitive, performance degradation; for example, a decrease of one thread per block resulted in a 3.58X increase in execution time for one kernel in our experiments. We hope that our work will be useful for improving the accuracy of GPU simulators and aid in the development of novel scheduling algorithms.
Xingyu Zhou, Ness Shroff, Adam Wierman
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 57-58; https://doi.org/10.1145/3453953.3453965

Abstract:
We consider the load balancing problem in large-scale heterogeneous systems with multiple dispatchers. We introduce a general framework called Local-Estimation-Driven (LED). Under this framework, each dispatcher keeps local (possibly outdated) estimates of the queue lengths for all the servers, and the dispatching decision is made purely based on these local estimates. The local estimates are updated via infrequent communications between dispatchers and servers. We derive sufficient conditions for LED policies to achieve throughput optimality and delay optimality in heavy traffic, respectively. These conditions directly imply delay optimality for many previous local-memory based policies in heavy traffic. Moreover, the results enable us to design new delay optimal policies for heterogeneous systems with multiple dispatchers. Finally, the heavy-traffic delay optimality of the LED framework also sheds light on a recent open question on how to design optimal load balancing schemes using delayed information.
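A minimal simulation sketch of the LED idea: dispatchers route purely on locally cached (possibly stale) queue estimates, refreshed only every SYNC arrivals. All numeric parameters and the toy service model are illustrative assumptions, not the paper's setup.

```python
import random

N_SERVERS, N_DISPATCHERS, SYNC, MU = 10, 4, 50, 0.12
queues = [0] * N_SERVERS
estimates = [[0] * N_SERVERS for _ in range(N_DISPATCHERS)]

for t in range(100_000):
    # One arrival per slot at a random dispatcher, routed on stale estimates.
    d = random.randrange(N_DISPATCHERS)
    target = min(range(N_SERVERS), key=lambda i: estimates[d][i])
    queues[target] += 1
    estimates[d][target] += 1          # local bookkeeping between syncs
    if t % SYNC == 0:                  # infrequent dispatcher<->server sync
        estimates[d] = queues[:]
    # Each server completes a job w.p. MU per slot; total capacity
    # 10 * 0.12 = 1.2 > 1 arrival/slot, so the system is stable.
    for i in range(N_SERVERS):
        if queues[i] > 0 and random.random() < MU:
            queues[i] -= 1

print("final queue lengths:", queues)
```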
Ziv Scully, Isaac Grosof, Mor Harchol-Balter
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 136-137; https://doi.org/10.1145/3453953.3453984

Abstract:
We consider scheduling to minimize mean response time of the M/G/k queue with unknown job sizes. In the single-server k = 1 case, the optimal policy is the Gittins policy, but it is not known whether Gittins or any other policy is optimal in the multiserver case. Exactly analyzing the M/G/k under any scheduling policy is intractable, and Gittins is a particularly complicated policy that is hard to analyze even in the single-server case.
Nitish K. Panigrahy, Philippe Nain, Giovanni Neglia, Don Towsley
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 138-143; https://doi.org/10.1145/3453953.3453985

Abstract:
Caching systems have long been crucial for improving the performance of a wide variety of network and web based online applications. In such systems, end-to-end application performance heavily depends on the fraction of objects transferred from the cache, also known as the cache hit probability. Many cache eviction policies have been proposed and implemented to improve the hit probability. In this work, we propose a new method to compute an upper bound on hit probability for all non-anticipative caching policies, i.e. for policies that have no knowledge of future requests. At each object request arrival, we use hazard rate (HR) function based ordering to classify the request as a hit or not. Under some statistical assumptions, we prove that our proposed HR based ordering model computes the maximum achievable hit probability and serves as an upper bound for all non-anticipative caching policies. We also provide simulation results to validate its correctness and to compare it to Belady's upper bound. We find it to almost always be tighter than Belady's bound.
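A hedged sketch of the hazard-rate ordering rule as described above: at each request, objects are ranked by the hazard rate of their inter-request-time distribution evaluated at the time since their last request, and the request counts as a hit iff the requested object ranks within the cache size C. The Weibull renewal workload and all parameters are illustrative assumptions.

```python
import heapq, random

C, N_OBJ, T_END = 10, 100, 500.0
shape = 0.7                                    # Weibull shape (decreasing hazard)
scales = [random.uniform(0.5, 2.0) for _ in range(N_OBJ)]

# Build a merged request trace of per-object Weibull renewal processes.
events = []
for i in range(N_OBJ):
    t = random.weibullvariate(scales[i], shape)
    while t < T_END:
        events.append((t, i))
        t += random.weibullvariate(scales[i], shape)
events.sort()

def hazard(i, age):
    """Weibull hazard rate h(t) = k*lam*(lam*t)^(k-1), lam = 1/scale."""
    lam = 1.0 / scales[i]
    return shape * lam * (lam * max(age, 1e-9)) ** (shape - 1)

last = [0.0] * N_OBJ
hits = 0
for t, i in events:
    top = heapq.nlargest(C, range(N_OBJ), key=lambda j: hazard(j, t - last[j]))
    hits += i in top
    last[i] = t

print("HR-ordering upper bound on hit probability ≈", hits / len(events))
```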
MyungSuk Kim, MyoungJun Chun, Duwon Hong, Yoona Kim, GeonHee Cho, Dusol Lee, Jihong Kim
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 120-121; https://doi.org/10.1145/3453953.3453980

Abstract:
While NAND flash memory has revolutionized how we manage data in modern digital systems, significant improvements are needed in flash-based storage systems to meet the requirements of emerging data-intensive applications. In this paper, we address the problem of NAND aging markers that represent the wearing degree of NAND cells. Since all flash operations are affected by the wearing status of NAND cells, an accurate NAND aging marker is critical for developing flash optimization techniques. From our evaluation study, we first show that the existing P/E cycle-based aging marker (PeWear) is inadequate for estimating the actual aging status of NAND blocks, thus losing opportunities for further optimizations. To overcome the limitations of PeWear, we propose a new NAND aging marker, RealWear, based on extensive characterization studies using real 3D TLC flash chips. By considering multiple variables that can affect NAND cell wear, RealWear can accurately indicate the actual wear status of NAND blocks at run time. Using three case studies, we demonstrate that RealWear is effective in enhancing the lifetime and performance of a flash storage system. Our experimental results show that RealWear can extend the lifetime of individual NAND blocks by 63% and can reduce the GC overhead by 21%. Furthermore, RealWear significantly mitigates read latency fluctuations, guaranteeing that the read latency is bounded with at most 2 read retry operations.
Jingfan Meng, Long Gong, Jun (Jim) Xu
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 71-76; https://doi.org/10.1145/3453953.3453969

Abstract:
In this work, we first propose a parallel batch switching algorithm called Small-Batch Queue-Proportional Sampling (SB-QPS). Compared to other batch switching algorithms, SB-QPS significantly reduces the batch size without sacrificing the throughput performance and hence has much lower delay when traffic load is light to moderate. It also achieves the lowest possible time complexity of O(1) per matching computation per port, via parallelization. We then propose another algorithm called Sliding-Window QPS (SW-QPS). SW-QPS retains and enhances all benefits of SB-QPS, and reduces the batching delay to zero via a novel switching framework called sliding-window switching. In addition, SW-QPS computes matchings of much higher qualities, as measured by the resulting throughput and delay performances, than QPS-1, the state-of-the-art regular switching algorithm that builds upon the same underlying bipartite matching algorithm.
Francesco Bronzino, Paul Schmitt, Sara Ayoubi, Guilherme Martins, Renata Teixeira, Nick Feamster
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 27-32; https://doi.org/10.1145/3453953.3453958

Abstract:
Inferring the quality of streaming video applications is important for Internet service providers, but the fact that most video streams are encrypted makes it difficult to do so. We develop models that infer quality metrics (i.e., startup delay and resolution) for encrypted streaming video services. Our paper builds on previous work, but extends it in several ways. First, the models work in deployment settings where the video sessions and segments must be identified from a mix of traffic and the time precision of the collected traffic statistics is more coarse (e.g., due to aggregation). Second, we develop a single composite model that works for a range of different services (i.e., Netflix, YouTube, Amazon, and Twitch), as opposed to just a single service. Third, unlike many previous models, our models perform predictions at finer granularity (e.g., the precise startup delay instead of just detecting short versus long delays), allowing better conclusions to be drawn about the ongoing streaming quality. Fourth, we demonstrate that the models are practical through a 16-month deployment in 66 homes and provide new insights about the relationships between Internet "speed" and the quality of the corresponding video streams for a variety of services; we find that higher speeds provide only minimal improvements to startup delay and resolution.
Gayane Vardoyan, Saikat Guha, Philippe Nain, Don Towsley
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 79-80; https://doi.org/10.1145/3453953.3453971

Abstract:
Protocols that exploit quantum communication technology offer two advantages: they can either extend or render feasible the capabilities of their classical counterparts, or they exhibit functionality entirely unachievable through classical means alone. For an example of the former, quantum key distribution protocols such as E91 [2] and BBM92 [1] can in principle yield information-theoretic security by using entanglement to generate secure key bits. These raw secret key bits can then be distilled into a one-time pad to encode messages sent between two parties. For an example of the latter, distributed quantum sensing frameworks such as [3] and [11] employ entanglement to overcome the standard quantum limit [4].
Shiva Raj Pokhrel, Carey Williamson
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 63-70; https://doi.org/10.1145/3453953.3453968

Abstract:
Network utility maximization (NUM) for Multipath TCP (MPTCP) is a challenging task, since there is no well-defined utility function for MPTCP [6]. In this paper, we identify the conditions under which we can use Kelly's NUM mechanism, and explicitly compute the equilibrium. We obtain this equilibrium by using Tullock's rent-seeking framework from game theory to define a utility function for MPTCP. This approach allows us to design MPTCP algorithms with common delay and/or loss constraints at the subflow level. Furthermore, this utility function has diagonal strict concavity, which guarantees a globally unique (normalized) equilibrium.
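For reference, here are the two generic ingredients named above in standard textbook form; this is an assumption-level sketch, and the paper's exact MPTCP utility is not reproduced here.

```latex
% Kelly's NUM problem over rates x_i on routes r_i with link capacities c_l:
\[
  \max_{x \ge 0} \; \sum_i U_i(x_i)
  \qquad \text{s.t.} \qquad \sum_{i:\, l \in r_i} x_i \le c_l \quad \forall\, l,
\]
% and the classical Tullock rent-seeking payoff, in which a player's share
% of the contested rent V is proportional to its effort x_i, at unit cost c:
\[
  u_i(x_1,\dots,x_n) \;=\; V \, \frac{x_i}{\sum_j x_j} \;-\; c\, x_i .
\]
```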
Lianjie Shi, Xin Wang, Richard T. B. Ma
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 4-5; https://doi.org/10.1145/3453953.3453954

Abstract:
With the increasing popularity and significance of content delivery services, especially video streaming, stringent Quality of Service (QoS) requirements have been placed upon Internet content providers (CPs). As a result, CPs have strong incentives to pay for resources, such as premium peering bandwidth and cache capacity, that ameliorate service quality. In this paper, we study an Internet access market with multiple access resources that CPs can purchase to enhance service quality. We build a detailed content delivery model, through which we characterize the traffic throughput and the resulting utilities of CPs and social welfare. We show that a market equilibrium exists for a multi-resource market, at which the optimization problems for individual utilities and social welfare coincide. Furthermore, we characterize the optimal purchasing strategies of CPs, as well as how varying parameters change the market equilibrium, via sensitivity analysis.
Behnam Pourghassemi, Ardalan Amiri Sani, Aparna Chandramowlishwaran
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 113-119; https://doi.org/10.1145/3453953.3453979

Abstract:
Causal profiling is a novel and powerful profiling technique that quantifies the potential impact of optimizing a code segment on the program runtime. A key application of causal profiling is to analyze what-if scenarios, which typically require a large number of experiments. Moreover, the execution of a program depends heavily on the underlying machine resources, e.g., CPU, network, storage, so profiling results on one device do not translate directly to another. This is a major bottleneck in our ability to perform scalable performance analysis and greatly limits cross-platform software development. In this paper, we address the above challenges by leveraging a unique property of causal profiling: only the relative performance of different resources affects the result of causal profiling, not their absolute performance. We first analytically model and prove causal profiling, the missing piece in the seminal paper. Then, we assert the necessary condition to achieve virtual causal profiling on a secondary device. Building upon the theory, we design VCoz, a virtual causal profiler that enables profiling applications on target devices using measurements on the host device. We implement a prototype of VCoz by tuning multiple hardware components to preserve the relative execution speeds of code segments. Our experiments on benchmarks that stress different system resources demonstrate that VCoz can generate causal profiling reports of a Nexus 6P (an ARM-based device) on a host MacBook (x86 architecture) with less than 16% variance.
Shigeo Shioda
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 91-96; https://doi.org/10.1145/3453953.3453974

Abstract:
The consensus achieved in the consensus-forming algorithm is not generally a constant but rather a random variable, even if the initial opinions are the same. In the present paper, we investigate the statistical properties of the consensus in a broadcasting-based consensus-forming algorithm. We focus on two extreme cases: consensus forming by two agents and consensus forming by an infinite number of agents. In the two-agent case, we derive several properties of the distribution function of the consensus. In the infinite-number-of-agents case, we show that if the initial opinions follow a stable distribution, then the consensus also follows a stable distribution. In addition, we derive a closed-form expression of the probability density function of the consensus when the initial opinions follow a Gaussian distribution, a Cauchy distribution, or a Lévy distribution.
Wenkai Dai, Klaus-Tycho Foerster, David Fuchssteiner, Stefan Schmid
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 39-44; https://doi.org/10.1145/3453953.3453962

Abstract:
Emerging reconfigurable data centers introduce unprecedented flexibility in how the physical layer can be programmed to adapt to current traffic demands. These reconfigurable topologies are commonly hybrid, consisting of static and reconfigurable links, enabled by e.g. an Optical Circuit Switch (OCS) connected to top-of-rack switches in Clos networks. Even though prior work has showcased the practical benefits of hybrid networks, several crucial performance aspects are not well understood. In this paper, we study the algorithmic problem of how to jointly optimize topology and routing in reconfigurable data centers with a known traffic matrix, in order to optimize a most fundamental metric, maximum link load. We chart the corresponding algorithmic landscape by investigating both un-/splittable flows and (non-)segregated routing policies. We moreover prove that the problem is not submodular for all these routing policies, even in multi-layer trees, where a topological complexity classification of the problem reveals that already trees of depth two are intractable. However, networks that can be abstracted by a single packet switch (e.g., non-blocking Fat-Tree topologies) can be optimized efficiently, and we present optimal polynomial-time algorithms accordingly. We complement our theoretical results with trace-driven simulation studies, where our algorithms can significantly improve the network load in comparison to the state of the art.
Marcin Bienkowski, David Fuchssteiner, Stefan Schmid
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 99-108; https://doi.org/10.1145/3453953.3453976

Abstract:
This paper initiates the study of online algorithms for the maximum weight b-matching problem, a generalization of maximum weight matching where each node has at most b ≥ 1 adjacent matching edges. The problem is motivated by emerging optical technologies which make it possible to enhance datacenter networks with reconfigurable matchings, providing direct connectivity between frequently communicating racks. These additional links may improve network performance by leveraging spatial and temporal structure in the workload. We show that the underlying algorithmic problem features an intriguing connection to online paging (a.k.a. caching), but introduces a novel challenge. Our main contribution is an online algorithm which is O(b)-competitive; we also prove that this is asymptotically optimal. We complement our theoretical results with extensive trace-driven simulations, based on real-world datacenter workloads as well as synthetic traffic traces.
Yu Huang, Longbo Huang
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 109-110; https://doi.org/10.1145/3453953.3453977

Abstract:
In this paper, we propose a class of approximation algorithms for max-weight matching (MWM) policy for input-queued switches, called expected 1-APRX. We establish the state space collapse (SSC) result for expected 1-APRX, and characterize its queue length behavior in the heavy-traffic limit.
Simon Scherrer, Markus Legner, Adrian Perrig, Stefan Schmid
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 12-13; https://doi.org/10.1145/3453953.3453956

Abstract:
By delegating path control to end-hosts, future Internet architectures offer flexibility for path selection. However, a concern arises that the distributed routing decisions by end-hosts, in particular load-adaptive routing, can lead to oscillations if path selection is performed without coordination or accurate load information. Prior research has addressed this problem by devising local path-selection policies that lead to global stability. However, little is known about the viability of these policies in the Internet context, where selfish end-hosts can deviate from a prescribed policy if such a deviation is beneficial from their individual perspective. In order to achieve network stability in future Internet architectures, it is essential that end-hosts have an incentive to adopt a stability-oriented path-selection policy. In this work, we perform the first incentive analysis of the stability-inducing path-selection policies proposed in the literature. Building on a game-theoretic model of end-host path selection, we show that these policies are in fact incompatible with the self-interest of end-hosts, as these strategies make it worthwhile to pursue an oscillatory path-selection strategy. Therefore, stability in networks with selfish end-hosts must be enforced by incentive-compatible mechanisms. We present two such mechanisms and formally prove their incentive compatibility.
Gayane Vardoyan, Saikat Guha, Philippe Nain, Don Towsley
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 45-50; https://doi.org/10.1145/3453953.3453963

Abstract:
We study a quantum switch serving a set of users in a star topology. The function of the switch is to create bipartite or tripartite entangled states among users at the highest possible rates at a fixed ratio. We model a set of randomized switching policies. Discovering that some are better than others, we present analytical results for the case where the switch stores one qubit per user, and find that the best policies outperform a time division multiplexing (TDM) policy for sharing the switch between bipartite and tripartite state generation. This performance improvement decreases as the number of users grows. The model is easily augmented to study the capacity region in the presence of qubit decoherence, obtaining similar results. Moreover, decoherence appears to have little effect on capacity. We also study a smaller class of policies when the switch stores two qubits per user.
Fehmina Malik, Manjesh K. Hanawal, Yezekael Hayel, Jayakrishnan Nair
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 111-112; https://doi.org/10.1145/3453953.3453978

Abstract:
Revenue sharing contracts between Content Providers (CPs) and Internet Service Providers (ISPs) can act as leverage for enhancing the infrastructure of the Internet. ISPs can be incentivised to make investments in network infrastructure that improve Quality of Service (QoS) for users if attractive contracts are negotiated between them and CPs. The idea here is that part of the revenue of CPs is shared with ISPs to invest in infrastructure improvement. We propose a model in which CPs (leaders) determine contracts, and an ISP (follower) reacts by strategically determining the infrastructure enhancement (effort) for each CP. Two cases are studied: (i) the ISP differentiates between the CPs and puts (potentially) a different level of effort into improving the QoS of each CP, and (ii) the ISP does not differentiate between CPs and makes an equal amount of effort for all the CPs. The latter scenario can be viewed as neutral behavior by the ISP. Our analysis of optimal contracts shows that the preference of CPs for the neutral or non-neutral regime depends on their monetizing power: CPs which can better monetize their demand tend to prefer the non-neutral regime, whereas the weaker CPs tend to prefer the neutral regime. Interestingly, ISP revenue, as well as social utility, is also found to be higher under the non-neutral regime. We then propose an intermediate regulatory regime that we call "soft-neutral", where the efforts put in by the ISP for all the CPs need not be exactly equal, but the disparity is not wide. We show that the soft-neutral regime alleviates the loss in social utility of the neutral regime, and the outcome further improves when CPs determine their contracts through bargaining.
Nitish K. Panigrahy, Prithwish Basu, Don Towsley, Ananthram Swami, Kin K. Leung
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 51-56; https://doi.org/10.1145/3453953.3453964

Abstract:
We consider a class of power of two choice based assignment policies for allocating users to servers, where both users and servers are located on a two-dimensional Euclidean plane. In this framework, we investigate the inherent tradeoff between the communication cost, and load balancing performance of different allocation policies. To this end, we first design and evaluate a Spatial Power of two (sPOT) policy in which each user is allocated to the least loaded server among its two geographically nearest servers sequentially. When servers are placed on a two-dimensional square grid, sPOT maps to the classical Power of two (POT) policy on the Delaunay graph associated with the Voronoi tessellation of the set of servers. We show that the associated Delaunay graph is 4-regular and provide expressions for asymptotic maximum load using results from the literature. For uniform placement of servers, we map sPOT to a classical balls and bins allocation policy with bins corresponding to the Voronoi regions associated with the second order Voronoi diagram of the set of servers. We provide expressions for the lower bound on the asymptotic expected maximum load on the servers and prove that sPOT does not achieve POT load balancing benefits. However, experimental results suggest the efficacy of sPOT with respect to expected communication cost. Finally, we propose two non-uniform server sampling based POT policies that achieve the best of both the performance metrics. Experimental results validate the effectiveness of our proposed policies.
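A minimal sketch of the sPOT allocation described above, on uniformly random user and server placements, using a k-d tree for the two-nearest-server query; sizes and seeds are illustrative assumptions.

```python
import random
from scipy.spatial import KDTree

random.seed(1)
n_servers, n_users = 100, 10_000
servers = [(random.random(), random.random()) for _ in range(n_servers)]
tree = KDTree(servers)
load = [0] * n_servers

for _ in range(n_users):
    user = (random.random(), random.random())
    _, (a, b) = tree.query(user, k=2)          # two geographically nearest servers
    choice = a if load[a] <= load[b] else b    # join the less loaded of the two
    load[choice] += 1

print("max load:", max(load), "| mean load:", n_users / n_servers)
```

Comparing the printed maximum load against the mean load gives a quick empirical feel for the communication-cost versus load-balance tradeoff the abstract discusses.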
Eitan Bachmat, Sveinung Erland
ACM SIGMETRICS Performance Evaluation Review, Volume 48, pp 12-14; https://doi.org/10.1145/3439602.3439608

Abstract:
We introduce some methods and concepts of optics into performance analysis and optimization.