Results in Journal Communications and Network: 455

(searched for: journal_id:(2256631))
Keya Sen, Stan Ingman
Communications and Network, Volume 13, pp 12-24; doi:10.4236/cn.2021.131002

Abstract:
Healthcare monitoring and the analysis of healthcare parameters can reduce costs and increase access to specialists and experts, and they hold the future of geriatric care in India. This paper proposes distinct methods for implementing rural elder health information technologies (IT), including electronic medical records, clinical decision support, mobile medical applications, and software-driven medical devices used in the diagnosis or treatment of disease among the older adult population in the villages of India. The purpose is online patient satisfaction at the micro level (the village panchayat) through accessible and affordable methods, by establishing a common standard of operations at village primary care units that enables early disease detection and routine screening among the aged population and avoids institutionalization. The rural elder health IT framework is of great interest to all stakeholders in the field: it benefits both investors and consumers, adds to the technological infrastructure, and thereby opens new avenues of research in health informatics and telemedicine while enhancing the scope of geriatric research, which in turn improves the health-related quality of life of rural older adults in the remote villages of the nation.
Zipeng Lin
Communications and Network, Volume 13, pp 1-11; doi:10.4236/cn.2021.131001

Abstract:
In this article, a physics-aware deep learning model is introduced for multiphase flow problems. The model is shown to capture complex physical phenomena such as the saturation front, which is challenging even for numerical solvers because of its instability. We demonstrate the accuracy of the solution delivered by deep learning models and the low cost of deploying them for complex physics problems, showing the versatility of the method and extending it to new areas. Further progress will require more collocation points and more careful design of the deep learning architectures, for which residual neural networks are a potential candidate.
Fontaine Rafamantanantsoa, Rabetafika Louis Haja, Randrianomenjanahary Lala Ferdinand
Communications and Network, Volume 13, pp 36-49; doi:10.4236/cn.2021.131004

Abstract:
In recent years, the exposure of applications to the Internet has continuously engendered new forms of threats that can endanger the security of the entire system, and it raises many performance issues related to code security. The safety of information systems has become essential, which is why the performance cost of security code matters to the security systems of all companies. As a contribution, we carry out measurements using two appropriate tools, the JMH (Java Microbenchmark Harness) tool and the PHP Benchmark script tool, on Java and PHP code that is either unsecured or secured against SQL (Structured Query Language) injection and XSS (Cross-Site Scripting), i.e., using prepared statements, stored procedures, validation of input against white lists, and enforcement of least privilege, when sending requests to MySQL and PostgreSQL databases. We record the response times of these requests for both the secure and the insecure Java and PHP (Hypertext Preprocessor) code. We then obtain curves and interpretations comparing the performance of secure and insecure code. The goal is to analyze and evaluate performance by comparing secure Java and PHP code against insecure Java and PHP code on MySQL and PostgreSQL databases. Section 1 presents the performance of Java and PHP code. The configuration of the experiments and the experimental results are discussed in Sections 2 and 3, respectively.
Using these tools, we developed secure and insecure code in Java 1.8 and PHP 7.4 that sends queries to a MySQL or PostgreSQL database. The measurements led to the conclusion that the insecure PHP and Java code is faster in terms of response time than the secure PHP and Java code as the number of tables involved in a query increases, because the time spent blocking SQL injection and XSS in the secure code grows accordingly.
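The prepared-statement comparison described above can be mimicked in a few lines. The sketch below is a hypothetical Python/SQLite analogue of the benchmark, not the paper's JMH or PHP code; the table contents and run counts are invented.

```python
import sqlite3, time

# Toy analogue of the benchmark: compare the response time of a parameterized
# (injection-safe) query against naive string concatenation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(10_000)])

def timed(fn, runs=200):
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Unsafe style: the value is spliced into the SQL text itself.
unsafe = timed(lambda: conn.execute(
    "SELECT name FROM users WHERE id = " + str(42)).fetchone())

# Safe style: a parameterized statement (the value never touches the SQL text).
safe = timed(lambda: conn.execute(
    "SELECT name FROM users WHERE id = ?", (42,)).fetchone())

print(f"concatenated: {unsafe*1e6:.1f} us, parameterized: {safe*1e6:.1f} us")
```

Both queries return the same row; only the timing (and the injection surface) differs.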
Alese Boniface Kayode, Alowolodu Olufunso Dayo, Adekunle Adewale Uthman
Communications and Network, Volume 13, pp 68-78; doi:10.4236/cn.2021.132006

Abstract:
With the continuous use of cloud and distributed computing, the threats associated with data and information technology (IT) in such environments have also increased. Thus, data security and data-leakage prevention have become important in a distributed environment. In this respect, mobile agent-based systems are among the latest mechanisms for identifying and preventing the intrusion and leakage of data across the network. To tackle one or more of the several challenges in mobile agent-based information leakage prevention, this paper aims to provide a comprehensive, detailed, and systematic study of the distribution model for mobile agent-based information leakage prevention. The paper reviews selected journal papers published between 2009 and 2019. A critical review of distributed mobile agent-based intrusion detection systems is presented in terms of their design analysis, techniques, and shortcomings. Initially, eighty-five papers were identified, but the selection process reduced their number to thirteen important reviews.
Fontaine Rafamantanantsoa, Razafindramonja Clément Aubert, Rabetafika Louis Haja
Communications and Network, Volume 13, pp 25-35; doi:10.4236/cn.2021.131003

Abstract:
Network connectivity is becoming increasingly complex, and the integration of ever more services continues to appear. These services need throughput guarantees, and the choice of platform potentially affects the performance they receive. The basic aim of this research is therefore to determine which of the Linux and FreeBSD operating systems delivers the best MPLS network performance. In this paper we study MPLS network performance and identify the relevant performance metrics. To evaluate performance, we used both operating systems' implementations of the MPLS architecture in order to determine which of the two performs best in this domain. We used Scapy to measure response times while varying the size of the packets sent, and validated the measurements with MATLAB Simulink. Our experiments show that the FreeBSD operating system is more reliable than Linux as the basis of an MPLS network.
Nafisa Islam, Warsame H. Ali, Emmanuel S Kolawole, John Fuller, Pamela Obiomon, John O. Attia, Samir Abood
Communications and Network, Volume 13, pp 51-67; doi:10.4236/cn.2021.132005

Abstract:
In recent times, energy production from renewable sources has become an alternative way to meet increasing energy demands. The rising demand places growing pressure on conventional energy resources and is leading to their depletion. Moreover, the cost of power generation from coal-fired plants is higher than that from renewable energy sources. This experiment focuses on cost optimization of power generation combining a pumped-storage power plant and a wind power plant. The modeling of the cost optimization was conducted in two parts. The mathematical modeling was done in a MATLAB simulation, while the hydro and wind power plants were emulated in a SCADA (Supervisory Control and Data Acquisition) designer implementation. The experiment was conducted over a range of generated power from both sources. First, the optimum combination of output power and cost from both generators was determined via MATLAB simulation within the assumed output power range. Second, the hydro and wind generators were emulated individually, synchronizing with the grid to determine each generator's specification using the SCADA designer; this provided the optimum power generation from both generators at their specific speeds, in line with the MATLAB results. Finally, the operational power cost (losses not considered) from MATLAB was compared with the local energy provider's to determine cost efficiency. The experiment provides operational cost optimization of a combined hydro-wind power system with stable wind power generation using SCADA, which will ultimately assist in operating large-scale power systems remotely, minimizing multi-area dynamic issues while maximizing system efficiency.
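The kind of dispatch optimization performed in the MATLAB step can be sketched as a grid search over the demand split. The cost curves and capacities below are invented for illustration and are not the paper's parameters.

```python
# Hypothetical sketch: split a fixed demand between a pumped-storage hydro unit
# and a wind unit so that total generation cost is minimized.
def cost_hydro(p):          # $/h for p MW from pumped storage (assumed quadratic)
    return 5.0 * p + 0.02 * p * p

def cost_wind(p):           # $/h for p MW from wind (assumed near-linear)
    return 3.0 * p + 0.01 * p * p

def optimal_split(demand, wind_cap=80.0, hydro_cap=120.0, step=0.5):
    best = None
    p = 0.0
    while p <= min(wind_cap, demand):
        hydro_p = demand - p
        if hydro_p <= hydro_cap:
            total = cost_wind(p) + cost_hydro(hydro_p)
            if best is None or total < best[0]:
                best = (total, p, hydro_p)
        p += step
    return best             # (cost $/h, wind MW, hydro MW)

cost, wind, hydro = optimal_split(100.0)
print(f"wind {wind} MW + hydro {hydro} MW -> {cost:.1f} $/h")
```

With these assumed curves, wind is the cheaper unit throughout its range, so the search fills wind to its 80 MW cap and covers the remaining 20 MW with hydro.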
Afsane Zahmatkesh, Chung-Horng Lung
Communications and Network, Volume 12, pp 199-219; doi:10.4236/cn.2020.124010

Abstract:
Solving the controller placement problem (CPP) in an SDN architecture with multiple controllers has a significant impact on control overhead in the network, especially in multihop wireless networks (MWNs). The generated control overhead consists of controller-device and inter-controller communications to discover the network topology, exchange configurations, and set up and modify flow tables in the control plane. However, due to the high complexity of the proposed optimization model for the CPP, heuristic algorithms have been reported to find near-optimal solutions faster for large-scale wired networks. In this paper, the objective is to extend those existing heuristic algorithms to solve a proposed optimization model for the CPP in software-defined multihop wireless networking (SDMWN). Our results demonstrate that, using ranking degrees assigned to the possible controller placements (either the average distance to other devices or the connectivity degree of each placement), the extended heuristic algorithms are able to achieve the optimal solution in small-scale networks in terms of the generated control overhead and the number of controllers selected in the network. As a result, using the extended heuristic algorithms, the average number of hops between devices and their assigned controllers, as well as among controllers, is reduced. Moreover, these algorithms are able to lower the control overhead in large-scale networks and select fewer controllers compared to an extended algorithm that solves the CPP in SDMWN using a randomly selected controller placement approach.
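The "average distance to other devices" ranking degree mentioned above can be illustrated in a few lines. The sketch below is a minimal assumed version, not the paper's algorithm: it scores every node by its mean hop distance to the rest of the topology and picks the k best-ranked nodes as controllers. The topology is a toy example.

```python
from collections import deque

def bfs_hops(adj, src):
    # Hop distances from src in an unweighted multihop topology.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def place_controllers(adj, k):
    scores = []
    for node in adj:
        d = bfs_hops(adj, node)
        avg = sum(d.values()) / (len(d) - 1)   # mean hops to the other devices
        scores.append((avg, node))
    scores.sort()                              # lower average distance ranks first
    return [node for _, node in scores[:k]]

# Toy 5-node path 0-1-2-3-4: node 2 sits in the middle.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(place_controllers(adj, 1))   # → [2]
```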
David de la Bastida, Fuchun Joseph Lin
Communications and Network, Volume 12, pp 122-154; doi:10.4236/cn.2020.123007

Abstract:
With ever-increasing applications of IoT, and due to the heterogeneous and bursty nature of these applications, scalability has become an important research issue in building cloud-based IoT/M2M systems. This research proposes a dynamic SDN-based network slicing mechanism to tackle the scalability problems caused by such heterogeneity and fluctuation of IoT application requirements. The proposed method can automatically create a network slice on-the-fly for each new type of IoT application and adjust the QoS characteristics of the slice dynamically according to the changing requirements of an IoT application. Validated with extensive experiments, the proposed mechanism demonstrates better platform scalability when compared to a static slicing system.
Thomas Kunz, Silas Echegini, Babak Esfandiari
Communications and Network, Volume 12, pp 99-121; doi:10.4236/cn.2020.123006

Abstract:
We present an effective routing solution for the backbone of hierarchical MANETs. Our solution leverages the storage and retrieval mechanisms of a Distributed Hash Table (DHT) common to many (structured) P2P overlays. The DHT provides routing information in a decentralized fashion, while supporting different forms of node and network mobility. We split a flat network into clusters, each having a gateway that participates in a DHT overlay. These gateways interconnect the clusters in a backbone network. Two routing approaches for the backbone are explored: flooding and a new solution exploiting the storage and retrieval capabilities of a DHT-based P2P overlay. We implement both approaches in a network simulator and thoroughly evaluate the performance of the proposed scheme using a range of static and mobile scenarios, comparing our solution against flooding. The simulation results show that our solution, even in the presence of mobility, achieves success rates well above 90% and maintains very low and constant round-trip times, unlike the flooding approach. In fact, the performance of the proposed inter-cluster routing solution is, in many cases, comparable to that of intra-cluster routing. The advantage of our approach over flooding increases as the number of clusters increases, demonstrating its superior scalability.
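The DHT put/get idea can be sketched with a toy hash ring. This is an assumed illustration of storing "cluster → gateway" routing entries in a DHT, not the paper's protocol; the gateway names and ring scheme are invented.

```python
import hashlib, bisect

def h(key):
    # Position on the hash ring (SHA-1 of the key as an integer).
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ToyDHT:
    def __init__(self, gateways):
        # Each gateway owns the keys whose hash falls just before its own.
        self.ring = sorted((h(g), g) for g in gateways)
        self.store = {g: {} for g in gateways}

    def _owner(self, key):
        i = bisect.bisect_left(self.ring, (h(key),)) % len(self.ring)
        return self.ring[i][1]

    def put(self, key, value):
        self.store[self._owner(key)][key] = value

    def get(self, key):
        return self.store[self._owner(key)].get(key)

dht = ToyDHT(["gw-A", "gw-B", "gw-C"])
dht.put("cluster-7", "gw-B")     # gateway gw-B announces it serves cluster 7
print(dht.get("cluster-7"))      # → gw-B
```

A lookup contacts only the node that owns the key's hash, which is what lets the backbone avoid flooding.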
Sarabjeet Singh, Marc St-Hilaire
Communications and Network, Volume 12, pp 74-97; doi:10.4236/cn.2020.122005

Abstract:
In a cloud computing environment, users on the pay-as-you-go billing model can relinquish their services at any point in time and pay accordingly. From the perspective of Cloud Service Providers (CSPs), this is not beneficial, as they may lose the opportunity to earn from the relinquished resources. Therefore, this paper tackles the resource assignment problem while considering users' relinquishment and its impact on the net profit of CSPs. As a solution, we first compare different ways to predict user behavior (i.e., how likely a user is to leave the system before its scheduled end time) and derive a better prediction technique based on linear regression. Then, based on the RACE (Relinquishment-Aware Cloud Economics) model proposed in [1], we develop a relinquishment-aware resource optimization model to estimate the amount of resources to assign on the basis of predicted user behavior. Simulations performed with CloudSim show that cloud service providers can gain more by estimating the amount of resources with better prediction techniques rather than blindly assigning resources to users. They also show that the proposed prediction-based resource assignment scheme typically generates more profit for a lower or similar utilization.
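The linear-regression prediction step can be sketched with ordinary least squares on one variable. The training pairs below (requested hours versus hours actually used before relinquishment) are invented for illustration and are not the paper's data.

```python
# Fit y = a + b*x by least squares and predict usage of a new request.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b                 # intercept, slope

hours_requested = [10, 20, 30, 40]
hours_used      = [ 9, 16, 21, 24]       # users tend to leave early on long leases
a, b = fit_line(hours_requested, hours_used)

predicted_use = a + b * 50               # expected usage of a 50-hour request
print(round(predicted_use, 1))           # → 30.0
```

A provider could then provision for the predicted 30 hours instead of the requested 50, which is the spirit of the relinquishment-aware assignment described above.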
Mohammed Banu Ali, Trevor Wood-Harper, Abdullah Sultan Al-Qahtani, Abubakar Mohamed Ali Albakri
Communications and Network, Volume 12, pp 41-60; doi:10.4236/cn.2020.122003

Abstract:
Although there have been remarkable technological developments in healthcare, the privacy and security of mobile health (mHealth) systems still raise many concerns, with considerable consequences for patients using these technologies. For instance, potential security and privacy threats in wireless devices, such as Wi-Fi- and Bluetooth-connected devices in a patient hub at the application, middleware, and sensory layers, may result in the disclosure of private and sensitive data. This paper explores the security and privacy of the patient hub, including patient applications and their connections to sensors and cloud technology. Addressing the privacy and security concerns of the patient hub called for a comprehensive risk assessment using the OCTAVE risk assessment framework. Findings reveal that the highest risk concerned data exposure at the sensory layer. Of the countermeasures presented in this paper, most serve as a means to identify risks early rather than to mitigate them. The findings can serve to inform users of the potential vulnerabilities in the patient hub before they arise.
Gefei Zhu
Communications and Network, Volume 12, pp 174-198; doi:10.4236/cn.2020.124009

Abstract:
Building an automatic seizure onset prediction model based on multi-channel electroencephalography (EEG) signals has long been a hot topic in computer science and neuroscience. In this research, we collect EEG data from different epilepsy patients and EEG devices, then reconstruct and combine the EEG signals using an innovative electric field encephalography (EFEG) method, which establishes a virtual electric field vector, enabling extraction of electric field components and increasing detection accuracy compared to the conventional method. We extract a number of important features from the reconstructed signals and pass them through an ensemble model based on support vector machine (SVM), random forest (RF), and deep neural network (DNN) classifiers. By applying this EFEG channel combination method, we achieve a detection accuracy of up to 87%, which is 6% to 17% higher than the conventional channel-averaging combination method. Meanwhile, to reduce the potential overfitting caused by DNN models on a small dataset with limited training patients, we ensemble the DNN model with two "weaker" classifiers to ensure the best performance when transferring the model to different patients. Based on these methods, we achieve a detection accuracy of up to 82% on a new patient using a different EEG device. Thus, we believe our method has good potential to be applied on different commercial and clinical devices.
Khalid S. Aloufi, Omar H. Alhazmi
Communications and Network, Volume 12, pp 155-173; doi:10.4236/cn.2020.124008

Abstract:
IoT applications are promising for future daily activities; therefore, the number of IoT-connected devices is expected to reach billions in the coming few years. However, IoT has different application frameworks, and IoT applications require higher security standards. In this work, an IoT application framework with a security-embedded structure is presented, using the integration of message queue telemetry transport (MQTT) and user-managed access (UMA). A performance analysis of the model is presented. Comparing the model with existing models and different design structures shows that the model presented in this work is promising for a functioning IoT design model with security. Security is a built-in feature of the model's structure. The model is built on recommended frameworks; therefore, it is ready for integration with other web standards for data sharing, which will help integrate IoT applications from different developing parties.
Khaled Alghamdi
Communications and Network, Volume 12, pp 28-40; doi:10.4236/cn.2020.121002

Abstract:
The world is moving at high speed in the implementation of new systems and gadgets. Wireless network communications are currently supported by 3G and 4G networks. However, these networks are deemed slow and can fail to deliver signals or data transmission to various regions. This paper analyzes the use of Software-Defined Networking (SDN) in a 5G (fifth generation) network, which can be faster and more reliable. Further, in Mobile IP there exist triangulation problems between the sending and receiving nodes, along with latency issues during handoff for mobile nodes, which place a huge burden on the network. With cloud computing and the virtualization ecosystem developed for the core and radio networks, SDN OpenFlow appears to be a seamless solution for determining signal flow between mobiles. A lot of research is ongoing on deploying SDN OpenFlow in the 5G cellular network. The current paper performs benchmarks as a feasibility study for implementing SDN OpenFlow in a 5G cellular network. The handoff mechanism impacts the scalability required of a cellular network, and the simulation results can further be used when deploying the 5G network.
Gloria A. Chukwudebe, Emmanuel N. Ekwonwune, Florence O. Elei
Communications and Network, Volume 12, pp 61-73; doi:10.4236/cn.2020.122004

Abstract:
Performance evaluation is essential to maintaining the Quality of Service (QoS) of Wideband Code Division Multiple Access (WCDMA). This work was motivated by the reception of poor signals and increased call drop and failure rates, i.e., poor QoS. The aim is to survey WCDMA services in the Owerri environs and to establish whether there is degradation, and to what level, in the network. The methodology involved an empirical analysis through a drive test across Owerri City in Imo State. The work adopted the empirical approach and derived standard performance metrics such as call drop rate, failure rate, call success rate, call completion rate, handover success rate, and handover failure rate, compared against the expected KPI (key performance indicator) thresholds. From the assessment, it was found that only one of the four networks ("GLO") met the target Call Drop Rate (CDR), Call Completion Success Rate (CCSR), Call Setup Success Rate (CSSR), and Call Blocked Rate (CBR), and that handover was better on "GLO" and 9mobile than on "MTN" and Airtel.
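The KPIs named above are simple ratios over drive-test counts. The sketch below uses invented counts and assumed thresholds (typical regulator-style targets, not the paper's) purely to show how each metric is computed.

```python
# Invented drive-test counts for one network.
attempts, setups, completions, drops = 500, 480, 460, 20
ho_attempts, ho_success = 120, 114

cssr = setups / attempts * 100            # Call Setup Success Rate
ccsr = completions / setups * 100         # Call Completion Success Rate
cdr  = drops / setups * 100               # Call Drop Rate
hosr = ho_success / ho_attempts * 100     # Handover Success Rate

# Assumed example thresholds; a drop rate must stay *below* its target.
for name, value, target in [("CSSR", cssr, 98.0), ("CCSR", ccsr, 95.0),
                            ("CDR", cdr, 2.0), ("HOSR", hosr, 98.0)]:
    ok = value <= target if name == "CDR" else value >= target
    print(f"{name}: {value:.2f}% ({'meets' if ok else 'misses'} target)")
```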
Lama Y. Hosni, Ahmed Y. Farid, Abdelrahman A. Elsaadany, Mahammad A. Safwat
Communications and Network, Volume 12, pp 1-27; doi:10.4236/cn.2020.121001

Abstract:
The fifth generation (5G) New Radio (NR) has been developed to provide significant improvements in scalability, flexibility, and efficiency in terms of both power usage and spectrum. To meet the 5G vision and its service and performance requirements, various candidate technologies have been proposed for 5G New Radio; some are extensions of 4G, and some were developed explicitly for 5G. These candidate technologies include non-orthogonal multiple access (NOMA) and low-density parity-check (LDPC) channel coding, in addition to deploying software-defined radio (SDR) instead of traditional hardware modules. In this paper we build an open-source SDR-based platform to realize the transceiver of the physical downlink shared channel (PDSCH) of 5G NR according to the Third Generation Partnership Project (3GPP) standard. We provide a prototype for pairing two 5G users using the NOMA technique. In addition, a suitable design for LDPC channel coding is developed. The intermediate stages of segmentation, rate matching, and interleaving are also carried out in order to realize a standard NR frame. Finally, experiments are carried out in both simulation and real-time scenarios on the designed 5G NR to evaluate system performance and to demonstrate its potential in meeting future 5G mobile network challenges.
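The power-domain NOMA pairing mentioned above can be illustrated with a noiseless toy model: two users' BPSK symbols are superposed with unequal power, the far user decodes directly, and the near user cancels the far user's signal first. This is an assumed textbook-style sketch, not the paper's SDR implementation; the power split is invented.

```python
import math

p_far, p_near = 0.8, 0.2          # assumed power allocation for the paired users

def superpose(bit_far, bit_near):
    # Superpose two BPSK symbols in the power domain.
    s = lambda bit: 1.0 if bit else -1.0
    return math.sqrt(p_far) * s(bit_far) + math.sqrt(p_near) * s(bit_near)

def decode(rx):
    bit_far = rx > 0                                   # far user: treat near as noise
    rx_clean = rx - math.sqrt(p_far) * (1.0 if bit_far else -1.0)
    bit_near = rx_clean > 0                            # near user: decode after SIC
    return bit_far, bit_near

bits = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(all(decode(superpose(f, n)) == (bool(f), bool(n)) for f, n in bits))  # → True
```

With no channel noise, successive interference cancellation recovers both users' bits for every input combination.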
Lin Liao, Zhen Jia, Yang Deng
Communications and Network, Volume 11, pp 21-34; doi:10.4236/cn.2019.111003

Abstract:
With the rapid development of big data, the scale of realistic networks is continually increasing. In order to reduce the network scale, coarse-graining methods have been proposed to transform large-scale networks into mesoscale networks. In this paper, a new coarse-graining method based on hierarchical clustering (HCCG) on complex networks is proposed. The network nodes are grouped using hierarchical clustering, and the coarse-grained network is then extracted by updating the weights of the edges between clusters. A large number of simulation experiments on several typical complex networks show that the HCCG method can effectively reduce the network scale while maintaining the synchronizability of the original network well. Furthermore, the method is more suitable for networks with an obvious clustering structure, and the size of the coarse-grained network can be chosen freely.
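The edge-weight update step described above can be sketched directly: collapse each cluster into one node and sum the weights of the edges that cross cluster boundaries. The graph and clustering below are toy inputs (the clustering itself is assumed already computed), not the paper's networks.

```python
from collections import defaultdict

def coarse_grain(edges, cluster_of):
    # Merge nodes cluster-by-cluster; intra-cluster edges disappear,
    # inter-cluster edge weights are accumulated.
    merged = defaultdict(float)
    for u, v, w in edges:
        cu, cv = cluster_of[u], cluster_of[v]
        if cu != cv:
            merged[tuple(sorted((cu, cv)))] += w
    return dict(merged)

edges = [("a", "b", 1.0), ("b", "c", 2.0), ("c", "d", 1.5), ("a", "d", 0.5)]
cluster_of = {"a": "C1", "b": "C1", "c": "C2", "d": "C2"}
print(coarse_grain(edges, cluster_of))    # → {('C1', 'C2'): 2.5}
```

Only the two boundary-crossing edges (b-c and a-d) survive, merged into a single weighted edge between the cluster nodes.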
Plouton V. Grammatikos, Panayotis G. Cottis
Communications and Network, Volume 11, pp 65-81; doi:10.4236/cn.2019.113006

Abstract:
This paper explores the exploitation of Mobile/Multi-access Edge Computing (MEC) for Vehicle-to-Everything (V2X) communications. Certain V2X applications that aim at improving road safety require reliable and low latency message delivery. As the number of connected vehicles increases, these requirements cannot be satisfied by technologies relying on the IEEE 802.11p standard. Therefore, the exploitation of the 4th generation Long Term Evolution (LTE) mobile networks has been considered. However, despite their widespread use, LTE systems are characterized by high end-to-end latency since the messages have to traverse the core network. MEC addresses this problem by offering computing, storage and network resources at the edge of the network closer to the end-users. This paper aims at investigating the benefits MEC may offer toward implementing V2X communications. In this framework, simulation scenarios were examined concerning various V2X use cases implemented employing either LTE or MEC. The simulation results indicate a clear superiority of MEC over LTE, especially in the case of delivering critical data.
Nguyen Cong Dinh
Communications and Network, Volume 11, pp 52-63; doi:10.4236/cn.2019.112005

Abstract:
Control systems are moving from wired to wireless communications because of the flexibility, mobility, and extensibility of wireless communication systems; however, the reliability of wireless communications remains in question. In this paper, we propose a cooperative communication scheme for wireless control systems consisting of a controller and multiple machines that work cooperatively, in a group, on the same duty. In the proposed method, the controller can communicate with machines directly or via other machines, whereas in the conventional method the controller only communicates with machines directly. A simple 2-link arm plant is used to evaluate the proposed system, and the simulation results indicate that the proposed method is more accurate and more stable than the conventional method.
Yang Deng, Zhen Jia, Lin Liao
Communications and Network, Volume 11, pp 35-51; doi:10.4236/cn.2019.112004

Abstract:
Multilayer networks are a frontier direction of network science research. In this paper, the cluster ring network is extended to a two-layer network model in which the inner structures of the cluster blocks are random, small-world, or scale-free. We study the influence of network scale, interlayer linking weight, and interlayer linking fraction on synchronizability. It is found that the synchronizability of the two-layer cluster ring network decreases as the network size increases. The interlayer linking weight has an optimum value at which the synchronizability of the network is best. When the interlayer linking weight and the interlayer linking fraction are very small, changes in them will affect the synchronizability.
Takuma Jogan, Tomofumi Matsuzawa, Masayuki Takeda
Communications and Network, Volume 11, pp 1-10; doi:10.4236/cn.2019.111001

Abstract:
In recent years, opportunities for using cloud services as computing resources have increased, and there is concern that private information may be leaked when such services process data. Processing data while maintaining confidentiality is called secret computation. Homomorphic cryptosystems can add and multiply plaintexts through the manipulation of their ciphertexts, but most of them restrict the number of multiplications that can be performed. Among the different types of cryptosystems, fully homomorphic encryption can perform arbitrary homomorphic addition and multiplication, but it takes a long time to eliminate the limitation on the number of homomorphic operations and to carry out homomorphic multiplication. Therefore, in this paper, we propose an arithmetic processing method that can perform an arbitrary number of homomorphic addition and multiplication operations based on the ElGamal cryptosystem. Experiments comparing the proposed method with HElib, an implementation of the BGV fully homomorphic encryption scheme, showed that, although the processing time for homomorphic addition per ciphertext increased by about 35%, the processing time for homomorphic multiplication was reduced to about 1.8%, and the processing time to calculate a statistic (variance) was reduced by approximately 15%.
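The multiplicative homomorphism of ElGamal that such a scheme builds on fits in a few lines: multiplying two ciphertexts component-wise yields an encryption of the product of the plaintexts. The sketch below uses tiny, insecure toy parameters and is not the paper's full addition-and-multiplication protocol.

```python
import random

p, g = 2039, 7                     # small prime and base (demo only, not secure)
x = random.randrange(2, p - 2)     # secret key
y = pow(g, x, p)                   # public key

def enc(m):
    r = random.randrange(2, p - 2)
    return pow(g, r, p), (m * pow(y, r, p)) % p

def dec(c):
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c2 / c1^x mod p

# Component-wise product of two ciphertexts encrypts the product of plaintexts.
c_prod = tuple((a * b) % p for a, b in zip(enc(12), enc(34)))
print(dec(c_prod), (12 * 34) % p)  # both equal 408
```

Plain ElGamal gives only this multiplicative property; supporting homomorphic addition as well is exactly what the proposed method adds on top.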
Jaya V. Gaitonde, Rajesh B. Lohani
Communications and Network, Volume 11, pp 83-117; doi:10.4236/cn.2019.114007

Abstract:
The ultraviolet (UV) photoresponses of Wurtzite GaN-, ZnO-, and 6H-SiC-based Optical Field Effect Transistor (OPFET) detectors are estimated with an in-depth analysis considering the generalized model and the front-illuminated model for high-resolution imaging and UV communication applications. The gate materials considered for the proposed study are gold (Au) and indium tin oxide (ITO) for GaN, Au for SiC, and Au and silver dioxide (AgO2) for ZnO. The results indicate significant improvement in the Linear Dynamic Range (LDR) over the previously investigated GaN OPFET (buried-gate, front-illuminated, and generalized) models with an Au gate. The generalized model has a superior dynamic range to the front-illuminated model. In terms of responsivity, all the models, including the buried-gate OPFET, exhibit high and comparable photoresponses. Buried-gate devices, on the whole, exhibit a faster response than the surface-gate models, except for the AgO2-ZnO generalized OPFET model, whose switching time is the lowest. The generalized model enables faster switching than the front-illuminated model. The switching times in all cases are of the order of nanoseconds to picoseconds. The SiC generalized OPFET model shows the highest 3-dB bandwidths of 11.88 GHz, 36.2 GHz, and 364 GHz, and modest unity-gain cut-off frequencies of 4.62 GHz, 8.71 GHz, and 5.71 GHz at optical power densities of 0.575 μW/cm2, 0.575 mW/cm2, and 0.575 W/cm2, respectively. These are, overall, the highest detection-cum-amplification bandwidths among all the investigated devices. The same device exhibits the highest LDR of 73.3 dB. The device performance is superior to that of most other existing detectors, with comparable LDR, making it a high-performance photodetector for imaging and communication applications. All the detectors show considerably high detectivities owing to their high responsivity values. The results have been analyzed in terms of the photovoltaic, photoconductive, and series-resistance effects and will aid further research. The results are in line with experiments and commercially available software simulations. The devices will greatly contribute to single-photon counting, high-resolution imaging, and UV communication applications.
Wafa Alsharafat
Communications and Network, Volume 11, pp 11-20; doi:10.4236/cn.2019.111002

Abstract:
Due to the ever-growing number of cyber attacks, especially on online systems, the development and operation of adaptive Intrusion Detection Systems (IDSs) is badly needed to protect these systems. It remains a goal of paramount importance to achieve and a serious challenge to address. Different selection methods have been developed and implemented in Genetic Algorithms (GAs) to enhance the detection rate of IDSs. In this respect, the present study employed the eXtended Classifier System (XCS) for the detection of intrusions by matching the incoming environmental message (packet) against a classifier pool to determine whether the incoming message is a normal request or an intrusion. Fuzzy Clustering by Local Approximation of Membership (FLAME) represents the new selection method used in the GA. In this study, a Genetic Algorithm with FLAME selection (FGA) was used as a production engine for the XCS. For comparison purposes, different selection methods were compared with FLAME selection, and all experiments and evaluations were performed using the KDD'99 dataset.
Chengwen Jiao, Qi Feng, Weichun Bu
Communications and Network, Volume 10, pp 1-10; doi:10.4236/cn.2018.101001

Abstract:
In this paper, we mainly consider the complexity of the congestion-minimizing k-splittable flow problem and give several complexity results. For the k-splittable flow problem, deciding the existence of a feasible solution is strongly NP-hard. When the number of source nodes is part of the input, for the uniformly exactly k-splittable flow problem, obtaining an approximation algorithm with a performance ratio better than (√5+1)/2 is NP-hard. When k is part of the input, for the single-commodity k-splittable flow problem, obtaining an algorithm with a performance ratio better than is NP-hard. In the last part of the paper, we study the relationship between minimizing congestion and minimizing the number of rounds in the k-splittable flow problem: the smaller the congestion, the smaller the number of rounds.
Zijiang Zhu
Communications and Network, Volume 10, pp 105-116; doi:10.4236/cn.2018.103009

Abstract:
The collaborative filtering algorithm is the most widely used recommendation algorithm in major e-commerce recommendation systems today. Addressing problems of traditional collaborative filtering algorithms such as poor adaptability and cold start, this paper proposes improvements and constructs a hybrid collaborative filtering algorithm model with good scalability. The paper also optimizes the process based on the parameter selection of a genetic algorithm and provides pseudocode as a reference, so as to offer new ideas and methods for the study of parameter combination optimization in hybrid collaborative filtering algorithms.
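As a rough illustration of the idea (not the authors' actual model), a genetic algorithm can tune the blend weight of a hybrid recommender. The toy rating matrix, the choice of user-mean and item-mean component predictors, and all GA parameters below are assumptions for the sketch:

```python
import random

# Toy ratings: rows = users, cols = items; 0 = unknown (hypothetical data).
R = [
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
]

def mean_rating(user_row):
    vals = [r for r in user_row if r > 0]
    return sum(vals) / len(vals)

def predict(alpha):
    # Hybrid prediction: blend a user-mean and an item-mean predictor.
    preds = {}
    for u, row in enumerate(R):
        for i, r in enumerate(row):
            if r > 0:
                item_vals = [R[v][i] for v in range(len(R)) if R[v][i] > 0]
                item_mean = sum(item_vals) / len(item_vals)
                preds[(u, i)] = alpha * mean_rating(row) + (1 - alpha) * item_mean
    return preds

def rmse(alpha):
    preds = predict(alpha)
    se = [(R[u][i] - p) ** 2 for (u, i), p in preds.items()]
    return (sum(se) / len(se)) ** 0.5

def ga_optimize(generations=30, pop_size=10, seed=1):
    # Minimal GA over the single parameter alpha in [0, 1].
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=rmse)
        parents = pop[: pop_size // 2]            # selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                    # crossover
            child += rng.gauss(0, 0.05)            # mutation
            children.append(min(1.0, max(0.0, child)))
        pop = parents + children
    return min(pop, key=rmse)

best = ga_optimize()
print(round(best, 3), round(rmse(best), 3))
```

A real hybrid model would combine learned similarity-based predictors and encode several parameters per chromosome; the one-dimensional search here only shows the selection/crossover/mutation loop.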
Emmanuel N. Ekwonwune, Chukwuma D. Anyiam, Oliver E. Osuagwu
Communications and Network, Volume 10, pp 117-125; doi:10.4236/cn.2018.103010

Abstract:
This research work aims at modelling a framework for private cloud infrastructure deployment for Information and Communication Technology (ICT) centres in tertiary institutions in Nigeria. Recent research has indicated that cloud computing will become mainstream in computing technology and very effective for businesses. All tertiary institutions have ICT units, generally charged with the responsibility of deploying ICT infrastructure and services for administration, teaching, research and learning in the institution at large. The Structured System Analysis and Design Methodology (SSADM) is used in this research, and a six-step framework for a cost-effective and scalable private cloud infrastructure using server virtualization is presented as an alternative that can guarantee total and independent control of data flow in the institutions, while ensuring adequate security of vital information.
Lei Wang, Huayang Feng, Li Lin, Li Du
Communications and Network, Volume 10, pp 65-77; doi:10.4236/cn.2018.103006

Abstract:
Designing an excellent original topology not only improves the accuracy of routing, but also improves the failure-restoration rate. In this paper, we propose a new heuristic topology generation algorithm, GA-PODCC (Genetic Algorithm based on the Pareto Optimality of Delay, Configuration and Consumption), which utilizes a genetic algorithm to optimize the link delay and resource configuration/consumption. The novelty lies in designing a genetic operation with two stages: the first stage picks the best population by means of the crossover, mutation, and selection operations; the second stage selects an excellent individual from the best population. The simulation results show that, using the same number of nodes, the GA-PODCC algorithm improves the balance among all three optimization objectives, maintaining a low level of distortion in topology aggregation.
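The Pareto-optimality test underlying such a multi-objective selection can be sketched as follows. The reduction to two objectives and the candidate values are illustrative assumptions, not the paper's actual operators:

```python
def pareto_front(points):
    # Keep candidates not dominated by any other candidate.
    # A point q dominates p when q is no worse in both objectives
    # (lower is better) and q != p. Assumes no duplicate points.
    front = []
    for p in points:
        dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (delay, resource consumption) pairs for candidate topologies.
candidates = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
print(pareto_front(candidates))  # [(1, 5), (2, 2), (5, 1)]
```

In a GA, individuals on the front would be preferred during selection; (3, 3) and (4, 4) drop out because (2, 2) is better in both objectives.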
Bamidele Moses Kuboye
Communications and Network, Volume 10, pp 152-163; doi:10.4236/cn.2018.104013

Abstract:
Long Term Evolution (LTE) is designed to revolutionize mobile broadband technology with key considerations of higher data rates, improved power efficiency, low latency and better quality of service. This work analyzes the impact of resource scheduling algorithms on the performance of LTE (4G) and WCDMA (3G) networks. A full illustration of the LTE system is given together with different scheduling algorithms. Thereafter, 3G WCDMA and 4G LTE networks were simulated using the Simulink simulator embedded in MATLAB, and performance evaluations were carried out. The performance metrics used for the evaluations are average system throughput, packet delay, latency and fairness of allocation, using the Round Robin, Best CQI and Proportional Fair packet scheduling algorithms. The results of the evaluations on both networks were analysed and showed that the 4G LTE network performs better than the 3G WCDMA network under all three scheduling algorithms.
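The three schedulers compared above differ only in how each slot (resource block) is awarded. A minimal sketch, with made-up per-slot CQI values and a simplified one-user-per-slot model:

```python
def round_robin(cqi_history, n_slots):
    # Cycle through users in fixed order, ignoring channel quality.
    n_users = len(cqi_history[0])
    return [t % n_users for t in range(n_slots)]

def best_cqi(cqi_history):
    # Give each slot to the user with the highest instantaneous CQI.
    return [max(range(len(slot)), key=lambda u: slot[u]) for slot in cqi_history]

def proportional_fair(cqi_history, beta=0.9):
    # Give each slot to the user maximizing instantaneous CQI divided by
    # an exponentially smoothed average of what that user has been served.
    n_users = len(cqi_history[0])
    avg = [1e-6] * n_users          # small epsilon avoids division by zero
    alloc = []
    for slot in cqi_history:
        u = max(range(n_users), key=lambda k: slot[k] / avg[k])
        alloc.append(u)
        for k in range(n_users):
            served = slot[k] if k == u else 0.0
            avg[k] = beta * avg[k] + (1 - beta) * served
    return alloc

# Hypothetical per-slot CQI values for 3 users over 6 slots.
cqi = [
    [14, 3, 7],
    [13, 4, 8],
    [12, 5, 9],
    [11, 6, 10],
    [10, 7, 11],
    [9, 8, 12],
]
print(round_robin(cqi, 6))   # [0, 1, 2, 0, 1, 2]
print(best_cqi(cqi))         # [0, 0, 0, 0, 2, 2]
print(proportional_fair(cqi))
```

The trade-off the paper measures is visible even here: Best CQI starves the weak user 1, Round Robin ignores channel state, and Proportional Fair serves every user while still favouring good channels.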
Fontaine Rafamantanantsoa, Maherindefo Laha
Communications and Network, Volume 10, pp 142-151; doi:10.4236/cn.2018.104012

Abstract:
The purpose of this study is to analyze and then model, using neural network models, the performance of the Web server in order to improve it. In our experiments, the parameters taken into account are the number of client instances simultaneously requesting the same Web page containing the same SQL queries, the number of tables queried by the SQL, the number of records to be displayed on the requested Web pages, and the type of database server used. This work demonstrates the influence of these parameters on the results of Web server performance analyses. For the MySQL database server, the mean response time of the Web server tends to become increasingly slow as the number of simultaneous client connections and the number of records to display increase. For the PostgreSQL database server, the mean response time of the Web server does not change much, even as the number of clients and/or the amount of information to be displayed on Web pages increases. Although the mean response time of the Web server is generally a little faster with the MySQL database server, it is more stable with the PostgreSQL database server.
Agbotiname L. Imoize, Taiwo Oyedare, Michael E. Otuokere
Communications and Network, Volume 10, pp 211-229; doi:10.4236/cn.2018.104017

Abstract:
In this paper, we consider a cost-based extension of intrusion detection capability (CID). An objective metric motivated by information theory is presented, and based on this formulation, a package for computing the intrusion detection capability of an intrusion detection system (IDS), given certain input parameters, is developed in Java. In order to determine the expected cost at each IDS operating point, the decision tree method of analysis is employed, and plots of expected cost and intrusion detection capability against false positive rate are generated. The point of intersection between the maximum intrusion detection capability and the expected cost is selected as the optimal operating point. Considering an IDS in the context of its intrinsic ability to detect intrusions at the least expected cost, findings revealed that this optimal operating point is the most suitable for the given IDS. The cost-based extension is used to select the optimal operating point, calculate the expected cost, and compare two actual intrusion detectors. The proposed cost-based extension of intrusion detection capability will be very useful to information technology (IT) and telecommunication firms and financial institutions for making proper decisions in evaluating the suitability of an IDS for a specific operational environment.
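The intrusion detection capability metric is usually defined as C_ID = I(X;Y)/H(X), the mutual information between the intrusion indicator X and the IDS alert Y, normalized by the entropy of X. A sketch of that metric plus a simple expected-cost comparison follows; the ROC points, base rate, and cost weights are made-up inputs, and the cost form is an assumption rather than the paper's exact model:

```python
from math import log2

def H(p):
    # Binary entropy in bits.
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def cid(b, tpr, fpr):
    # Intrusion detection capability: I(X;Y) / H(X), where X is the
    # intrusion indicator (P[X=1] = b) and Y the IDS alert.
    p_alert = b * tpr + (1 - b) * fpr
    h_y_given_x = b * H(tpr) + (1 - b) * H(fpr)
    return (H(p_alert) - h_y_given_x) / H(b)     # I(X;Y) = H(Y) - H(Y|X)

def expected_cost(b, tpr, fpr, c_fn=10.0, c_fp=1.0):
    # Cost of missed intrusions plus cost of false alarms (hypothetical costs).
    return b * (1 - tpr) * c_fn + (1 - b) * fpr * c_fp

# Hypothetical ROC operating points (tpr, fpr) of an IDS; base rate b = 1%.
roc = [(0.5, 0.001), (0.8, 0.01), (0.95, 0.05), (0.99, 0.2)]
b = 0.01
best_by_cid = max(roc, key=lambda p: cid(b, *p))
best_by_cost = min(roc, key=lambda p: expected_cost(b, *p))
print(best_by_cid, best_by_cost)
```

Sweeping the operating point and plotting cid and expected_cost against the false positive rate reproduces the kind of curves the paper describes.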
Pauline Oghenekaro Adeniran, Uloma Doris Onuoha
Communications and Network, Volume 10, pp 164-179; doi:10.4236/cn.2018.104014

Abstract:
The introduction of new technologies has had a significant influence on teaching, learning and research activities in universities. This has offered university libraries opportunities to provide information resources in a variety of formats. This study investigated the influence of information literacy skills on postgraduate students’ use of electronic resources in private university libraries in Nigeria. The study adopted the survey research design. The study population comprised 2805 postgraduate students in five private universities offering postgraduate programmes in South-West, Nigeria. Multistage sampling technique was used in the selection process. A purposive selection of four faculties from each of the five universities was carried out. Proportionate sampling technique was used to select the sample size of 550 postgraduate students as the respondents for the study. Findings revealed that there was a significant positive correlation between information literacy skills and use of electronic resources (r = 0.28, p < 0.05). The study concluded that the utilization of electronic resources promoted access to current information among postgraduate students in the selected private universities in South-West, Nigeria. The study recommended that the management of private university libraries should ensure a continuous provision of electronic resources with adequate information communication technology tools to facilitate their use.
Emmanuel Nwabueze Ekwonwune, Nwachukwu Catherine Ada Ngozi, Osuagwu Oliver Eberechi
Communications and Network, Volume 10, pp 43-50; doi:10.4236/cn.2018.103004

Abstract:
Road traffic monitoring involves the collection of data describing the characteristics of vehicles and their movement through road networks. Such data may be used for purposes such as law enforcement, congestion and incident detection, and increasing road capacity. Transportation is a requirement for every nation regardless of its economy, political stability, population size and technological development. The movement of goods and people from one place to another is crucial to maintaining strong economic and political ties among the various components of any given nation, and among nations. There are different modes of transportation, and the most important one to human beings is road transportation. With the growth in transportation, road users encounter problems such as road blockage and incidents; therefore, there is a need to monitor incidents and determine their causes. Road traffic monitoring can be done manually or using ICT devices. This paper focuses on how the use of ICT devices can enhance road traffic monitoring. It traces the brief history of transportation, and discusses road traffic and safety, tools for monitoring road traffic, Intelligent Transportation Systems (ITS) used for traffic monitoring, and their benefits. The result shows that the use of ICT devices in road traffic monitoring should be a Millennium Goal for all developed and developing countries because of its numerous advantages in reducing the intensity of traffic and other road incidents.
Yingying Wang, Lang Zeng
Communications and Network, Volume 10, pp 51-64; doi:10.4236/cn.2018.103005

Abstract:
Coarse graining of complex networks is an important method for studying large-scale complex networks, and is a current focus of network science. This paper develops a new coarse-graining method for complex networks based on a node similarity index. From the node-similarity information structure of the network, the coarse-grained network is extracted by defining local and global similarity indices of nodes. A large number of simulation experiments show that the proposed method can effectively reduce the size of the network while maintaining, to some extent, several statistical properties of the original network. Moreover, the proposed method has low computational complexity and allows people to freely choose the size of the reduced networks.
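The general shape of similarity-based coarse graining can be sketched with a Jaccard neighborhood index standing in for the paper's local/global similarity indices (an assumption for illustration), merging similar nodes into supernodes and projecting the edges:

```python
from itertools import combinations

def jaccard(adj, u, v):
    # Local similarity: overlap of the two nodes' neighborhoods.
    nu, nv = adj[u], adj[v]
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def coarse_grain(edges, threshold=0.5):
    # Merge nodes whose neighborhood similarity meets the threshold,
    # then project the original edges onto the merged supernodes.
    nodes = sorted({n for e in edges for n in e})
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    parent = {n: n for n in nodes}          # union-find over similar pairs
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in combinations(nodes, 2):
        if jaccard(adj, u, v) >= threshold:
            parent[find(u)] = find(v)
    super_edges = {tuple(sorted((find(a), find(b))))
                   for a, b in edges if find(a) != find(b)}
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), []).append(n)
    return list(groups.values()), sorted(super_edges)

# Hypothetical star graph: hub 0 with leaves 1-5 sharing identical neighborhoods.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (5, 0)]
groups, super_edges = coarse_grain(edges, threshold=1.0)
print(groups, super_edges)
```

Here the five leaves collapse into one supernode because their neighborhoods are identical, so the six-node star reduces to two supernodes joined by one edge.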
Emmanuel A. Kondela, Joseph W. Matiko, Julianne S. Otim
Communications and Network, Volume 10, pp 78-92; doi:10.4236/cn.2018.103007

Abstract:
We present a problem for benchmarking the robustness of cellular up-links in an automatic weather station (AWS) testbed. Based on this problem, we conduct a small-scale measurement study of robustness, where the AWS is equipped with four cellular modems for weather data delivery. Assessing the effectiveness of up-links is challenging because of overlapping spatial-temporal factors such as the presence of good reflectors that lead to multi-path effects, interference, and network load. We argue that there is a strong need for independent assessments of their robustness through end-to-end network measurement; however, it is difficult to go from a particular measurement to an assessment of the entire network. We extensively measure the variability of the Received Signal Strength Indicator (RSSI) as a link metric on the cellular modems. The RSSI is one of the important link metrics for determining the robustness of received RF signals, and we explore how the up-links differed from one another at a particular location and instant in time. We also apply a statistical analysis that quantifies the level of stability in terms of short-term variation and distinguishes good up-links from weak ones. The results show that the robustness of cellular up-links lasts for unpredictable periods of time and is lower than one could hope: more than 50% of up-links are intermittent. Therefore, we plan to extend our work by exploring RSSI thresholds, to develop a classification scheme supporting a decision on whether a link is intermittent or not. This will help in normalizing the level of stability, to design an RSSI estimation metric for a robust routing protocol in weather data networks.
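The short-term-variation idea behind the planned classification scheme can be sketched in a few lines; the dBm readings and the variance threshold are hypothetical, not the study's measured values:

```python
from statistics import stdev

def link_stability(rssi_samples, var_threshold=6.0):
    # Classify an up-link by short-term RSSI variation (dB): low standard
    # deviation suggests a stable link, high suggests an intermittent one.
    # The 6.0 dB threshold is an assumed cut-off for illustration.
    return "stable" if stdev(rssi_samples) < var_threshold else "intermittent"

# Hypothetical dBm readings from two cellular modems at the AWS site.
modem_a = [-71, -70, -72, -71, -70, -71]
modem_b = [-65, -90, -70, -95, -60, -88]
print(link_stability(modem_a), link_stability(modem_b))  # stable intermittent
```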
Hani Attar, Mohammad Alhihi, Mohammad Samour, Ahmed A. A. Solyman, Shmatkov Sergiy Igorovich, Kuchuk Nina Georgievna, Fawaz Khalil
Communications and Network, Volume 10, pp 31-42; doi:10.4236/cn.2018.102003

Abstract:
The optimal load distribution over a set of independent paths for Multi-Protocol Label Switching Traffic Engineering (MPLS-TE) networks is an important issue; accordingly, this paper develops a mathematical method with optimal procedures for choosing the shortest paths. As a criterion for choosing the number of paths, a composite criterion is used that takes into account several parameters, such as total path capacity and maximum path delay. A mathematical analysis of the developed method is carried out, and the simulation results show that there are a limited number of most significant routes that can maximize the composite quality-of-service indicator, which depends on the network connectivity and the amount of traffic. The developed technological proposals allow increasing the utilization factor of the network by 20%.
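A composite path criterion of this kind can be sketched by scoring candidate paths on bottleneck capacity against accumulated delay. The linear score, the weights, and the toy topology are assumptions for illustration, not the paper's derived criterion:

```python
def all_simple_paths(graph, src, dst, path=None):
    # Enumerate simple paths in a small graph given as {node: {nbr: (cap, delay)}}.
    path = [src] if path is None else path
    if src == dst:
        yield path
        return
    for nbr in graph[src]:
        if nbr not in path:
            yield from all_simple_paths(graph, nbr, dst, path + [nbr])

def composite_score(graph, path, w_cap=1.0, w_delay=1.0):
    # Hypothetical composite criterion: reward bottleneck capacity,
    # penalize total path delay.
    caps, delay = [], 0.0
    for a, b in zip(path, path[1:]):
        cap, d = graph[a][b]
        caps.append(cap)
        delay += d
    return w_cap * min(caps) - w_delay * delay

def best_paths(graph, src, dst, k=2):
    # Keep the k highest-scoring independent route candidates.
    paths = list(all_simple_paths(graph, src, dst))
    return sorted(paths, key=lambda p: -composite_score(graph, p))[:k]

# Hypothetical MPLS-TE topology: edge -> (capacity in Mbps, delay in ms).
g = {
    "A": {"B": (100, 5), "C": (50, 2)},
    "B": {"D": (100, 5)},
    "C": {"D": (50, 2)},
    "D": {},
}
print(best_paths(g, "A", "D"))
```

With these weights the high-capacity route A-B-D outranks the low-delay route A-C-D; changing w_cap/w_delay shifts the ranking, which is exactly the trade-off a composite criterion encodes.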
Dimitrios Dimopoulos, Vasilis Raptis, Evaggelos C. Karvounis, Pantelis Angelidis
Communications and Network, Volume 10, pp 11-29; doi:10.4236/cn.2018.101002

Abstract:
The necessity of lowering the mean power consumption of various facilities, in view of their enormous future energy needs, has led to ongoing advances in various technologies. These technologies have been oriented towards the concept of a reduced ecological footprint. Massive structures (such as building complexes and hospitals) have been redesigned and upgraded; many interior designs have been dramatically altered, while new electronic devices with impressive signal processing are constantly being produced, all in support of a long-term perspective towards a “Green Planet”. Consequently, a substantial body of technology already exists that needs to be properly combined with a proposed methodology and with new ideas for systems administration through automatic wireless control. This paper intends to reduce the gap between the design and realization of the aforementioned research. Its primary contribution is the proposal of a complete design protocol, with minimized defects, for Reduced Ecological Footprints of Facilities (REFF), along with its beneficial advantages in providing a healthy and productive work environment. This protocol consists of four main parts: 1) the main key points and guidelines, 2) its objectives, 3) the know-how methodology for implementation in existing installations, and 4) a description of the imminent benefits for workforce/human resources.
Dileep Kumar Sajnani, Abdul Rasheed Mahesar, Abdullah Lakhan, Irfan Ali Jamali
Communications and Network, Volume 10, pp 127-141; doi:10.4236/cn.2018.104011

Abstract:
In traditional Mobile Cloud Computing (MCC), the stream of data produced by mobile users (MUs) is uploaded over the Internet to the remote cloud for additional processing. However, the long WAN distance causes high end-to-end latency. With the intention of minimizing the average response time and the key constraint of service delay (network and cloudlet delay) for mobile users who offload their workloads to a geographically distributed cloudlet network, we propose the Multi-layer Latency Aware Workload Assignment Strategy (MLAWAS) to allocate MU workloads to optimal cloudlets. Simulation results demonstrate that MLAWAS achieves the minimum average response time compared with two other existing strategies.
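The core assignment decision can be sketched with a greedy rule: send each workload to the cloudlet whose network delay plus load-dependent processing penalty is currently lowest. The delay model, penalty constant, and all numbers are assumptions for illustration; MLAWAS itself is a multi-layer strategy, not this greedy one:

```python
def assign_workloads(demands, cloudlets):
    # Greedily assign each mobile user's workload to the cloudlet with the
    # lowest total delay: network delay + a utilization-based load penalty.
    # `cloudlets`: {name: {"net_delay": ms, "capacity": workload units}}.
    load = {c: 0.0 for c in cloudlets}
    assignment = {}
    for user, demand in demands.items():
        def delay(c):
            info = cloudlets[c]
            # Penalty grows as the cloudlet fills up (hypothetical model).
            util = (load[c] + demand) / info["capacity"]
            return info["net_delay"] + 100.0 * util
        best = min(cloudlets, key=delay)
        assignment[user] = best
        load[best] += demand
    return assignment

# Hypothetical setup: a near and a far cloudlet, three equal workloads.
cloudlets = {"edge1": {"net_delay": 5.0, "capacity": 10.0},
             "edge2": {"net_delay": 20.0, "capacity": 10.0}}
demands = {"u1": 4.0, "u2": 4.0, "u3": 4.0}
print(assign_workloads(demands, cloudlets))
```

Note how u2 spills over to the farther cloudlet once the near one is loaded: the load penalty is what prevents every user from piling onto the lowest-latency site.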
Fontaine Rafamantanantsoa, Haja Louis Rabetafika
Communications and Network, Volume 10, pp 180-195; doi:10.4236/cn.2018.104015

Abstract:
In recent years, the number of users connected to the Internet has experienced phenomenal growth, and the security of systems and networks has become essential. That is why the performance of the Linux and Berkeley Software Distribution (BSD) firewalls is of paramount importance to the security of systems and networks in all businesses. The following evaluates a firewall benchmarking tool that we developed in Python with Scapy, which measures the service time of packets traversing the firewall under test. Several results are presented: the speed of the firewall under FreeBSD, in terms of service time, compared to the speed of the firewall under Linux as the number of rules increases; and the filtering speed of a stateless firewall, in terms of service time, compared to that of a stateful firewall as the number of rules increases. Then, for simplicity, we present the M/M/1/K queue to model firewall performance. The resulting model was validated using Simulink and the mean squared error. Both the analytical model and the Simulink model of the firewalls are presented in the article.
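The M/M/1/K queue used here has standard closed-form metrics (Poisson arrivals at rate λ, exponential service at rate μ, at most K packets in the system, ρ = λ/μ ≠ 1); a sketch with made-up rates:

```python
def mm1k(lam, mu, K):
    # Standard M/M/1/K formulas (assumes rho != 1).
    rho = lam / mu
    p0 = (1 - rho) / (1 - rho ** (K + 1))          # P[system empty]
    p_block = p0 * rho ** K                         # P[arrival dropped]
    L = rho / (1 - rho) - (K + 1) * rho ** (K + 1) / (1 - rho ** (K + 1))
    lam_eff = lam * (1 - p_block)                   # accepted arrival rate
    T = L / lam_eff                                 # mean response time (Little's law)
    return p_block, L, T

# Hypothetical firewall: 80 pkt/s offered, 100 pkt/s service, buffer of 10.
p_block, L, T = mm1k(lam=80.0, mu=100.0, K=10)
print(round(p_block, 4), round(L, 3), round(T, 4))
```

Fitting lam and mu to the measured service times would let the model's T be compared against the Simulink results, as the paper's mean-squared-error validation does.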
Fontaine Rafamantanantsoa, Paulson Ravomampiandra
Communications and Network, Volume 10, pp 196-210; doi:10.4236/cn.2018.104016

Abstract:
In recent years, web technology has continued to grow and, at the same time, the number of internet users has increased significantly. Today, a Web server is capable of processing millions of requests per day, but during peak periods it may collapse, becoming critical and causing unavailability of the services it offers. That is why Web server performance is a topic of great interest to many researchers. In this paper, we evaluate experimentally the impact of the JSP and PHP dynamic content technologies, with database access, on the performance of the Apache Web server. Using the “ApacheBench” performance measurement tool, the approach is to compare the performance of four different configurations of a Web server: Apache implementing JSP technology with access to a PostgreSQL database, Apache using PHP technology with PostgreSQL as the database, Apache using JSP technology with access to a MySQL database, and finally Apache and PHP with the MySQL DBMS. At the end of this article, we also present a Simulink model of Web server performance based on the simple M/M/1 queue. The MATLAB software was used for the modeling.
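For the simple M/M/1 queue this paper models, the mean response time has a one-line closed form, T = 1/(μ − λ); the request rates below are made-up numbers:

```python
def mm1_response_time(lam, mu):
    # Mean response time of an M/M/1 queue; requires lam < mu for stability.
    assert lam < mu, "utilization must stay below 1"
    return 1.0 / (mu - lam)

# E.g. 200 req/s arriving at a server that completes 250 req/s on average.
print(mm1_response_time(200.0, 250.0))  # 0.02 s = 20 ms
```

As λ approaches μ the response time blows up, which matches the peak-period collapse described above.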
Lang Zeng, Yingying Wang
Communications and Network, Volume 10, pp 93-104; doi:10.4236/cn.2018.103008

Abstract:
Recently, some coarse-graining methods based on network synchronization have been proposed to reduce network size while preserving the synchronizability of the original network. In this paper, we investigate the effects of the coarse-graining process on synchronizability over complex networks with different average path lengths and different degree distributions. A large number of experiments demonstrate a close correlation between the average path length, the heterogeneity of the degree distribution, and the ability of the spectral coarse-graining scheme to preserve network synchronizability. We find that synchronizability can be well preserved in spectral coarse-grained networks when the considered networks have a longer average path length or a larger degree variance.
Onur Berkay Gamgam, Erdinc Levent Atilgan
Communications and Network, Volume 09, pp 89-100; doi:10.4236/cn.2017.91005

Abstract:
In time division multiple access (TDMA) communication systems, correctly estimating the synchronization parameters is very important for reliable data transfer. The algorithms used for frequency/phase and symbol timing estimation generally assume that the start of signal (SoS) parameter is known; therefore, among these parameters, the SoS is of particularly great importance. In this study, a reduced version of the SoS estimation algorithm introduced by Hosseini and Perrins is presented to estimate the SoS of Gaussian Minimum Shift Keying (GMSK) modulated signals in burst format over additive white Gaussian noise (AWGN) channels. The reduced algorithm can be implemented on an FPGA using half the number of complex multipliers required by the double correlation method, and is robust to carrier frequency/phase errors. Simulations performed under a 0.1 normalized frequency offset show that the proposed algorithm has a probability of false lock of less than 7×10^-2, even at an SNR level of 0 dB.
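The basic correlation idea behind SoS estimation can be sketched as follows: slide the known preamble over the received burst and pick the offset with the largest correlation magnitude, which is insensitive to a constant carrier phase. This is only the generic single-correlation idea, not Hosseini and Perrins' reduced algorithm, and the preamble and offsets are made up:

```python
import cmath

def detect_sos(samples, preamble):
    # Correlate the received samples against the known preamble; taking the
    # magnitude makes the estimate robust to a constant carrier phase offset.
    n, m = len(samples), len(preamble)
    best_idx, best_mag = 0, -1.0
    for i in range(n - m + 1):
        corr = sum(samples[i + k] * preamble[k].conjugate() for k in range(m))
        if abs(corr) > best_mag:
            best_idx, best_mag = i, abs(corr)
    return best_idx

# Hypothetical complex preamble embedded at offset 5 with an unknown phase.
preamble = [cmath.exp(1j * 0.7 * k) for k in range(8)]
phase = cmath.exp(1j * 1.2)                       # unknown carrier phase
samples = [0.05 + 0j] * 5 + [phase * p for p in preamble] + [0.05 + 0j] * 5
print(detect_sos(samples, preamble))  # 5
```

The paper's contribution is a cheaper FPGA realization of this kind of correlator; the sketch only shows why the magnitude test tolerates the phase rotation.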
Ahmed Redha Mahlous
Communications and Network, Volume 09, pp 54-70; doi:10.4236/cn.2017.91003

Abstract:
In recent years, we have seen increasing interest in developing and designing Wireless Sensor Networks (WSNs). WSNs consist of a large number of nodes with wireless communication and computation abilities that can be used in a variety of domains, particularly areas involving monitoring and data gathering: to name a few, health monitoring, military surveillance, geological monitoring (earthquakes, volcanoes, tsunamis), and agriculture control. However, the design and implementation of WSNs face many challenges, due to the power limitations of sensor nodes, deployment and localization, data routing and data aggregation, data security, limited bandwidth, storage capacity and network management. Operations Research (OR) has been widely used in different areas to solve optimization problems, such as improving network performance and maximizing system lifetime. In this survey, we present the most recent OR-based techniques applied to different WSN problems: node scheduling, energy management, node allocation and other complex WSN-related problems. Different Operations Research techniques are presented and discussed in detail, including graph-theory-based techniques, linear programming and mixed integer programming approaches.
Ekwonwune Emmanuel Nwabueze, Iwuoha Obioha, Oju Onuoha
Communications and Network, Volume 09, pp 172-178; doi:10.4236/cn.2017.93012

Abstract:
Most network service providers, like MTN Nigeria, currently use two-factor authentication for their 4G wireless networks. This exposes network subscribers to identity theft and users’ data to security threats like snooping, sniffing, spoofing and phishing. There is a need to curb these problems with the use of an enhanced multi-factor authentication approach. The objective of this work is to create multi-factor authentication software for a 4G wireless network. Multi-factor authentication involves the user’s knowledge factor, the user’s possession factor and the user’s inherence factor, that is, who the user is, which must be presented before system access can be granted. The research methodologies used for this work include the Structured System Analysis and Design Methodology (SSADM) and prototyping. The result of this work is multi-factor authentication software, designed with ASP.NET and C#, with Microsoft SQL Server for the database.
Yongshang Long, Zhen Jia
Communications and Network, Volume 09, pp 111-123; doi:10.4236/cn.2017.92007

Abstract:
In this paper, we propose a novel neighbor-preferential growth (NPG) network model. Theoretical analysis and numerical simulations indicate that the new model can reproduce not only a scale-free degree distribution, whose power exponent is related to the edge-adding number m, but also a small-world effect with a large clustering coefficient and a small average path length. Interestingly, the clustering coefficient of the model is close to that of a globally coupled network, and the average path length is close to that of a star-coupled network. Meanwhile, the synchronizability of the NPG model is much stronger than that of the BA scale-free network, even stronger than that of the synchronization-optimal growth network.
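The growth rule suggested by the model's name can be sketched as follows: each new node attaches to a randomly chosen anchor and then to some of that anchor's neighbors, which builds triangles (hence the high clustering). The exact attachment probabilities are the paper's; this is only a plausible simplification:

```python
import random

def npg_network(n, m=2, seed=7):
    # Neighbor-preferential growth, sketched: each new node links to a random
    # anchor, then to up to m-1 of the anchor's neighbors (closing triangles).
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}                 # seed graph: a single edge
    for new in range(2, n):
        anchor = rng.choice(list(adj))
        targets = {anchor}
        nbrs = list(adj[anchor] - targets)
        while len(targets) < m and nbrs:
            targets.add(nbrs.pop(rng.randrange(len(nbrs))))
        adj[new] = set()
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
    return adj

net = npg_network(200)
degrees = sorted((len(v) for v in net.values()), reverse=True)
print(len(net), degrees[:5])               # hubs emerge at the top of the list
```

Because a node is reached via its neighbors in proportion to how many neighbors it has, high-degree nodes keep attracting links, which is how a heavy-tailed degree distribution emerges without global degree information.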
Ibukunoluwa Adetutu Adebanjo, Yekeen Olajide Olasoji, Michael Olorunfunmi Kolawole
Communications and Network, Volume 09, pp 164-171; doi:10.4236/cn.2017.93011

Abstract:
Orthogonal Frequency Division Multiplexing (OFDM) is readily employed in wireless communication to combat the intersymbol interference (ISI) effect, with limited success, because as the capacity of MIMO systems increases, other destructive effects degrade the propagation channels and/or overall system performance. As such, research interest has grown in how to improve performance in media permeated by fading and ISI, working on several combinatorial techniques to achieve improved effective throughput. In this study, we propose a combined model of Space-Time Trellis Coding (STTC) and Single-Carrier Frequency Domain Equalization (SC-FDE) to mitigate multiple-fading and interference effects. We present analytical performance results for the combined model over spatially correlated Rayleigh fading channels. We also show that it is beneficial to combine coding with equalization at the system’s receiving end, ensuring a better overall performance than the traditional space-time trellis codes.
Communications and Network, Volume 09, pp 249-274; doi:10.4236/cn.2017.94018

Abstract:
Healthcare centers always aim to deliver the best quality healthcare services to patients and earn their satisfaction. Technology has played a major role in achieving these goals, for example through clinical decision-support systems and mobile health social networks. These systems have improved the quality of care services by speeding up the diagnosis process with accuracy, and by allowing caregivers to monitor patients remotely through the use of WBS. However, the accuracy and efficiency of these systems depend on patients’ health information, which must inevitably be shared over the network, thus exposing it to cyber-attacks. Therefore, privacy-preserving services ought to be employed to protect patients’ privacy. In this work, we propose a privacy-preserving healthcare system composed of two subsystems. The first is a privacy-preserving clinical decision-support system, based on a decision tree classifier that diagnoses patients with new symptoms without disclosing patients’ records. The second is a privacy-preserving Mobile Health Social Network (MHSN), which allows physicians to monitor patients’ current condition remotely through WBS, so that help can be sent immediately when a distress situation is detected. The social network, which connects patients with similar symptoms, also provides the service of seeking help from nearby passers-by while a patient waits for an ambulance to arrive. Our model is expected to improve healthcare services while protecting patients’ privacy.
Bader Yousef Obeidat, Noor Osama Aqqad, Marwa Na’El Khalil Al Janini
Communications and Network, Volume 09, pp 28-53; doi:10.4236/cn.2017.91002

Abstract:
The purpose of this research is to investigate the interrelationships among three behavioural constructs: job involvement, job satisfaction and organizational commitment. Accordingly, a structural model is developed to delineate the interactions among these constructs and explore the mediating effect of job satisfaction on the relationship between job involvement and organizational commitment. A questionnaire-based survey was designed to test the aforementioned model based on a dataset of 315 employees working in twelve of the twenty-six banks operating in Amman, the capital city of Jordan. The model and posited hypotheses were tested using structural equation modelling. The results indicated that job involvement positively and significantly affects job satisfaction and organizational commitment. Additionally, job satisfaction proved to be positively related to organizational commitment. Furthermore, job satisfaction positively, significantly and partially mediated the relationship between job involvement and organizational commitment.
Lei Wang, Li Lin, Li Du
Communications and Network, Volume 09, pp 235-248; doi:10.4236/cn.2017.94017

Abstract:
The aggregate conversion from a complex physical network topology to a simple virtual topology reduces not only load overhead, but also the parameter distortion of links and nodes during the aggregation process, thereby increasing the accuracy of routing. To this end, focusing on topology aggregation in multi-domain optical networks, a new topology aggregation algorithm (ML-S) is proposed. ML-S upgrades the linear segment fitting algorithm to a multiline fitting algorithm on the generated stair: it finds mutation points of the stair to increase the number of fitting line segments with little added redundancy, thus obtaining a significant improvement in the description of topology information. In addition, ML-S integrates the stair fitting algorithm and effectively alleviates the contradiction between the complexity and the accuracy of topology information, dynamically choosing the algorithm that is more accurate and less redundant for the specific topology information of each domain. The simulation results show that, under different topological conditions, ML-S maintains a low level of underestimation distortion, overestimation distortion and redundancy, achieving an improved balance between aggregation degree and accuracy.
Hang Qin, Li Zhu
Communications and Network, Volume 09, pp 155-163; doi:10.4236/cn.2017.93010

Abstract:
Loose coupling of integrated applications brings many benefits in cloud environments, but the problems of resource heterogeneity and dynamic structure remain to be solved. The development of highly robust service-oriented applications thus faces many challenges, especially regarding the autonomy of service resources, from the system components to the end-user portal. In this paper, a method is proposed by which business users can receive early warning of service availability changes and apply service relationship adjustments. The designed mechanism can deal with the exception of a service becoming unavailable in a real-time application developed for a business user. Based on a heterogeneous model of service-oriented applications, an availability process with lifecycle analysis is proposed to ensure that service resources are available for integrating components at different levels.