Journal of Software Engineering and Applications

Journal Information
ISSN / EISSN: 1945-3116 / 1945-3124
Current Publisher: Scientific Research Publishing, Inc. (DOI prefix 10.4236)
Total articles ≅ 896
Archived in: SHERPA/RoMEO

Latest articles in this journal

Nor Laily Hashim, Ahmed Jama Isse
Journal of Software Engineering and Applications, Volume 12, pp 267-277; doi:10.4236/jsea.2019.127016

Talal H. Alzanki, Mutaz M. Jafar
Journal of Software Engineering and Applications, Volume 12, pp 278-292; doi:10.4236/jsea.2019.127017

Nazish Yousaf, Maham Akram, Amna Bhatti, Ammara Zaib
Journal of Software Engineering and Applications, Volume 12, pp 293-306; doi:10.4236/jsea.2019.127018

Shinji Kikuchi, Subhash Bhalla
Journal of Software Engineering and Applications, Volume 12, pp 339-364; doi:10.4236/jsea.2019.129021

Abstract: We have previously studied a method called the Enhanced Rollback Migration Protocol, which can potentially compress the period of compensations in a long-lived transaction. In general, a compensation transaction can recover a long-lived transaction from an irregular status to its original status without holding unnecessary resources, by tentatively loosening consistency. However, it has also been pointed out that maintaining isolation between a pair of transactions executed in parallel is difficult, and this can be especially prominent in modern scalable cloud environments. Concurrency control at the service level has therefore been proposed. However, if concurrency control is applied naively, there is a further risk of consuming more computing resources than necessary and of stalling processing needlessly. We therefore need a mechanism that optimizes the processing of a long-lived transaction by selecting between concurrency control and compensation transactions. In this paper, we propose a method that applies optimistic concurrency control to long-lived transactions using a pair of verification phases. At the beginning, from a safe point, a first verification is attempted: if it is estimated that isolation will be difficult to maintain under contention, service-level concurrency control is applied; otherwise, the long-lived transaction is executed without any concurrency control. At the next reachable safe point, a second verification is performed: if a serialization failure is detected, a set of compensation transactions is invoked to return the long-lived transaction to the first safe point. We evaluated this approach with numerical simulations and confirmed its basic features. The approach can optimize and enhance the performance of a long-lived transaction, and we regard it as applicable even to modern scalable cloud environments.
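As a rough illustration of the control flow this abstract describes, the Python sketch below shows how the two verification phases select between service-level concurrency control and compensation-based rollback to a safe point. Every name here (Step, run_long_lived_transaction, the placeholder checks) is hypothetical; the paper publishes no implementation.

```python
# Sketch of the two-phase verification flow described in the abstract.
# All names and the placeholder checks are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    action: Callable[[], None]        # forward operation
    compensate: Callable[[], None]    # inverse operation used for rollback

def contention_estimated() -> bool:
    """First verification (at the initial safe point): estimate whether
    isolation will be hard to maintain. Placeholder: a real system would
    inspect the competing workload."""
    return False

def serialization_failed() -> bool:
    """Second verification (at the next safe point): check serializability.
    Placeholder: a real system would validate read/write sets."""
    return False

def run_with_service_level_control(steps: List[Step]) -> None:
    """Placeholder for the pessimistic, service-level concurrency path."""
    for step in steps:
        step.action()

def run_long_lived_transaction(steps: List[Step]) -> None:
    if contention_estimated():
        run_with_service_level_control(steps)   # contention likely: lock
        return
    executed: List[Step] = []
    for step in steps:                          # optimistic execution
        step.action()
        executed.append(step)
    if serialization_failed():
        for step in reversed(executed):         # compensate back to the
            step.compensate()                   # first safe point
```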
Junyi Cao, Zhongming Tian, Zhengtao Wang
Journal of Software Engineering and Applications, Volume 12, pp 383-392; doi:10.4236/jsea.2019.129023

Abstract: This paper presents an experiment that uses OpenBCI to collect data for two hand gestures and decodes the signal to distinguish between them. The signal was acquired with three electrodes on the subject's forearm and transferred over one channel. After applying a Butterworth bandpass filter, we chose a novel way to detect the gesture action segment: instead of using a moving-average algorithm based on the calculation of energy, we developed an algorithm based on the Hilbert transform to find a dynamic threshold and identify the action segment. Four features were extracted from each activity section, generating feature vectors for classification. During classification, we compared K-nearest neighbors (KNN) and the support vector machine (SVM) on a relatively small number of samples. Most experiments rely on large quantities of data to pursue a highly fitted model, but in certain circumstances enough training data cannot be obtained, which makes it imperative to explore the best classification method for small sample sizes. Although KNN is known for its simplicity and practicality, it is a relatively time-consuming method. SVM, on the other hand, performs better in both running time and recognition accuracy, owing to its different risk minimization principle. Experimental results show an average recognition rate for SVM that is 1.25% higher than for KNN, while SVM's running time is 2.031 s shorter than KNN's.
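The pipeline described above can be sketched with SciPy and scikit-learn as follows. The sampling rate, filter band, envelope threshold rule, and the particular four features are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of the gesture-recognition pipeline described in the abstract.
# Sampling rate, band edges, threshold rule, and features are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

FS = 250  # assumed OpenBCI sampling rate in Hz

def bandpass(signal, low=20.0, high=120.0, fs=FS, order=4):
    """Butterworth bandpass filter applied forward and backward."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def action_mask(signal):
    """Detect the action segment via the Hilbert envelope: samples whose
    envelope exceeds a dynamic (mean + std) threshold are marked active."""
    envelope = np.abs(hilbert(signal))
    return envelope > envelope.mean() + envelope.std()

def features(segment):
    """Four common time-domain EMG features (the paper extracts four
    features; these particular choices are assumptions)."""
    return [np.mean(np.abs(segment)),           # mean absolute value
            np.sqrt(np.mean(segment ** 2)),     # root mean square
            np.var(segment),                    # variance
            np.sum(np.abs(np.diff(segment)))]   # waveform length

def classifiers():
    """The two classifiers compared in the paper (settings assumed)."""
    return {"KNN": KNeighborsClassifier(n_neighbors=5),
            "SVM": SVC(kernel="rbf")}
```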
Richard Skeggs, Stasha Lauria
Journal of Software Engineering and Applications, Volume 12, pp 365-382; doi:10.4236/jsea.2019.129022

Abstract: The performance and reliability of converting natural language into Structured Query Language (SQL) can be problematic when handling the nuances prevalent in natural language. Relational databases are not designed to understand linguistic nuance, so the question of why nuance must be handled at all deserves to be asked. This paper examines an alternative solution for converting a natural language query into an SQL statement capable of searching a relational database. The process uses the natural language concept of part of speech to identify words that can name database tables and table columns. OpenNLP-based grammar files, together with additional configuration files, assist in the translation from natural language to query language. Once the tables and columns containing the pertinent data have been identified, the final step is to assemble the SQL statement.
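A minimal sketch of the part-of-speech-driven mapping is shown below. It substitutes NLTK's POS tagger for the OpenNLP grammar files the paper actually uses, and the SCHEMA dictionary is purely hypothetical.

```python
# Sketch of POS-driven NL-to-SQL mapping. NLTK stands in for OpenNLP,
# and the schema map below is a hypothetical example, not the paper's.

import nltk  # first run: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

# Hypothetical mapping from natural-language nouns to (table, column).
SCHEMA = {
    "employee": ("employees", None),
    "salary":   ("employees", "salary"),
    "name":     ("employees", "name"),
}

def nl_to_sql(question: str) -> str:
    tokens = nltk.word_tokenize(question.lower())
    tagged = nltk.pos_tag(tokens)
    # Keep nouns (NN*) as candidate table/column identifiers.
    nouns = [word for word, tag in tagged if tag.startswith("NN")]
    tables, columns = set(), []
    for noun in nouns:
        if noun in SCHEMA:
            table, column = SCHEMA[noun]
            tables.add(table)
            if column:
                columns.append(column)
    select = ", ".join(columns) if columns else "*"
    return f"SELECT {select} FROM {', '.join(sorted(tables))};"

print(nl_to_sql("Show the name and salary of each employee"))
# -> SELECT name, salary FROM employees;
```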
Zhiying Meng
Journal of Software Engineering and Applications, Volume 12, pp 423-431; doi:10.4236/jsea.2019.1210026

Abstract: Because of the increasing attention to environmental issues, especially air pollution, predicting whether a day will be polluted is important for public health. To address this problem, this research classifies ground ozone level using big data and machine learning models, where a polluted ozone day is labeled class 1 and a non-ozone day class 0. The dataset was obtained from the UCI website and contains various environmental factors in the Houston, Galveston and Brazoria areas that could affect the occurrence of ozone pollution [1]. The dataset is first imputed to fill missing values, then standardized so that every feature has the same weight, and finally split into a training set and a testing set. Five different machine learning models are then used to predict the ground ozone level, and their final accuracy scores are compared. Among Logistic Regression, Decision Tree, Random Forest, AdaBoost, and Support Vector Machine (SVM), the last achieves the highest test score, 0.949. This research utilizes relatively simple forecasting methods and calculates the first accuracy scores in predicting ground ozone level; it can thus serve as a reference for environmentalists. Moreover, the direct comparison among the five models gives the machine learning field insight into determining the most accurate model. In the future, a neural network could also be used to predict air pollution, and its test scores compared with those of the previous five methods.
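The preprocessing and model comparison described above might look as follows in scikit-learn. The imputation strategy, split ratio, and model hyperparameters are assumptions, not the paper's exact configuration.

```python
# Sketch of the five-model comparison described in the abstract.
# Imputation strategy, split ratio, and hyperparameters are assumptions.

from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def compare_models(X, y):
    """Fill missing values, standardize, split, and score five models."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    models = {
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "Decision Tree": DecisionTreeClassifier(),
        "Random Forest": RandomForestClassifier(),
        "AdaBoost": AdaBoostClassifier(),
        "SVM": SVC(),
    }
    for name, model in models.items():
        pipe = make_pipeline(SimpleImputer(strategy="mean"),
                             StandardScaler(), model)
        pipe.fit(X_train, y_train)
        print(f"{name}: test accuracy = {pipe.score(X_test, y_test):.3f}")
```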
Shih-Shinh Huang, Shih-Yu Lin, Pei-Yung Hsiao
Journal of Software Engineering and Applications, Volume 12, pp 1-19; doi:10.4236/jsea.2019.121001

Kamel Oussaid, Abderazak El Ouafi
Journal of Software Engineering and Applications, Volume 12, pp 509-523; doi:10.4236/jsea.2019.1212031

Abstract: Predictive modelling for quality analysis is becoming one of the most critical requirements for the continuous improvement of the reliability, efficiency and safety of the laser welding process, and an accurate and effective model for non-destructive quality estimation is an essential part of this assessment. This paper presents a structured approach to designing an effective artificial neural network based model for predicting weld bead dimensional characteristics in laser overlap welding of low carbon galvanized steel. The modelling approach is based on an analysis of the direct and interaction effects of laser welding parameters such as laser power, welding speed, laser beam diameter and gap on weld bead dimensional characteristics such as depth of penetration, width at the top surface and width at the interface. The data used in this analysis were derived from structured experimental investigations following the Taguchi method and from exhaustive FEM based 3D modelling and simulation efforts. Using a factorial design, different neural network based prediction models were developed, implemented and evaluated. The models were trained and tested on the experimental data, supplemented with the data generated by the 3D simulation. Hold-out testing and k-fold cross-validation, combined with various statistical tools, were used to evaluate the influence of the laser welding parameters on the performance of the models. The results demonstrate that the proposed approach successfully produced a consistent model providing accurate and reliable predictions of weld bead dimensional characteristics under variable welding conditions. The best model presents prediction errors lower than 7% for the three weld quality characteristics.
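The evaluation protocol described above (a hold-out test plus k-fold cross-validation) can be sketched with scikit-learn's MLPRegressor standing in for the paper's artificial neural network; the hidden-layer size and split settings below are assumptions.

```python
# Sketch of hold-out plus k-fold evaluation of an ANN weld-bead model.
# MLPRegressor, its architecture, and the split settings are assumptions.

from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_ann(X, y):
    """X: laser power, welding speed, beam diameter, gap.
    y: one weld bead characteristic (e.g., depth of penetration)."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
    # Hold-out test on a 20% split.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    model.fit(X_train, y_train)
    holdout_r2 = model.score(X_test, y_test)
    # 5-fold cross-validation on the full data set.
    cv_r2 = cross_val_score(model, X, y,
                            cv=KFold(5, shuffle=True, random_state=0))
    return holdout_r2, cv_r2.mean()
```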
Jiansheng Wu, Yongsheng Xie
Journal of Software Engineering and Applications, Volume 12, pp 524-539; doi:10.4236/jsea.2019.1212032

Abstract: Accurate and timely monthly rainfall forecasting is a major challenge for the scientific community in hydrological research areas such as river management and the design of flood warning systems. Support Vector Regression (SVR) is a useful model for precipitation prediction. This paper presents a novel parallel co-evolutionary algorithm, hybridizing a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), to determine appropriate SVR parameters for monthly rainfall prediction; the resulting model is named SVRGAPSO. The framework iterates a GA population and a PSO population simultaneously and exchanges information between them, providing a mechanism for escaping premature local optima. The proposed technique is applied to rainfall forecasting to test its generalization capability and to make comparative evaluations against competing techniques, namely SVRPSO (SVR with PSO), SVRGA (SVR with GA), and a plain SVR model. The empirical results indicate that SVRGAPSO has superior generalization capability, with the lowest prediction error values in rainfall forecasting, and can significantly improve rainfall forecasting accuracy. The SVRGAPSO model is therefore a promising alternative for rainfall forecasting.
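A highly simplified sketch of the co-evolutionary idea follows: a GA population and a PSO population both search the SVR parameter space (C, gamma, epsilon) and exchange their best members each generation. The bounds, operators, and rates are illustrative assumptions, not the paper's settings.

```python
# Simplified GA+PSO co-evolution for SVR parameter search. All bounds,
# operators, and rates below are assumptions made for illustration.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

BOUNDS = np.array([[0.1, 100.0],   # C
                   [1e-4, 1.0],    # gamma
                   [1e-3, 1.0]])   # epsilon

def fitness(params, X, y):
    """Negative cross-validated MSE of an SVR with candidate parameters."""
    c, gamma, eps = params
    return cross_val_score(SVR(C=c, gamma=gamma, epsilon=eps), X, y,
                           scoring="neg_mean_squared_error", cv=3).mean()

def clip(pop):
    return np.clip(pop, BOUNDS[:, 0], BOUNDS[:, 1])

def svr_ga_pso(X, y, pop_size=10, generations=20, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(BOUNDS)
    ga = clip(rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], (pop_size, dim)))
    pso = clip(rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], (pop_size, dim)))
    vel = np.zeros_like(pso)
    best_params, best_fit = None, -np.inf
    for _ in range(generations):
        ga_fit = np.array([fitness(p, X, y) for p in ga])
        pso_fit = np.array([fitness(p, X, y) for p in pso])
        for pop, fit in ((ga, ga_fit), (pso, pso_fit)):
            i = int(np.argmax(fit))
            if fit[i] > best_fit:                 # track the overall best
                best_params, best_fit = pop[i].copy(), fit[i]
        ga_best = ga[np.argmax(ga_fit)].copy()
        pso_best = pso[np.argmax(pso_fit)].copy()
        # GA step: keep the better half, refill with mutated copies.
        elite = ga[np.argsort(ga_fit)[::-1][:pop_size // 2]]
        ga = clip(np.vstack([elite,
                             elite * rng.normal(1.0, 0.1, elite.shape)]))
        # PSO step: pull every particle toward the swarm's best member.
        vel = 0.7 * vel + 1.5 * rng.random(pso.shape) * (pso_best - pso)
        pso = clip(pso + vel)
        # Co-evolution: inject each population's best into the other.
        ga[-1], pso[-1] = pso_best, ga_best
    return best_params  # optimized (C, gamma, epsilon)
```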