#### Algorithms

Journal Information
EISSN: 1999-4893
Current Publisher: MDPI AG (10.3390)
Total articles ≅ 1,565
Current Coverage: SCOPUS, INSPEC, ESCI, COMPENDEX, DOAJ
Archived in: EBSCO, SHERPA/ROMEO

#### Latest articles in this journal

Published: 17 June 2021
Algorithms, Volume 14; doi:10.3390/a14060185

Abstract:
In this paper, we develop fuzzy, possibilistic hypothesis tests for testing crisp hypotheses about a distribution parameter from crisp data. These tests use fuzzy statistics produced by the possibility distribution of the estimated parameter, which is constructed from the confidence intervals known from crisp statistics. The results of these tests agree much better with crisp statistics than those produced by the corresponding tests in a popular book on fuzzy statistics, which uses fuzzy critical values. We also report an error that we found in that book's implementation of the unbiased fuzzy estimator of the variance, due to a misinterpretation of its mathematical content, which causes some fuzzy hypothesis tests to disagree with their crisp counterparts. Implementing this estimator correctly, we produce test statistics whose hypothesis-test results agree much better with those of the corresponding crisp tests.
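The construction the abstract describes — building the possibility distribution of an estimated parameter by stacking confidence intervals — can be illustrated with a minimal sketch. Assume a normal mean with known standard deviation, so the $(1-\alpha)$ confidence interval is $\bar{x} \pm z_{1-\alpha/2}\,\sigma/\sqrt{n}$; the membership of a candidate value is then the largest $\alpha$ whose interval still contains it. The function names and the normal/known-variance setting are illustrative assumptions, not the paper's exact formulation.

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def possibility(theta, xbar, sigma, n):
    """Membership of theta in the fuzzy estimator of a normal mean,
    obtained by stacking the (1 - alpha) confidence intervals:
    pi(theta) = sup{ alpha : theta lies in CI_{1-alpha} }."""
    z = abs(theta - xbar) * math.sqrt(n) / sigma
    return 2.0 * (1.0 - normal_cdf(z))
```

The point estimate itself gets membership 1, and a value sitting exactly on the 95% interval boundary gets membership 0.05, so the fuzzy estimator encodes all confidence levels at once.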
Published: 15 June 2021
Algorithms, Volume 14; doi:10.3390/a14060184

Abstract:
Many mixed datasets with both numerical and categorical attributes have been collected in various fields, including medicine and biology. Designing appropriate similarity measurements plays an important role in clustering these datasets. Many traditional measurements treat all attributes equally when measuring similarity. However, different attributes may contribute differently, because the amount of information they contain can vary considerably. In this paper, we propose a similarity measurement with entropy-based weighting for clustering mixed datasets. The numerical data are first transformed into categorical data by an automatic categorization technique. Then, an entropy-based weighting strategy is applied to reflect the differing importance of the various attributes. We incorporate the proposed measurement into an iterative clustering algorithm, and extensive experiments show that this algorithm outperforms the OCIL and K-Prototype methods by 2.13% and 4.28%, respectively, in terms of accuracy on six mixed datasets from UCI.
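One plausible reading of the entropy-based weighting idea can be sketched as follows: compute the Shannon entropy of each (already categorized) attribute, normalize the entropies into weights, and use them in a weighted matching similarity. The abstract does not give the exact weighting formula, so proportional-to-entropy weights here are an assumption for illustration.

```python
import math
from collections import Counter

def attribute_entropies(rows):
    """Shannon entropy (bits) of each categorical column of a dataset."""
    n_cols = len(rows[0])
    ents = []
    for j in range(n_cols):
        counts = Counter(row[j] for row in rows)
        total = len(rows)
        ents.append(-sum((c / total) * math.log2(c / total)
                         for c in counts.values()))
    return ents

def entropy_weights(rows):
    # Attributes carrying more information get larger weights (assumed scheme)
    ents = attribute_entropies(rows)
    s = sum(ents)
    return [h / s for h in ents] if s > 0 else [1.0 / len(ents)] * len(ents)

def weighted_similarity(x, y, weights):
    # Weighted simple matching: a match on an informative attribute counts more
    return sum(w for w, a, b in zip(weights, x, y) if a == b)
```

On a toy dataset where one column is nearly constant and another is balanced, the balanced (higher-entropy) column dominates the similarity, which is the intended effect of the weighting.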
Published: 13 June 2021
Algorithms, Volume 14; doi:10.3390/a14060183

Abstract:
Since January 2020, the outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has affected the whole world, producing a respiratory disease that can become severe and even cause death in certain groups of people. The main method for diagnosing coronavirus disease 2019 (COVID-19) is performing viral tests. However, the kits for carrying out these tests are scarce in certain regions of the world. Lung conditions as perceived in computed tomography and radiography images exhibit a high correlation with the presence of COVID-19 infections. This work attempted to assess the feasibility of using convolutional neural networks for the analysis of pulmonary radiography images to distinguish COVID-19 infections from non-infected cases and other types of viral or bacterial pulmonary conditions. The results obtained indicate that these networks can successfully distinguish the pulmonary radiographies of COVID-19-infected patients from radiographies that exhibit other or no pathology, with a sensitivity of 100% and specificity of 97.6%. This could help future efforts to automate the process of identifying lung radiography images of suspicious cases, thereby supporting medical personnel when many patients need to be rapidly checked. The automated analysis of pulmonary radiography is not intended to be a substitute for formal viral tests or formal diagnosis by a properly trained physician but rather to assist with identification when the need arises.
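The two headline metrics of this abstract, sensitivity and specificity, come straight from a binary confusion matrix. The sketch below computes them; the counts in the usage example are hypothetical, chosen only to reproduce the reported 100% sensitivity and 97.6% specificity, and are not from the paper.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (true positive rate) = TP / (TP + FN);
    specificity (true negative rate) = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts matching the reported metrics:
# 50 infected all detected, 3 of 125 non-infected flagged incorrectly.
sens, spec = sensitivity_specificity(tp=50, fn=0, tn=122, fp=3)
```

A sensitivity of 100% means no COVID-19-positive radiography was missed, while the 2.4% false-positive rate (1 − specificity) is the price paid on the non-infected cases.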
Published: 9 June 2021
Algorithms, Volume 14; doi:10.3390/a14060182

Abstract:
We propose a method that translates the two-dimensional CSP with the objective of minimizing the number of cuts into the Ising model. We then conducted computer experiments on the proposed model using a benchmark problem and obtained the following results: (1) the proposed Ising model adequately represents the target problem; (2) acceptance rates ranged from 0.2% to 9.8% and from 21.8% to 49.4%; (3) error rates relative to the optimal solution ranged from 0% to 25.9%. For future work, we propose the following: (1) improve the Hamiltonian for the constraints; (2) extend the proposed model to handle more complex two-dimensional CSPs and to reduce the number of spins when dealing with large materials and components; (3) conduct experiments on a quantum annealer.
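Any Ising formulation of an optimization problem boils down to evaluating a Hamiltonian over ±1 spins, with the problem's constraints folded into the couplings as penalty terms. The sketch below shows only that generic energy evaluation, not the paper's specific CSP encoding; the coupling matrix `J` and field vector `h` are placeholders for whatever the translation produces.

```python
def ising_energy(spins, J, h):
    """Energy of an Ising configuration of +/-1 spins:
    H = - sum_{i<j} J[i][j] * s_i * s_j  -  sum_i h[i] * s_i.
    Constraint Hamiltonians of the source problem would be
    folded into J and h as penalty weights."""
    n = len(spins)
    energy = -sum(h[i] * spins[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            energy -= J[i][j] * spins[i] * spins[j]
    return energy
```

An annealer (simulated or quantum) then searches for the spin configuration minimizing this energy; the "acceptance rates" reported above measure how often sampled configurations satisfy the encoded constraints.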
Published: 8 June 2021
Algorithms, Volume 14; doi:10.3390/a14060180

Abstract:
The shallow features extracted by traditional artificial-intelligence-based damage identification methods have low sensitivity and ignore the timing characteristics of vibration signals. Thus, this study uses the high-dimensional feature extraction advantages of convolutional neural networks (CNNs) and the time-series modeling capability of long short-term memory (LSTM) networks to identify damage to long-span bridges. Firstly, the features extracted by the CNN and LSTM are fused as the input of the fully connected layer to train the CNN-LSTM model. After that, the trained CNN-LSTM model is employed for damage identification. Finally, a numerical example of a long-span suspension bridge is used to investigate the effectiveness of the proposed method. Furthermore, the performance of CNN-LSTM and CNN under different noise levels is compared to test the feasibility of application in practical engineering. The results demonstrate the following: (1) the combination of CNN and LSTM performs well, with 94% damage localization accuracy and an average relative identification error (ARIE) of only 8.0% in damage severity identification; (2) compared with the CNN, the CNN-LSTM achieves superior identification accuracy; the damage localization accuracy improves by 8.13%, while the ARIE of damage severity identification decreases by 5.20%; and (3) the proposed method resists the influence of environmental noise and achieves acceptable recognition of multi-location damage; in a database with a lower signal-to-noise ratio of 3.33, the damage localization accuracy of the CNN-LSTM model is 67.06%, and the ARIE of damage severity identification is 31%. This work provides an innovative idea for damage identification of long-span bridges and should encourage follow-up studies on structural condition evaluation.
Published: 8 June 2021
Algorithms, Volume 14; doi:10.3390/a14060181

Abstract:
OPTCON is an algorithm for the optimal control of nonlinear stochastic systems which is particularly applicable to economic models. It delivers approximate numerical solutions to optimum control (dynamic optimization) problems with a quadratic objective function for nonlinear economic models with additive and multiplicative (parameter) uncertainties. The algorithm was first programmed in C# and then in MATLAB. It allows for deterministic and stochastic control, the latter with open loop (OPTCON1), passive learning (open-loop feedback, OPTCON2), and active learning (closed-loop, dual, or adaptive control, OPTCON3) information patterns. The mathematical aspects of the algorithm with open-loop feedback and closed-loop information patterns are presented in more detail in this paper.
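OPTCON handles nonlinear stochastic models with learning information patterns; underneath any such quadratic-objective controller, however, sits the linear-quadratic backward (Riccati) recursion. The sketch below shows only that deterministic scalar-state backbone, not OPTCON's actual algorithm: dynamics `x_{t+1} = a·x_t + b·u_t`, cost `Σ (q·x² + r·u²)`, and time-varying feedback gains `u_t = -k_t·x_t`. All parameter names are illustrative.

```python
def lq_feedback_gains(a, b, q, r, horizon):
    """Backward Riccati recursion for the scalar LQ problem.
    Returns gains k_0, ..., k_{T-1} with u_t = -k_t * x_t."""
    p = q  # terminal cost weight P_T = q
    gains = []
    for _ in range(horizon):
        # K_t = (R + B P B)^{-1} B P A, scalar case
        k = (b * p * a) / (r + b * p * b)
        # P_t = Q + A P (A - B K)
        p = q + a * p * (a - b * k)
        gains.append(k)
    gains.reverse()  # recursion runs backward in time
    return gains
```

Near the end of the horizon the gains shrink (less future cost to hedge against), which is why the recursion is run backward and the list is reversed before use.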
Published: 5 June 2021
Algorithms, Volume 14; doi:10.3390/a14060179

Abstract:
With the development of the sharing economy, carsharing has become a major mode of transportation. Carsharing can effectively alleviate traffic congestion and reduce residents' travel costs. However, due to the randomness of users' travel demand, carsharing operators face problems such as imbalanced vehicle demand across stations. Therefore, scientific prediction of users' travel demand is important to ensure the efficient operation of carsharing. The main purpose of this study is to use a gradient boosting decision tree to predict the travel demand of station-based carsharing users. The case study is conducted in Lanzhou City, Gansu Province, China. To improve accuracy, the gradient boosting decision tree is designed to predict the demand of users at different stations at various times based on the actual operating data of the carsharing service. The prediction results are compared with those of an autoregressive integrated moving average model. The results show that the gradient boosting decision tree achieves higher prediction accuracy. This study can provide a reference for user demand prediction in practical applications.
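The core mechanism of gradient boosting with decision trees can be sketched in a few dozen lines: with squared loss, each round fits a small tree (here a one-split stump on a 1-D feature, a deliberate simplification of the paper's setup) to the current residuals and adds a damped copy of it to the ensemble. This is a from-scratch illustration, not the implementation used in the study.

```python
def fit_stump(xs, ys):
    """Best single-split regression stump on 1-D inputs (squared loss)."""
    best = None
    for thr in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= thr]
        right = [y for x, y in zip(xs, ys) if x > thr]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, thr, lm, rm)
    return best[1:]  # (threshold, left_mean, right_mean)

def gradient_boost(xs, ys, rounds=50, lr=0.1):
    """Gradient boosting for regression with stump base learners:
    each round fits a stump to the current residuals (negative
    gradient of the squared loss) and adds it with shrinkage lr."""
    base = sum(ys) / len(ys)
    stumps = []
    preds = [base] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        thr, lm, rm = fit_stump(xs, residuals)
        stumps.append((thr, lm, rm))
        preds = [p + lr * (lm if x <= thr else rm)
                 for x, p in zip(xs, preds)]
    def predict(x):
        out = base
        for thr, lm, rm in stumps:
            out += lr * (lm if x <= thr else rm)
        return out
    return predict
```

The shrinkage factor `lr` trades off fitting speed against overfitting; production libraries add regularized multi-feature trees, subsampling, and early stopping on top of this same loop.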
Published: 3 June 2021
Algorithms, Volume 14; doi:10.3390/a14060178

Abstract:
This paper presents the parameter optimisation of the flight control system of a single-rotor medium-scale rotorcraft. A six-degrees-of-freedom (DOF) nonlinear mathematical model of the rotorcraft is developed. This model is then used to develop proportional–integral–derivative (PID)-based controllers. Since the majority of PID controllers installed in industry are poorly tuned, this paper presents a comparison of the optimised tuning of the flight controller parameters using particle swarm optimisation (PSO), genetic algorithm (GA), ant colony optimisation (ACO) and cuckoo search (CS) optimisation algorithms. The aim is to find the best PID parameters that minimise the specified objective function. Two trim conditions are investigated, i.e., hover and 10 m/s forward flight. All four algorithms performed better than manual tuning of the PID controllers. It was found, through numerical simulation, that the ACO algorithm converges the fastest and finds the best gains for the selected objective function in the hover trim condition, whereas for the 10 m/s forward flight trim, the GA performed best. Both tuned flight controllers managed to reject gusts of up to 5 m/s along the lateral axis in hover and in forward flight.
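Of the four metaheuristics compared, PSO is the simplest to show end to end: each particle (a candidate parameter vector, e.g. a set of PID gains) is pulled toward its own best-seen point and the swarm's best-seen point. The sketch below is a generic minimiser over a toy objective, not the paper's flight-dynamics cost function; the coefficient values `w`, `c1`, `c2` are common textbook defaults, assumed here for illustration.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimisation: each particle tracks its
    personal best and is pulled toward the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For PID tuning, `objective` would simulate the closed loop with the candidate gains and return the tracking-error cost; here a quadratic bowl stands in for that simulation.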
Published: 2 June 2021
Algorithms, Volume 14; doi:10.3390/a14060177

Abstract:
Probabilistic solar power forecasting has been critical in Southern Africa because of major shortages of power due to climatic changes and other factors over the past decade. This paper discusses Gaussian process regression (GPR) coupled with core vector regression for short-term hourly global horizontal irradiance (GHI) forecasting. GPR is a powerful Bayesian non-parametric regression method that works well for small data sets and quantifies the uncertainty in its predictions. The choice of a kernel that characterises the covariance function is a crucial issue in Gaussian process regression. In this study, we adopt the minimum enclosing ball (MEB) technique. The MEB improves the forecasting power of GPR because the smaller the ball is, the shorter the training time, and hence the more robust the performance. Forecasting of real-time data was done at two South African radiometric stations: Stellenbosch University (SUN) in a coastal area of the Western Cape Province, and the University of Venda (UNV) station in the Limpopo Province. Variables were selected using the least absolute shrinkage and selection operator via hierarchical interactions. The Bayesian approach using informative priors was used for parameter estimation. Based on the root mean square error, mean absolute error and percentage bias, the results showed that the GPR model gives the most accurate predictions compared to those from gradient boosting and support vector regression models, making this study a useful tool for decision-makers and system operators in power utility companies. The main contribution of this paper is the use of a GPR model coupled with the core vector methodology to forecast GHI using South African data. To the best of our knowledge, this is the first application of GPR coupled with core vector regression in which the minimum enclosing ball is applied to GHI data.
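The GPR machinery the abstract relies on — a kernel defining the covariance, then a posterior mean and variance at each test point — fits in a short sketch. Below is a from-scratch 1-D GP with the squared-exponential (RBF) kernel, a common default; the paper's actual kernel choice, MEB coupling, and multivariate inputs are not reproduced, and the length-scale and noise values are illustrative assumptions.

```python
import math

def rbf(x, y, ell=1.0):
    """Squared-exponential (RBF) kernel, a standard GPR covariance."""
    return math.exp(-((x - y) ** 2) / (2.0 * ell ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def gpr_predict(xs, ys, x_star, ell=1.0, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at x_star:
    mean = k_*^T (K + noise*I)^{-1} y
    var  = k(x*,x*) - k_*^T (K + noise*I)^{-1} k_*"""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], ell) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    k_star = [rbf(x, x_star, ell) for x in xs]
    alpha = solve(K, ys)
    v = solve(K, k_star)
    mean = sum(ks * a for ks, a in zip(k_star, alpha))
    var = rbf(x_star, x_star, ell) - sum(ks * vi
                                         for ks, vi in zip(k_star, v))
    return mean, var
```

The predictive variance is what makes the forecast probabilistic: it collapses near training points and reverts to the prior variance far from the data, which is exactly the uncertainty quantification the abstract highlights.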
Published: 1 June 2021
Algorithms, Volume 14; doi:10.3390/a14060175

Abstract:
Understanding how different two organisms are is one question addressed by the field of comparative genomics. A well-accepted way to estimate the evolutionary distance between the genomes of two organisms is to find the rearrangement distance, which is the smallest number of rearrangements needed to transform one genome into the other. By representing genomes as permutations, one of them can be represented as the identity permutation, so the problem of transforming one permutation into another reduces to the problem of sorting a permutation using the minimum number of rearrangements. This work investigates the problems of sorting permutations using reversals and/or transpositions, with some additional restrictions of biological relevance. Given a value $\lambda$, the problem becomes how to sort a $\lambda$-permutation, which is a permutation whose elements are less than $\lambda$ positions away from their correct places (with respect to the identity), by applying the minimum number of rearrangements. Each $\lambda$-rearrangement must have size at most $\lambda$ and, when applied to a $\lambda$-permutation, must yield another $\lambda$-permutation. We present algorithms with approximation factors of $O(\lambda^2)$, $O(\lambda)$, and $O(1)$ for the problems of Sorting $\lambda$-Permutations by $\lambda$-Reversals, by $\lambda$-Transpositions, and by both operations, respectively.
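The definitions in this abstract are easy to make concrete: a reversal flips a contiguous segment, a $\lambda$-permutation keeps every element fewer than $\lambda$ positions from its sorted place, and a $\lambda$-reversal acts on a segment of size at most $\lambda$. The helpers below sketch exactly those checks (0-indexed, for a permutation of $1..n$); they are illustrative utilities, not the paper's approximation algorithms.

```python
def apply_reversal(perm, i, j):
    """Reverse the segment perm[i..j] (0-indexed, inclusive)."""
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

def is_lambda_permutation(perm, lam):
    """Every element must be fewer than lam positions from its sorted
    place; element v of a permutation of 1..n belongs at index v - 1."""
    return all(abs(v - 1 - idx) < lam for idx, v in enumerate(perm))

def is_lambda_reversal(i, j, lam):
    # A lambda-reversal may act on a segment of size at most lam
    return (j - i + 1) <= lam
```

A valid sorting step must satisfy both constraints at once: the reversal's segment fits within $\lambda$, and the resulting permutation is still a $\lambda$-permutation.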