Applied Computing and Informatics

Journal Information
ISSN : 2210-8327
Current Publisher: Emerald (10.1108)
Former Publisher: Elsevier BV (10.1016)
Total articles ≈ 309
Current Coverage
SCOPUS
INSPEC
DOAJ
Archived in
SHERPA/ROMEO

Latest articles in this journal

Loris Nanni
Published: 4 May 2021
Applied Computing and Informatics; doi:10.1108/aci-03-2021-0051

Abstract:
Purpose: Automatic DNA-binding protein (DNA-BP) classification is now an essential proteomic technology. Unfortunately, many systems reported in the literature are tested on only one or two datasets/tasks. The purpose of this study is to create an optimal and universal system for DNA-BP classification, one that performs competitively across several DNA-BP classification tasks.
Design/methodology/approach: Efficient DNA-BP classifier systems require the discovery of powerful protein representations and feature extraction methods. Experiments were performed that combined and compared descriptors extracted from state-of-the-art matrix/image protein representations. These descriptors were trained on separate support vector machines (SVMs) and evaluated. Convolutional neural networks with different parameter settings were fine-tuned on two matrix representations of proteins. Decisions were fused with the SVMs using the weighted sum rule and evaluated to experimentally derive the most powerful general-purpose DNA-BP classifier system.
Findings: The best ensemble proposed here produced comparable, if not superior, classification results in a broad and fair comparison with the literature across four different datasets representing a variety of DNA-BP classification tasks, demonstrating both the power and the generalizability of the proposed system.
Originality/value: Most DNA-BP methods proposed in the literature are validated on only one (rarely two) datasets/tasks. In this work, the authors report the performance of their general-purpose DNA-BP system on four datasets representing different DNA-BP classification tasks. The excellent results of the proposed best classifier system demonstrate the power of the proposed approach. These results can now be used for baseline comparisons by other researchers in the field.
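For readers unfamiliar with the weighted sum rule used above to fuse the SVM and CNN decisions, a minimal sketch follows; the min-max score normalization, equal weights and random placeholder scores are illustrative assumptions, not the authors' actual configuration:

```python
import numpy as np

def weighted_sum_fusion(score_matrices, weights):
    """Fuse per-classifier score matrices (n_samples x n_classes)
    with the weighted sum rule."""
    fused = np.zeros_like(score_matrices[0], dtype=float)
    for scores, w in zip(score_matrices, weights):
        # Normalize each classifier's scores to [0, 1] before fusing
        s_min, s_max = scores.min(), scores.max()
        fused += w * (scores - s_min) / (s_max - s_min + 1e-12)
    return fused.argmax(axis=1)  # predicted class per sample

# Illustrative use: scores from an SVM and a CNN on 5 samples, 2 classes
svm_scores = np.random.rand(5, 2)   # placeholder decision scores
cnn_scores = np.random.rand(5, 2)   # placeholder softmax outputs
labels = weighted_sum_fusion([svm_scores, cnn_scores], weights=[0.5, 0.5])
```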
Firman M. Firmansyah
Published: 30 April 2021
Applied Computing and Informatics; doi:10.1108/aci-12-2020-0156

Abstract:
Purpose: In this study, the authors seek to understand the factors that naturally lead users to adopt two-factor authentication (2FA), without any attempt at intervention, by investigating factors within individuals that may influence their decision to adopt 2FA on their own.
Design/methodology/approach: A total of 1,852 individuals from all 34 provinces in Indonesia participated in this study by filling out online questionnaires. The authors discuss the results of the statistical analysis through the lens of loss aversion theory.
Findings: The authors found loss aversion, represented by a higher income that translates to greater potential pain from losing things, to be the most significant demographic factor behind 2FA adoption. On the contrary, those from a low-income background, even with some college degree, are more likely to skip 2FA despite their awareness of this technology. The authors also found the older generation, particularly females, to be among the most vulnerable groups when it comes to authentication-based cyber threats, as they are much less likely to adopt 2FA, or even to be aware of its existence in the first place.
Originality/value: Authentication is one of the most important topics in cybersecurity related to human-computer interaction. While 2FA increases the security level of authentication methods, it also requires extra effort that can translate to some level of inconvenience on the user's end. By identifying the associated factors on the user's end, a necessary intervention can be made so that more users are willing to jump on the 2FA adopters' train.
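The demographic analysis described above is the kind of question commonly explored with a logistic regression relating adoption to income, age, gender and education; a hypothetical sketch (all variable names and data below are invented for illustration, not taken from the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical encoded survey responses (columns: income bracket, age,
# is_female, has_college_degree); names and values are illustrative only.
X = np.array([[5, 25, 0, 1], [1, 60, 1, 1], [4, 30, 0, 1], [2, 41, 1, 0],
              [5, 55, 0, 0], [1, 63, 1, 1], [3, 29, 0, 1], [2, 47, 0, 0]])
y = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = adopted 2FA

clf = LogisticRegression().fit(X, y)
print(clf.coef_)  # sign and size of each coefficient hint at the factor's pull
```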
Arunit Maity, P. Prakasam, Sarthak Bhargava
Published: 1 April 2021
Applied Computing and Informatics; doi:10.1108/aci-10-2020-0105

The publisher has not yet granted permission to display this abstract.
Christophe Gaie, Bertrand Florat, Steven Morvan
Published: 1 April 2021
Applied Computing and Informatics; doi:10.1108/aci-12-2020-0159

The publisher has not yet granted permission to display this abstract.
Applied Computing and Informatics; doi:10.1108/aci-07-2020-0035

Abstract:
Purpose: This paper presents the Edge Load Management and Optimization through Pseudoflow Prediction (ELMOPP) algorithm, which aims to solve problems identified in previous algorithms. Through machine learning with nested long short-term memory (NLSTM) modules and graph theory, the algorithm predicts near-future traffic flow from past flow and traffic patterns and uses those predictions to inform its real-time decisions, both maximizing present traffic flow and decreasing future traffic congestion.
Design/methodology/approach: ELMOPP was tested against the ITLC and OAF traffic management algorithms using a single-intersection simulation modeled after the one presented in the ITLC paper.
Findings: The collected data support the conclusion that ELMOPP statistically significantly outperforms both algorithms in throughput rate, a measure of how many vehicles are able to exit inroads every second.
Originality/value: While ITLC and OAF require GPS transponders and GPS, speed sensors and radio, respectively, ELMOPP uses only traffic light camera footage, which is almost always readily available, in contrast to GPS and speed sensors.
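A faithful NLSTM nests one LSTM inside another's cell state; as a rough illustration of the flow-prediction idea only, here is a simplified stand-in using stacked Keras LSTMs on a toy traffic series (the window size, layer widths and synthetic data are all assumptions, not the paper's architecture):

```python
import numpy as np
import tensorflow as tf

# Simplified stand-in: stacked LSTMs instead of true nested LSTM cells,
# predicting the next traffic-flow reading from a window of past readings.
WINDOW = 12  # past 12 flow measurements per sample (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),  # predicted flow at the next time step
])
model.compile(optimizer="adam", loss="mse")

# Toy series: sinusoidal "traffic flow" with noise, sliced into windows
t = np.arange(1000, dtype=np.float32)
flow = np.sin(t / 24) + 0.1 * np.random.randn(1000).astype(np.float32)
X = np.stack([flow[i:i + WINDOW] for i in range(1000 - WINDOW)])[..., None]
y = flow[WINDOW:]
model.fit(X, y, epochs=2, verbose=0)
```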
Soha Rawas
Published: 15 December 2020
Applied Computing and Informatics; doi:10.1108/aci-11-2020-0123

Abstract:
Purpose: Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many application areas such as health-care systems, pattern recognition, traffic control, surveillance systems, etc. However, accurate segmentation is a critical task, since finding a correct model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model that aims to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines the three benchmark distribution thresholding techniques, Gaussian, lognormal and gamma distributions, to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research Cambridge (MSRC), the Berkeley Segmentation Benchmark Data Set (BSDS) and Common Objects in COntext (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with significant processing time reduction compared to other segmentation models, across different types and fields of benchmark data sets.
Design/methodology/approach: The proposed PPSM combines the three benchmark distribution thresholding techniques, Gaussian, lognormal and gamma distributions, to estimate an optimum threshold value that leads to optimum extraction of the segmented region.
Findings: On the basis of the achieved results, it can be observed that the proposed PPSM-minimum cross-entropy thresholding (PPSM-MCET)-based segmentation model is a robust, accurate and highly consistent method with high-performance ability.
Originality/value: A novel hybrid segmentation model is constructed exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, to provide accurate and high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort of MCET computing. The proposed model might be used as a valuable tool in many application areas such as health-care systems, pattern recognition, traffic control, surveillance systems, etc.
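The MCET criterion underlying PPSM-MCET can be sketched in a few lines; the plain single-threshold version below omits the paper's Gaussian/lognormal/gamma distribution fitting and parallel boosting, so it is only a baseline illustration of the cross-entropy search:

```python
import numpy as np

def mcet_threshold(image, bins=256):
    """Minimum cross-entropy threshold (Li's criterion) on a grayscale image.
    A plain exhaustive search; PPSM additionally fits Gaussian, lognormal
    and gamma models and parallelizes this computation."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    levels = (edges[:-1] + edges[1:]) / 2  # bin centres as grey levels
    levels = np.maximum(levels, 1e-12)     # avoid log(0)
    gh = levels * hist                     # first-moment contributions
    best_t, best_eta = levels[0], np.inf
    for k in range(1, bins):
        n1, n2 = hist[:k].sum(), hist[k:].sum()
        if n1 == 0 or n2 == 0:
            continue
        m1, m2 = gh[:k].sum(), gh[k:].sum()
        mu1, mu2 = m1 / n1, m2 / n2        # class means below/above threshold
        eta = -m1 * np.log(mu1) - m2 * np.log(mu2)  # constant term dropped
        if eta < best_eta:
            best_eta, best_t = eta, levels[k]
    return best_t

# Illustrative use on a synthetic bimodal image
img = np.concatenate([np.random.normal(60, 10, 5000),
                      np.random.normal(180, 15, 5000)]).reshape(100, 100)
mask = img > mcet_threshold(img)
```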
Nalini Chintalapudi, Francesco Amenta
Published: 10 December 2020
Applied Computing and Informatics; doi:10.1108/aci-09-2020-0059

Abstract:
Purpose: As of July 30, 2020, more than 17 million novel coronavirus disease 2019 (COVID-19) cases had been registered, including 671,500 deaths. Yet there is no immediate medicine or vaccination to control this dangerous pandemic, and researchers are trying to implement mathematical or time series epidemic models to predict disease severity with nationwide data.
Design/methodology/approach: In this study, the authors considered daily COVID-19 infection data from the four most affected nations (the USA, Brazil, India and Russia) to conduct 60-day forecasting of total infections. To do that, the authors adopted a machine learning (ML) model called Fb-Prophet: the total numbers of confirmed cases in the four countries up to the end of July were collected, and projections were made by employing the Prophet logistic growth model.
Findings: Results highlighted that by late September, the estimated outbreak could reach 7.56, 4.65, 3.01 and 1.22 million cases in the USA, Brazil, India and Russia, respectively. The authors found some underestimation and overestimation of daily cases, and the linear model of actual vs predicted cases showed a statistically significant fit (R2 value of 0.995).
Originality/value: In this paper, the authors adopted the Fb-Prophet ML model because it can predict the epidemic trend and derive an epidemic curve.
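For orientation, a logistic-growth forecast of the kind described can be set up with the Prophet library in a few lines; the synthetic case series and the carrying-capacity cap below are illustrative assumptions, not the study's fitted values:

```python
import numpy as np
import pandas as pd
from prophet import Prophet  # formerly distributed as fbprophet

# Toy cumulative-case S-curve standing in for a national case series
days = pd.date_range("2020-02-01", "2020-07-31", freq="D")
cases = 7.5e6 / (1 + np.exp(-0.05 * (np.arange(len(days)) - 120)))
df = pd.DataFrame({"ds": days, "y": cases, "cap": 8_000_000})  # assumed cap

m = Prophet(growth="logistic")   # logistic growth needs a 'cap' column
m.fit(df)

future = m.make_future_dataframe(periods=60)  # forecast 60 days ahead
future["cap"] = 8_000_000
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```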
Nalini Chintalapudi, Francesco Amenta
Published: 26 October 2020
Applied Computing and Informatics; doi:10.1108/aci-09-2020-0060

Abstract:
Purpose: After the identification of a novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in Wuhan, China, the disease spread widely, becoming a worldwide pandemic. In Italy, about 240,000 people had been infected with this virus, including 34,721 deaths, by the end of June 2020. To control this new pandemic, epidemiologists recommend the enforcement of serious mitigation measures like country lockdown, contact tracing or testing, social distancing and self-isolation.
Design/methodology/approach: This paper presents the most popular epidemic model, with susceptible (S), exposed (E), infected (I) and recovered (R) compartments, collectively called SEIR, to understand the virus spreading among the Italian population.
Findings: The developed SEIR model explains the infection growth across Italy and presents epidemic rates before and after the country lockdown. The results demonstrate that following strict measures such as country lockdown, along with high testing, is making Italy practically a pandemic-free country.
Originality/value: These models largely help to estimate and understand how an infectious agent spreads in a particular country and how individual factors can affect the dynamics. Further studies like classical SEIR modeling can improve the quality of data, and implementation of this modeling could represent a novel contribution to epidemic models.
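The SEIR compartmental model reduces to four coupled ordinary differential equations; a minimal sketch with scipy follows, where the rate parameters are generic textbook values, not those fitted to the Italian data in the paper:

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    """Classic SEIR derivatives: beta = transmission rate,
    sigma = 1/incubation period, gamma = 1/infectious period."""
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return dS, dE, dI, dR

N = 60_000_000                        # population of Italy (approx.)
y0 = (N - 1, 0, 1, 0)                 # one initial infectious case
t = np.linspace(0, 180, 181)          # days
beta, sigma, gamma = 0.9, 1 / 5.2, 1 / 10  # assumed rates; lockdown lowers beta
S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma, N)).T
```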
Francisco Liébana-Cabanillas, Juan Sánchez-Fernández, Luis Javier Herrera
Published: 12 October 2020
Applied Computing and Informatics; doi:10.1108/aci-06-2020-0003

Abstract:
Purpose: The aim of this research is to assess the influence of the underlying service quality variable, usually related to university students' perception of the educational experience. Another aspect analysed in this work is the development of a procedure to determine which variables are most significant in assessing students' satisfaction.
Design/methodology/approach: In order to achieve both goals, a twofold methodology was adopted. In the first phase of research, an assessment of service quality was performed with data gathered from 580 students, in a process involving the adaptation of the SERVQUAL scale through a multi-objective optimization methodology. In the second phase, results obtained from students were compared with those obtained from the teaching staff at the university.
Findings: Results from the analysis revealed the most significant service quality dimensions from the students' viewpoint, according to the scores that they provided. Comparison with the results from the teaching staff showed noticeable differences when assessing academic quality.
Originality/value: Significant conclusions can be drawn from the theoretical review of the empirical evidence obtained through this study, helping with the practical design and implementation of quality strategies in higher education, especially in regard to university education.
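SERVQUAL-style analyses typically compute gap scores (perception minus expectation) per quality dimension and compare groups; a hypothetical sketch follows, with the dimension names, staff sample size and all scores invented for illustration (only the 580-student sample size comes from the abstract):

```python
import numpy as np
from scipy import stats

# Hypothetical SERVQUAL gap analysis: gap = perception - expectation,
# compared between students and teaching staff per quality dimension.
dimensions = ["tangibles", "reliability", "responsiveness",
              "assurance", "empathy"]
rng = np.random.default_rng(0)
student_gaps = rng.normal(-0.6, 0.8, size=(580, 5))  # 580 students surveyed
staff_gaps = rng.normal(-0.2, 0.8, size=(40, 5))     # assumed staff sample

for j, dim in enumerate(dimensions):
    t_stat, p = stats.ttest_ind(student_gaps[:, j], staff_gaps[:, j],
                                equal_var=False)  # Welch's t-test per dimension
    print(f"{dim}: mean student gap {student_gaps[:, j].mean():+.2f}, p={p:.3f}")
```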
Siddarth Nair, Abhishek Kaushik, Harnaik Dhoot
Published: 14 September 2020
Applied Computing and Informatics; doi:10.1016/j.aci.2019.05.001

The publisher has not yet granted permission to display this abstract.