Results in Azerbaijan Journal of High Performance Computing: 79

Lida Naderlou, Non-Profit Higher Education Institutions Roozbeh Zanjan Branch, Zahra Tayyebi Qasabeh, Payame Noor University of Guilan
Azerbaijan Journal of High Performance Computing, Volume 5, pp 72-86;

Science and technology are proliferating, and complex networks have become a necessity in our daily life, so separating people from complex networks built on the fundamental needs of human life is almost impossible. This research presents a multi-layer dynamic social network model for discovering influential groups, based on an improved frog-leaping algorithm and C-means clustering. We collected the data in the first step, then conducted data cleansing and normalization to identify influential individuals and groups from the optimal data by forming a decision matrix. This matrix was used to identify and cluster nodes (based on fuzzy clustering) and to determine each group's importance. The frog-leaping algorithm was used to improve the identification of influence parameters, which improved the estimation of node importance and thus the discovery of influential individuals and groups in social networks. In the clustering measurement and simulation section, the proposed method was contrasted against the K-means method, and its equilibrium value in cluster selection was 5. The proposed method presented a more genuine improvement compared to the other methods: its precision indicator showed an improvement of 3.3 compared to similar methods and 3.8 compared to the primary M-ALCD method.
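The C-means clustering step mentioned above can be illustrated with a minimal sketch. The code below implements the standard fuzzy C-means alternating updates (membership-weighted centers, then the inverse-distance membership rule) on toy 1-D data; it is not the paper's decision-matrix pipeline, and the frog-leaping tuning is not reproduced.

```python
# Minimal fuzzy C-means sketch (toy 1-D data, illustrative only).
def fcm(points, c=2, m=2.0, iters=50):
    # deterministic initial memberships: point i fully in cluster i % c
    u = [[1.0 if j == i % c else 0.0 for j in range(c)] for i in range(len(points))]
    centers = [0.0] * c
    for _ in range(iters):
        # update each center as the membership-weighted mean
        for j in range(c):
            num = sum((u[i][j] ** m) * points[i] for i in range(len(points)))
            den = sum(u[i][j] ** m for i in range(len(points)))
            centers[j] = num / den
        # update memberships: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        for i, x in enumerate(points):
            d = [abs(x - centers[j]) + 1e-12 for j in range(c)]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
    return centers, u

centers, u = fcm([1.0, 1.2, 0.8, 8.0, 8.2, 7.8])
```

On this toy input the two centers converge near the two obvious groups (around 1.0 and 8.0), and each membership row stays normalized to 1, which is the "soft" assignment that distinguishes C-means from hard K-means.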
Mohammad Azimnezhad, Science and Research Islamic Azad University, Mohammad Manthouri, Mohammad Teshnehlab, Shahed University, K.N. Toosi University
Azerbaijan Journal of High Performance Computing, Volume 5, pp 143-164;

This paper proposes a vaccination approach based on robust control for the SEIR (susceptible, exposed, infectious, and recovered populations) model of epidemic diseases. First, a classic sliding mode controller is investigated based on the SEIR model. Next, fuzzy logic is utilized to better approximate the uncertainties in the SEIR system using the sliding mode controller. The proposed controller is therefore a fuzzy sliding mode controller, which, compared to the sliding mode controller, provides an appropriate estimation of the system's actual parameters and removes the chattering phenomenon from the control signal. The stability of the controlled system is guaranteed using Lyapunov theory. In simulations using data from previous articles, the classical sliding mode controller and the proposed controller are compared. Simulation results show that the proposed controller eliminates the susceptible, exposed, and infectious subpopulations, eradicating the disease. Comparison with other methods reveals the better efficiency of the proposed method.
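To make the SEIR dynamics concrete, here is a toy simulation with a constant vaccination rate `v` moving susceptibles directly to the recovered compartment. All parameter values (`beta`, `sigma`, `gamma`, `v`) are illustrative assumptions; the paper's fuzzy sliding mode controller, which computes the vaccination signal adaptively, is not reproduced.

```python
# Toy SEIR model with constant vaccination of susceptibles (Euler steps).
def seir_step(S, E, I, R, beta=0.5, sigma=0.2, gamma=0.1, v=0.05, dt=0.1):
    N = S + E + I + R
    dS = -beta * S * I / N - v * S        # infection + vaccination drain S
    dE = beta * S * I / N - sigma * E     # exposed incubate at rate sigma
    dI = sigma * E - gamma * I            # infectious recover at rate gamma
    dR = gamma * I + v * S                # recovered + vaccinated
    return S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt

S, E, I, R = 0.99, 0.0, 0.01, 0.0  # normalized population
for _ in range(2000):               # simulate 200 time units
    S, E, I, R = seir_step(S, E, I, R)
```

Because the four derivatives sum to zero, total population is conserved at every step; with vaccination active, the susceptible and infectious fractions decay toward zero, which is the eradication behavior the abstract describes.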
Armin Rabieifard, Non-Profit Higher Education Institutions Sardarjangal Branch, Lida Naderlou, Zahra Tayyebi Qasabeh, Payame Noor University of Guilan
Azerbaijan Journal of High Performance Computing, Volume 5, pp 94-111;

Today, energy consumption is important in calculating the heating and cooling loads of residential, industrial, and other units. In order to calculate, design, and select the heating-cooling system, a suitable method of consumption and cost analysis is needed to prepare the required data for air conditioning motors and to design an intelligent system. In this research, a method for balancing the temperature of an intelligent building in the context of the Internet of Things is presented, based on a combination of network cutting and clustering techniques. To optimize the algorithm, it is necessary to convert heterogeneous data into homogeneous data, which was done by introducing a complex network and appropriate clustering techniques. Information was collected by the IoT, a graph matrix of these data was generated, and the data were then processed by an artificial intelligence method combining three clustering methods, hierarchical clustering, Gaussian mixture, and K-means, for comparison with the preliminary results. Finally, due to the reliability of the K-means method and the use of majority voting for weights, the K-means method reached 0.4 and was selected as the clustering method. The main part of the proposed method is based on different classifications, which were evaluated against appropriate criteria. Acceptable results were recorded: with a minimum value of 88% and a highest value of about 100%, the results of the proposed method can be confirmed, and all hypotheses of the method can be declared possible and acceptable.
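The K-means step that the abstract selects can be sketched in a few lines. The data below are invented temperature-like readings (not the paper's IoT dataset), and the deterministic first-k initialization is a simplifying assumption for illustration.

```python
# Minimal K-means sketch on 1-D sensor-like readings (illustrative only).
def kmeans(points, k=2, iters=20):
    centers = points[:k]  # deterministic init: first k points as centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in points:  # assign each point to its nearest center
            j = min(range(k), key=lambda j: abs(x - centers[j]))
            clusters[j].append(x)
        for j in range(k):  # recompute each center as the cluster mean
            if clusters[j]:
                centers[j] = sum(clusters[j]) / len(clusters[j])
    return centers

temps = [20.1, 20.4, 19.8, 27.9, 28.3, 28.0]  # two temperature regimes
centers = kmeans(temps)
```

On these readings the algorithm converges in two iterations to one center per temperature regime; in the paper's pipeline, such cluster assignments would then be weighted by majority voting.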
Araz Aliev, Azerbaijan State Oil and Industry University, Yunis Gahramanli, Samir Aliyev, Institute of Mathematics and Mechanics of Ministry of Science and Education of the Republic of Azerbaijan
Azerbaijan Journal of High Performance Computing, Volume 5, pp 87-93;

This paper describes the opportunity to use artificial neural networks to predict the result of a chemical reaction under given conditions. A three-layer neural network was applied to predict the mass content of alkaline, trained using the results of the chemical reactions. The values of the chemical quantities before the reaction were used as inputs, and the values of the chemical quantities after the reaction as outputs. HPC technologies and multi-worker technology were used to obtain accurate results.
Aliaa Kadhim Gabbar Alwaeli, Islamic Azad University, Karrar Ezzulddin Kareem Al-Hamami
Azerbaijan Journal of High Performance Computing, Volume 5, pp 131-142;

Utilizing virtualization technology, a cloud computing service provides on-demand access to computer resources and services through the internet. There are new ways to control the functioning of cloud resources since they are constructed in diverse places. More than one algorithm may be included in these schemes. One of the most important components of a high-performing cloud computing system is the scheduling mechanism. The scheduling system includes not only task scheduling strategies but also fault tolerance and load balancing methods. Fault handling may be accomplished via scheduling systems. We analyze and contrast several scheduling algorithms in terms of their benefits and drawbacks.
North Tehran Branch Azad University, Faezeh Gholamrezaie, Arash Hosseini, Nigar Ismayilova, Shahed University, Azerbaijan State Oil and Industry University
Azerbaijan Journal of High Performance Computing, Volume 5, pp 57-71;

Renewable energy is one of the most critical issues of continuously increasing electricity consumption and is becoming a desirable alternative to traditional methods of electricity generation such as coal or fossil fuels. This study aimed to develop, evaluate, and compare the performance of multiple linear regression (MLR), support vector regression (SVR), bagging and random forest (R.F.), and decision tree (CART) models in predicting wind speed in Southeastern Iran. The data used in this research consist of 10-minute statistics of wind speed at 10-meter, 30-meter, and 40-meter wind turbines, the standard deviation of wind speed, air temperature, humidity, and the amount of solar radiation. The bagging and random forest model, with an RMSE of 0.0086, performs better than the others on this dataset, while the MLR model, with an RMSE of 0.0407, performs worst.
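The model comparison above rests on RMSE. As a minimal sketch of that evaluation, the code below fits a closed-form least-squares line to invented wind-speed-versus-height data (not the paper's dataset or its SVR/random-forest models) and compares its RMSE against a predict-the-mean baseline.

```python
# Hedged sketch: comparing two predictors by RMSE on synthetic data.
import math

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# synthetic: wind speed rising roughly linearly with turbine height (m)
heights = [10.0, 30.0, 40.0, 10.0, 30.0, 40.0]
speeds  = [4.1, 6.0, 7.2, 3.9, 6.2, 6.8]

# ordinary least squares slope and intercept (closed form)
n = len(heights)
mx = sum(heights) / n
my = sum(speeds) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(heights, speeds))
         / sum((x - mx) ** 2 for x in heights))
intercept = my - slope * mx

linear_pred = [intercept + slope * x for x in heights]
baseline_pred = [my] * n  # naive baseline: always predict the mean speed
```

Since least squares minimizes squared error by construction, the fitted line's RMSE is guaranteed to be no worse than the mean baseline's; the paper applies the same yardstick across its four model families.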
Zahra Tayyebi Qasabeh, Payame Noor University of Guilan, Seyyed Sajjad Mousavi, Pol Talshan Azad University
Azerbaijan Journal of High Performance Computing, Volume 5, pp 33-51;

The blockchain is a revolutionary technology transforming how assets are managed digitally and securely on a distributed network. Blockchain's decentralized technology can solve the distrust problems of the traditional centralized network and enhance the privacy and security of data. It provides a distinct way of storing and sharing data through blocks chained together. The blockchain is highly appraised and endorsed for its decentralized infrastructure and peer-to-peer nature. However, much research about the blockchain is overshadowed by Bitcoin, although blockchain can be applied to a variety of fields far beyond Bitcoin. Blockchain has shown its potential for transforming traditional industry with its essential characteristics: decentralization, persistency, anonymity, and auditability. Undoubtedly, blockchain technology can significantly change the global business environment and lead to a paradigm shift in the functioning of the business world. However, to unlock this tremendous potential, various challenges in the adoption and viability of blockchain technology must be addressed before we can see the legal, economic, and technical viability of this technology in the operation of various business applications. In this study, the fundamental concepts of blockchain are discussed first, along with the way it works and its architecture; since all technologies face challenges, this technology is no exception, and its challenges are discussed based on the related works.
Elviz Ismayilov, Azerbaijan State Oil and Industry University
Azerbaijan Journal of High Performance Computing, Volume 5, pp 52-56;

Cloud technologies are currently one of the fastest-growing directions in the IT field. This architecture uses virtualization technology in several computing paradigms (distributed systems, grid and service computing, etc.). It is possible to reach the goal using the unlimited possibilities of the Internet. It should be noted that most companies have transferred their resources and capabilities to cloud technology. According to Check Point Software Technologies Ltd 2020 statistics, 39% of enterprises said that security is important in cloud technology, and 52% said that public and hybrid cloud technologies have become more critical in the direction of security over the past two years. Enterprises are concerned about personal data storage and the use of special software enabled by cloud technologies, and they are considering these points. This paper also discusses the various benefits of the cloud along with its challenges and applications.
Faezeh Gholamrezaie, Azar Feyziyev, Azerbaijan State Oil and Industry University
Azerbaijan Journal of High Performance Computing, Volume 5, pp 3-32;

The effect of dynamic and interactive events on the function of the elements that make up the computing system manager causes the time required to run the user program to increase or the operation of these elements to change. These changes either increase the execution time of the scientific program or make the system incapable of executing the program. Using the concepts of computational processes, process migration, and vector algebra, we try to analyze and enable the Flushing process migration mechanism in support of distributed Exascale systems despite dynamic and interactive events. This paper investigates the Flushing process migration management mechanism in distributed Exascale systems, the effects of dynamic and interactive occurrences on the computational system, and the impact of dynamic and interactive events on the system.
Hadis Oftadeh, Islamic Azad University, Mohammad Manthouri, Shahed University
Azerbaijan Journal of High Performance Computing, Volume 5, pp 112-130;

Correct diagnosis of diseases is the main problem in medicine. Artificial intelligence and learning methods have been developed to solve problems in many fields, such as biology and the medical sciences. Correct diagnosis before treatment is the most challenging and the first step in achieving proper cures. The primary objective of this paper is to introduce an intelligent system, built on a deep neural network, that can diagnose and distinguish between hepatitis types B and C using a set of general tests for liver health. The deep network used in this research is the Deep Boltzmann Machine (DBM). Learning components within the Restricted Boltzmann Machine (RBM) lead to the intended results. The RBMs extract features to be used in an efficient classification process. An RBM is computationally robust and well-suited to extracting high-level features for diagnosing hepatitis B and C. The method was tested on general items in laboratory tests that check the liver's health. The DBM could predict hepatitis types B and C with an accuracy between 90.1% and 92.04%. Predictive accuracy was obtained with 10-fold cross-validation. Compared with other methods, simulation results on the DBM architecture reveal the proposed method's efficiency in diagnosing hepatitis B and C. What made this approach successful was a deep learning network in addition to discovering communication and extracting knowledge from the data.
Iqra Rashid, COMSATS University Islamabad, Javeria Naz
Azerbaijan Journal of High Performance Computing, Volume 4, pp 206-231;

In recent years, breast cancer detection has been the most popular research topic in medical image analysis. It is the most common malignancy in women, and men can also be affected. According to the American Cancer Society, in 2019, almost two million new cases were registered, and the death toll was almost 41,000. The death rate can be reduced if the cancer is diagnosed in time. For cancer detection, different modalities are used, such as MRI, ultrasound, and mammography. The most common and popular modality is mammography. A mammogram shows breast irregularities that are benign or malignant. In digital mammography, it is not easy to extract accurate breast regions. The main problem in extracting the region of interest is pectoral muscle suppression. The pectoral muscle appears in the breast area, and it is sometimes marked as an area of attention, which causes a false positive rate. It is therefore essential to eradicate pectoral muscles from the breast image. This manuscript gives an overview of basic breast cancer terminologies. The work also analyzes state-of-the-art imaging procedures used for breast cancer analysis.
Azerbaijan Journal of High Performance Computing, Volume 4, pp 170-187;

There is a possibility of dynamic and interactive events occurring at any moment of the scientific program's execution in the computing system. While affecting the computational processes in the system, a dynamic and interactive occurrence also affects the function of the elements that make up the management element of the computing system. This effect causes the time required to run the user program to increase or the function of these elements to change. These changes either increase the execution time of the scientific program or make the system incapable of executing the program. The occurrence of dynamic and interactive events creates new situations in the computing system for which no mechanisms were defined and considered when the computing system was designed. In this paper, we investigate the Lazy-Copy process migration management mechanism in distributed large-scale systems and the effects of dynamic and interactive occurrences on the computational system. Using the concepts of computational processes, process migration, and vector algebra, we try to analyze and enable the Lazy-Copy process migration mechanism in support of distributed large-scale systems despite dynamic and interactive events.
Naveen S Pagad, Visvesvaraya Technological University, Pradeep N, Bapuji Institute of Engineering and Technology
Azerbaijan Journal of High Performance Computing, Volume 4, pp 232-241;

In light of the increasing number of clinical narratives, a modern framework for assessing patient histories and carrying out clinical research has been developed. With existing approaches, the process of recognizing clinical entities and extracting relations from clinical narratives suffered from error propagation. Thus, we propose an end-to-end clinical relation extraction model in this paper. Clinical XLNet has been used as the base model to address the discrepancy issue, and the proposed work has been tested with the N2C2 corpus.
Zdzislaw Polkowski, Wroclaw University of Economics and Business, Agnieszka Wierzbicka
Azerbaijan Journal of High Performance Computing, Volume 4, pp 135-154;

The cosmetics industry is one of the Polish economy's largest and most promising branches. As per data retrieved in 2021, there are about 400 cosmetics manufacturers in Poland, and the country has the largest number of start-ups in the cosmetics sector in Europe. The Industry 4.0 concept in this area assumes that cosmetics companies are implementing intelligent solutions and an effective supply chain of ecological raw materials; in the future, they will be referred to as Smart Factories. The article aims to present visible trends in cosmetics companies in Industry 4.0. The paper describes the genesis and concept of Industry 4.0. Then, the most critical technologies that shape the development of the fourth industrial revolution in the cosmetics industry are presented and characterized. The research conclusions and further studies on Industry 5.0 and 6.0 are also presented. Along with this, the paper focuses on the research problem of some cosmetics companies' lack of awareness of modern technological solutions and the lack of qualified staff who could implement the new ICT solutions.
Dhofar University, Umer Farooq, Prajoona Valsalan, Najam Ul Hasan, Manaf Zghaibeh
Azerbaijan Journal of High Performance Computing, Volume 4, pp 188-197;

Multi-Carrier Waveform (MCW) modeling and design are envisioned as among the most important and challenging tasks for 6th generation (6G) communication networks. In contrast to Orthogonal Frequency Division Multiplexing (OFDM) waveforms, new and innovative design techniques for MCWs have been designed and proposed in recent literature because of their performance superiority. The typical OFDM waveforms have dominated the previous generations of communication systems and proven their potential in many real-time communication environments, but they may not be sufficient to meet the ambitious targets of 6G communication systems. Hence, new solutions such as flexible MCWs and relevant technological advancements in waveform design are needed. This paper proposes designing and evaluating a new MCW design to meet the 6G requirements for spectral efficiency, throughput, and overall system capacity. On the transmitter side, the MCW design proposed in this article employs power-domain multiplexing, namely Non-Orthogonal Multiple Access (NOMA), and phase rotations of the input signal to the Universal Filtered Multi-Carrier (UFMC) modulations, where the Base Station (BS) assigns different power levels to each user while using the same frequency resources. MATLAB® simulations were performed to assess the proposed MCW's performance. Detailed simulation data are employed for comparative performance analysis of the proposed MCW. The results show the superior performance of the proposed MCW approach compared to the conventional 5th generation (5G) NOMA-UFMC waveform.
Hafiz Gulfam Ahmad, Ghazi University, Muhammad Jasim Shah, Emerson University
Azerbaijan Journal of High Performance Computing, Volume 4, pp 267-279;

Cardiovascular diseases (CVDs) are one of the most common health problems nowadays. Early diagnosis of heart disease is a significant concern for health professionals in medical centers. An incorrect forecast is more likely to have negative effects, such as disability or even death. Our research is motivated by the desire to predict cardiovascular diseases based on data mining, which can be valuable to medical centers. Various data mining approaches are used for the early detection of cardiac diseases. This paper examines several research publications on various heart diseases. We compare and contrast several machine learning methods, such as KNN, ANN, Decision Tree, SVM, and Random Forest. We looked at 918 observations with several features related to heart disease. A comparative study with age and sex is conducted to predict cardiac disease using the decision tree approach. Our dataset contains 11 features that are used to forecast possible heart disease. One of the findings indicates that the age factor has the most significant impact on heart disease. According to our findings, heart attacks cause four out of every five CVD deaths, with one-third of these deaths occurring suddenly in those under 70.
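The decision-tree approach on the age feature can be sketched by the core operation a tree performs: picking the split threshold that minimizes weighted Gini impurity. The ages and labels below are invented toy data, not the paper's 918-observation dataset.

```python
# Hedged sketch: best decision-tree split on age by Gini impurity (toy data).
def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)        # fraction with disease
    return 1.0 - p * p - (1.0 - p) ** 2  # Gini impurity for binary labels

def best_age_split(ages, disease):
    best = None
    for t in sorted(set(ages)):  # try each observed age as a threshold
        left = [d for a, d in zip(ages, disease) if a <= t]
        right = [d for a, d in zip(ages, disease) if a > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ages)
        if best is None or score < best[0]:
            best = (score, t)
    return best[1]

ages =    [34, 40, 45, 52, 58, 63, 66, 70]
disease = [0,  0,  0,  0,  1,  1,  1,  1]
```

On this toy sample, `best_age_split` finds the threshold (52) that perfectly separates the labels, illustrating how a tree could surface age as the most informative attribute.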
Nigar Ismayilova, Azerbaijan State Oil and Industry University
Azerbaijan Journal of High Performance Computing, Volume 4, pp 198-205;

In this paper, opportunities to use fuzzy set theory for constructing an appropriate load balancing model in Exascale distributed systems were studied. The occurrence of dynamic and interactive events in multicore computing systems leads to uncertainty. As fuzzy logic-based solutions allow the management of uncertain environments, there are several promising approaches and open challenges for the development of load balancing models in Exascale computing systems.
Mohsin Naseer, COMSATS University Islamabad, Javeria Naz
Azerbaijan Journal of High Performance Computing, Volume 4, pp 242-262;

Nowadays, people’s lives are becoming more and more luxurious with the use of technologies. Everyone wants ease and comfort. The trend of having personal vehicles for daily usage is increasing rapidly. As more and more people buy vehicles, the traffic burden on the roads increases, causing accidents. When an accident happens, people get injured, and if emergency services like medical aid are not provided on time, it may cause death. In the upcoming era of smart cities, every facility and service will be centralized and connected to a server; devices will therefore be used to send a signal to the nearest emergency response center when an accident is detected on CCTV footage. This work reviews accident and accidental vehicle analysis through automated approaches. The areas of application are highlighted, along with the recent trends and practices discussed in this article.
Suleyman Suleymanzade
Azerbaijan Journal of High Performance Computing, Volume 4, pp 263-266;

This article presents a survey of two well-known document ranking algorithms, the TF-IDF and BM25 methods, on a single CPU and as parallel processes via HPC. An Amazon review dataset with more than two million reviews was used to measure the ranking parameters. During the experiment, we set the number of workers for parallel processing to one and three. Four benchmarks evaluated the preprocessing and reading time, vectorization time, TF-IDF transformation time, and overall time. The resulting metrics show a significant difference in speed.
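For readers unfamiliar with the two scoring functions being benchmarked, here is a minimal single-process sketch of TF-IDF and BM25 term scoring over a tiny hand-made corpus (the HPC parallelization across workers and the Amazon dataset are not reproduced; `k1` and `b` are the usual BM25 defaults, assumed rather than taken from the paper).

```python
# Minimal TF-IDF and BM25 term-scoring sketch (toy corpus, illustrative only).
import math

docs = [["fast", "shipping", "great", "product"],
        ["bad", "product", "slow", "shipping"],
        ["great", "great", "value"]]
N = len(docs)
avgdl = sum(len(d) for d in docs) / N  # average document length
df = {}                                # document frequency per term
for d in docs:
    for term in set(d):
        df[term] = df.get(term, 0) + 1

def tfidf(term, doc):
    tf = doc.count(term) / len(doc)
    idf = math.log(N / df.get(term, 1))
    return tf * idf

def bm25(term, doc, k1=1.5, b=0.75):
    tf = doc.count(term)
    idf = math.log((N - df.get(term, 0) + 0.5) / (df.get(term, 0) + 0.5) + 1)
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
```

Both functions reward a term that is frequent in a document but rare in the corpus; BM25 additionally saturates term frequency and normalizes by document length, which is why the two rankings (and their vectorization costs) are worth benchmarking separately.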
Mutasem Alzoubaidi, University of Wyoming, Adli Al-Balbissi, Abdel Rahman Alzoubaidi, Amr Alzoubaidi, Baha Azzeh, Ahmed Al-Mansour, Ahmed Farid, The University of Jordan, Eastern Washington University, et al.
Azerbaijan Journal of High Performance Computing, Volume 4, pp 155-169;

This paper conducted an operational and economic analysis to assess alternative solutions to traffic congestion. The alternatives involved integrating adaptive traffic signal control (ATSC) with connected vehicle technology (ATSC-CV) and applying various conventional and unconventional solutions. The studied conventional scenarios include signal timing optimization, signal actuation, and upgrading existing intersections to interchanges. The unconventional scenarios involved converting two intersections to interchanges and a third to a continuous green-T intersection (CGTI). Other unconventional alternatives involved deploying ATSC-CV-based systems assuming varying market penetration rates (MPRs). The operational performance of each alternative was analyzed using VISSIM microsimulation software. To model the driving behavior of CVs, the Python programming language was used through the COM interface in VISSIM. One-way analysis of variance (ANOVA) and post-hoc testing results indicate that implementing any suggested alternative would substantially decrease the mean vehicular travel time compared to the fixed signal control strategy currently implemented. Specifically, the ATSC-CV-based systems yielded notable travel time reductions ranging from 9.5% to 21.3%. ANOVA results also revealed that the highest benefit-to-cost ratio among all alternatives belonged to the scenarios in which the MPR of CVs was 100%. It was also found that ATSC-CV-based systems with MPRs of 25% and 50% would be as feasible as converting signalized intersections to underpass interchanges.
Mohammed Zidan, Mahmoud Abdel-Aty
Azerbaijan Journal of High Performance Computing, Volume 4, pp 48-52;

An algorithm that solves a generalized form of the Deutsch-Jozsa problem was proposed. This algorithm uses the degree-of-entanglement computing model to classify an arbitrary oracle Uf into one of 2^n classes. In this paper, we analyze this algorithm based on the degree of entanglement.
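As a worked illustration of the standard Deutsch-Jozsa criterion that this generalization builds on: after the usual Hadamard-oracle-Hadamard circuit, the probability of measuring the all-zero string is |(1/2^n) Σ_x (-1)^f(x)|², which is 1 for a constant oracle and 0 for a balanced one. The classical check below verifies that amplitude formula; it is not the paper's degree-of-entanglement model.

```python
# Classical check of the Deutsch-Jozsa all-zero measurement probability.
def dj_zero_probability(f, n):
    # amplitude of |0...0> after H^(n) . phase-oracle . H^(n)
    amp = sum((-1) ** f(x) for x in range(2 ** n)) / 2 ** n
    return amp * amp

constant = lambda x: 1                       # constant oracle: f(x) = 1
balanced = lambda x: bin(x).count("1") % 2   # parity of x is balanced
```

A single query thus distinguishes the two classes with certainty; the generalized algorithm in the paper refines this binary verdict into one class out of 2^n.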
Ulphat Bakhishov, Azerbaijan State Oil and Industry University
Azerbaijan Journal of High Performance Computing, Volume 4, pp 126-131;

Distributed Exascale computing systems are HPC systems capable of performing 10^18 operations (one exaflop) per second in a dynamic and interactive environment without central managers. In such an environment, each node should manage its own load itself, and the basic rules of load distribution should be found for all nodes so that the load distribution can be optimized without central managers. This paper proposes an oscillation model for load distribution in fully distributed Exascale systems, defines some parameters for this model, and outlines future work.
Antonio Manzalini
Azerbaijan Journal of High Performance Computing, Volume 4, pp 53-59;

Today, like never before, we are witnessing a pervasive diffusion of ultra-broadband fixed-mobile connectivity, the deployment of Cloud-native 5G network and service platforms, and the wide adoption of Artificial Intelligence. This is the so-called Digital Transformation of our Society: as a matter of fact, the transformative role of Telecommunications and Information Communication Technologies (ICT) has long been witnessed as a precursor of scientific progress and economic growth in the modern world. Nevertheless, this transformation still lays its foundations on Electronics, which faces the impending end of Moore's Law; therefore, a rethinking of the long-term ways of doing computation and communications has already started. Among these different ways, quantum technologies might trigger the next innovation breakthrough in the medium to long term. In this direction, the paper provides an overview of the state of the art, challenges, and opportunities posed by an expected second wave of quantum technologies and services.
Syed Rashiq Nazar, Tapalina Bhattasali
Azerbaijan Journal of High Performance Computing, Volume 4, pp 113-125;

Sentiment analysis is a process in which we classify text data as positive, negative, neutral, or some other category, which helps understand the sentiment behind the data. Machine learning and natural language processing methods are mainly combined in this process. One can find customer sentiment in reviews, tweets, comments, etc. A company needs to evaluate the sentiment behind the reviews of its product. Customer sentiment can be a valuable asset to the company and ultimately helps the company make better decisions regarding its product marketing and improving product quality. This paper focuses on the sentiment analysis of customer reviews from Amazon. The reviews contain textual feedback along with a rating system. The aim is to build a supervised machine learning model to classify each review as positive or negative. As reviews are in text format, the text needs to be vectorized into a numerical format for the computer to process the data. To do this, we use the bag-of-words model and the TF-IDF (Term Frequency-Inverse Document Frequency) model. These two models are related to each other, and the aim is to find which model performs better in our case. Since the problem is a binary classification problem, the logistic regression algorithm is used. Finally, the performance of the model is calculated using a metric called the F1 score.
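The pipeline described above (vectorize text, then fit logistic regression) can be sketched end to end on a four-review toy corpus. The reviews, vocabulary, learning rate, and epoch count below are all invented for illustration; this is the bag-of-words variant only, not the paper's TF-IDF comparison or the Amazon dataset.

```python
# Sketch: bag-of-words vectors + logistic regression by gradient descent.
import math

reviews = [("great product love it", 1), ("terrible waste of money", 0),
           ("love the great quality", 1), ("terrible product broke", 0)]
vocab = sorted({w for text, _ in reviews for w in text.split()})

def bow(text):  # bag-of-words count vector over the fixed vocabulary
    words = text.split()
    return [words.count(v) for v in vocab]

X = [bow(t) for t, _ in reviews]
y = [label for _, label in reviews]
w = [0.0] * len(vocab)
b = 0.0
for _ in range(500):  # per-sample gradient descent on the log loss
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        p = 1.0 / (1.0 + math.exp(-z))          # sigmoid probability
        g = p - yi                              # gradient of log loss wrt z
        w = [wj - 0.1 * g * xj for wj, xj in zip(w, xi)]
        b -= 0.1 * g

def predict(text):
    z = sum(wj * xj for wj, xj in zip(w, bow(text))) + b
    return 1 if z > 0 else 0
```

After training, words that only appear in positive reviews receive positive weights and vice versa, which is exactly the separating behavior logistic regression provides for this binary task; swapping `bow` for TF-IDF features is the comparison the paper performs.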
Mehshan Ahad, Muhammad Fayyaz
Azerbaijan Journal of High Performance Computing, Volume 4, pp 60-90;

Human gender recognition is one of the most challenging tasks in computer vision, especially for pedestrians, due to large variations in human poses, video acquisition, illumination, occlusion, human clothes, etc. In this article, we consider gender recognition, which is very important in video surveillance. To automate gender recognition, we provide a novel technique based on the extraction of features through different methodologies. Our technique consists of four steps: a) preprocessing, b) feature extraction, c) feature fusion, and d) classification. In the first step, the area of interest, the full body, is separated from the images. After that, images are divided into two halves at a ratio of 2:3 to acquire sets of upper-body and lower-body images. In the second step, three handcrafted feature extractors, HOG, Gabor, and granulometry, extract the feature vectors using different score values. These feature vectors are fused to create one strong feature vector on which results are evaluated. Experiments are performed on full-body datasets to find the best configuration of features. The features are extracted through different feature extractors in different numbers to generate their feature vectors. Those features are fused to create a strong feature vector, which is then utilized for classification. For classification, SVM and KNN classifiers are used. Results are evaluated on five performance measures: accuracy, precision, sensitivity, specificity, and area under the curve. The best results were acquired on the upper body: 88.7% accuracy and 0.96 AUC. The results are compared with existing methodologies, and it is concluded that the proposed method achieves significantly higher results.
Farshad Rezaei, Shamsollah Ghanbari
Azerbaijan Journal of High Performance Computing, Volume 4, pp 39-47;

Cloud computing is a new technology recently being developed seriously. Scheduling is an essential issue in the area of cloud computing. There is an extensive literature concerning scheduling in the area of distributed systems, some of which is applicable to cloud computing. Traditional scheduling methods are unable to provide scheduling in cloud environments. According to a simple classification, scheduling algorithms in the cloud environment are divided into two main groups: batch mode and online heuristic scheduling. This paper focuses on the trust of cloud-based scheduling algorithms. According to the literature, the latest algorithm examined tries to optimize scheduling using the trust method. The existing algorithm has some drawbacks, including additional overhead and inaccessibility of past transaction data. This paper improves the trust-based algorithm to reduce the drawbacks of the existing algorithms. Experimental results indicate that the proposed method can execute better than the previous method. The efficiency of this method depends on the number of nodes and tasks: as the number of nodes and tasks increases, performance improves, although the time cost increases.
Hafiz Gulfam Ahmad, Iqra Tahir, Naveed Naeem Abbas
Azerbaijan Journal of High Performance Computing, Volume 4, pp 91-112;

In the past few years, software development has seen rapid growth, and developers have adopted different methods to provide efficient procedures in software development, thus reducing overall bug counts and time delays. Bidirectional model transformation (bx) is one such technique: it encompasses the development of object code in both directions, giving the developer an abstract view of the software. Over the years, researchers have produced many approaches to bidirectional model transformation, but their cost and best fit for effective model transformation remain open questions; in particular, a quantitative survey is needed to identify the best possible approach in bx. The methodology of this survey follows a systematic literature review (SLR) to identify around 20 different approaches proposed for bidirectional model transformation; these studies range from 2010 to date and are thus the latest in the respective fields of our survey. The gathered results are evaluated on a specific set of parameters, with cost and time of usage being the main aspects of these approaches; that predicament is what motivated us to produce a systematic literature review (SLR) on this very topic. Thus, this paper investigates different approaches based on their implementation cost and time delay, discusses their limitations, and examines how each approach is implemented. Approaches that perform well on both of these parameters have been selected. The main objective of this SLR is to provide insight into the different approaches and establish a well-balanced approach that can be used for bidirectional model transformation in software development.
Zdzislaw Polkowski, Sambit Kumar Mishra
Azerbaijan Journal of High Performance Computing, Volume 4, pp 3-14;

In a general scenario, the approaches linked to the innovation of large-scale data seem ordinary; the informational measures of such aspects can differ based on the application, as they are associated with different attributes that may support high data volumes and high data quality. Accordingly, the challenges can be identified with an assurance of high-level protection and data transformation with enhanced operational quality. Based on large-scale data applications running on different virtual servers, it is clear that the information can be measured by enlisting sources linked to networked sensors and provisioned by analysts. Therefore, it is essential to track the relevance of, and issues with, enormous information. While aiming towards knowledge extraction, applying large-scale data may involve analytical aspects to predict future events; accordingly, a soft computing approach can be implemented in such cases to carry out the analysis. During the analysis of large-scale data, it is essential to abide by the rules associated with security measures, because preserving sensitive information is the biggest challenge while dealing with large-scale data. As high risk is observed in such data analysis, security measures can be enhanced by provisioning authentication and authorization. Indeed, the major obstacles facing these techniques while analyzing the data are security and scalability. The methods integral to data applications have a strong impact on scalability: it is observed that faster scaling of data on the processor adds processing elements to the system. Therefore, it is necessary to address the challenges linked to processors in relation to process visualization and scalability.
Abdel Rahman Alzoubaidi, Mutasem Alzoubaidi, Ismaiel Abu Mahfouz, Taha Alkhamis, Fida Fuad Salim Al-​asali, Mohammad Alzoubaidi, Avera Medical Group Pulmonary & Sleep Medicine Sioux Falls
Azerbaijan Journal of High Performance Computing, Volume 4, pp 29-38;

Currently, universities face rising demands to apply the remarkable recent developments in computer technology that help students achieve the skills necessary for developing applied critical thinking in the context of an online society. In medical and engineering subjects, practical learning and education scenarios are crucial to attaining a set of competencies and applied skills. These recent developments allow sharing and resource allocation, which brings savings and maximizes utilization, and therefore offer centralized management, increased security, and scalability. This paper describes the implementation of Virtual Desktop Infrastructure (VDI) to access virtual laboratories, bringing efficient use of resources as one of Al Balqa Applied University's (BAU) private cloud services. Desktop virtualization implements the sharing of capabilities utilizing legacy machines, which reduces infrastructure costs and introduces increased security, mobility, scalability, agility, and high availability. BAU uses the service extensively to facilitate on- and off-campus learning, teaching, and administrative activities, allowing staff and students to continue performing their work and education functions remotely to cope with the COVID-19 pandemic.
Vladislav Li, Georgios Amponis, Jean-Christophe Nebel, Vasileios Argyriou, Thomas Lagkas, Panagiotis Sarigiannidis
Azerbaijan Journal of High Performance Computing, Volume 4, pp 15-28;

Developments in the field of neural networks and deep learning, together with increases in computing systems' capacity, have allowed a significant performance boost for scene semantic information extraction algorithms and their respective mechanisms. The work presented in this paper investigates the performance of various object classification and recognition frameworks and proposes a novel framework that incorporates super-resolution as a preprocessing method, along with YOLO/Retina as the deep neural network component. The resulting scene analysis framework was fine-tuned and benchmarked using the COCO dataset, with encouraging results. The presented framework can potentially be utilized not only in still-image recognition scenarios but also in video processing.
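The pipeline shape, upscale first, then detect, can be sketched as follows. This is a toy stand-in: nearest-neighbour upscaling replaces a learned super-resolution model, and `detector` is an assumed callable standing in for a YOLO/Retina network, neither of which the abstract specifies in code form.

```python
import numpy as np

def naive_upscale(img, factor=2):
    """Nearest-neighbour upscaling as a stand-in for the
    super-resolution preprocessing stage; a real pipeline would
    run a trained SR model here."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def preprocess_then_detect(img, detector, factor=2):
    """Super-resolve (here: naively upscale) the input, then hand
    it to the detection component (an assumed callable)."""
    return detector(naive_upscale(img, factor))
```

The design point the framework exploits is that the detector never sees the low-resolution input directly; every frame passes through the enhancement stage first.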
Romit S. Beed, St. Xavier’s College (Autonomous), Ankita Sarkar, Raya Sinha, Deboshruti Dasgupta
Azerbaijan Journal of High Performance Computing, Volume 3, pp 255-268;

Shelf space allocation has always been a crucial issue for any retail store, as space is a limited resource. This work proposes a model that uses a hyper-heuristic approach to allocate products to shelves so as to maximize the retailer's profit; it concentrates on providing a solution specifically for a consumer packaged goods store. Multiple conflicting objectives and constraints influence the profit. The consequence is a non-linear programming model with a complex objective function, which is solved using multiple neighborhood approaches with simulated annealing, a useful tool for solving complex combinatorial optimization problems. A detailed analysis of the proposed technique of annealing and reheating reveals its effectiveness for profit maximization in the shelf space allocation problem. Various simulated annealing parameters are studied in this paper, providing optimal values for maximizing profit.
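The simulated-annealing core of such an approach can be sketched generically. This is a minimal sketch under stated assumptions: `profit` scores an allocation, `neighbour` perturbs one (e.g. moving a shelf unit between products), and the temperature schedule and parameters are illustrative, not the tuned values from the paper.

```python
import math
import random

def anneal(profit, initial, neighbour, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated-annealing loop: worse moves are accepted
    with probability exp(delta / T), and T decays geometrically."""
    rng = random.Random(seed)
    cur, best = initial, initial
    t = t0
    for _ in range(steps):
        cand = neighbour(cur, rng)
        delta = profit(cand) - profit(cur)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            cur = cand
            if profit(cur) > profit(best):
                best = cur
        t = max(t * cooling, 1e-9)
    return best
```

Reheating, which the paper analyzes, would correspond to resetting `t` upward partway through the loop so the search can escape local optima late in the run.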
Zeinab Sohrabi, Shahed University, Ehsan Mousavi Khaneghah
Azerbaijan Journal of High Performance Computing, Volume 3, pp 151-163;

Virtual machine-based process migration mechanisms have the potential to be used in distributed exascale systems due to their ability to manage process execution and support environments with heterogeneous computational units. The ability to reduce process suspension time and use the concept of live process migration makes it possible to use this mechanism to transfer processes in distributed exascale systems and thereby prevent the failure of related process activity. However, the performance function of a virtual machine-based process migration mechanism cannot manage dynamic and interactive events, the effects of these events on the mechanism's operation, or the change in the basic concept of system activity from the concept of the process to the concept of global activity. This paper examines the challenges that dynamic and interactive events pose to virtual machine-based process migration by analyzing the VM-based migrator's performance function.
Mausumi Das Nath, St. Xavier’s College (Autonomous), Tapalina Bhattasali
Azerbaijan Journal of High Performance Computing, Volume 3, pp 196-206;

Due to the enormous usage of the Internet, users share resources and exchange voluminous amounts of data. This increases the risk of data theft and other types of attacks. Network security plays a vital role in protecting the electronic exchange of data and attempts to avoid disruption of finances or services due to unknown proliferations in the network. Intrusion Detection Systems (IDS) are commonly used to detect such unknown attacks and unauthorized access in a network. Researchers have put forward many approaches, ranging from traditional statistical methods to Artificial Intelligence (AI) based techniques, which have shown significantly satisfactory results for intrusion detection. AI-based techniques have gained an edge over statistical techniques in the research community due to their enormous benefits: procedures can be designed to display behavior learned from previous experiences. Machine learning algorithms are used to analyze abnormal instances in a particular network, and supervised learning is essential for training on and analyzing abnormal behavior. In this paper, we propose a model combining Naïve Bayes and SVM (Support Vector Machine) to detect anomalies, together with an ensemble approach to compensate for each detector's weaknesses and to eliminate poor detection results.
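The ensemble step, combining two detectors so that one compensates for the other's weaknesses, can be sketched as a weighted soft vote. The weights, threshold, and flow representation below are illustrative assumptions; the abstract does not specify how the paper fuses the two models' outputs.

```python
def ensemble_predict(nb_prob, svm_prob, w_nb=0.5, w_svm=0.5, threshold=0.5):
    """Weighted soft vote over the two detectors' attack
    probabilities (weights and threshold are illustrative)."""
    score = w_nb * nb_prob + w_svm * svm_prob
    return ('attack' if score >= threshold else 'normal'), score

def detect(flows, nb_model, svm_model):
    """Run both (assumed) trained models over network flows and
    combine their per-flow probabilities."""
    return [ensemble_predict(nb_model(f), svm_model(f))[0] for f in flows]
```

A soft vote retains each model's confidence rather than only its hard label, which is typically what lets an ensemble suppress one member's poor detections.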
Siddhartha Roy, Calcutta University
Azerbaijan Journal of High Performance Computing, Volume 3, pp 234-244;

In the last few years, Automatic Number Plate Recognition (ANPR) systems have become widely used for security and safety, as well as for commercial aspects such as parking access control, legal action for red-light violations, highway speed detection, and stolen vehicle detection. The license plate of any vehicle contains a number of characters that can be recognized by computer, and each country in the world has specific license plate characteristics. Due to rapid development in the information systems field, the previous manual process of writing license plate numbers into a database has been replaced by special intelligent devices operating in a real-time environment. Several approaches and techniques have been exploited to achieve better system accuracy and real-time execution. ANPR is the process of recognizing number plates using Optical Character Recognition (OCR) on images. This paper proposes a deep learning-based approach to detect and identify Indian number plates automatically, based on new computer vision algorithms for both number plate detection and character segmentation. Training requires many images to obtain high accuracy. Initially, we developed a training set database by training on different segmented characters. Several tests were run with varying epoch values to observe the change in accuracy. The accuracy exceeds 95%, an acceptable value compared to related works, which is quite satisfactory, and the system recognizes even blurred number plates.
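The character-segmentation step that precedes per-character OCR is commonly done by column projection on a binarized plate. The sketch below is a minimal, assumed version of that generic technique, not the paper's specific algorithm: columns with no ink mark the cut points between characters.

```python
import numpy as np

def segment_characters(binary_plate):
    """Split a binarized plate image (1 = ink) into per-character
    slices by cutting at empty column gaps."""
    cols = binary_plate.sum(axis=0) > 0   # which columns contain ink
    segments, start = [], None
    for i, has_ink in enumerate(cols):
        if has_ink and start is None:
            start = i                      # character begins
        elif not has_ink and start is not None:
            segments.append(binary_plate[:, start:i])
            start = None                   # character ends at the gap
    if start is not None:                  # character touching the right edge
        segments.append(binary_plate[:, start:])
    return segments
```

Each returned slice would then be resized and fed to the trained character classifier; for blurred plates, the binarization threshold (not shown) does most of the work.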
Nigar Ismayilova, Azerbaijan State Oil and Industry University
Azerbaijan Journal of High Performance Computing, Volume 3, pp 190-195;

This paper examines the role of applying different artificial intelligence techniques for the implementation of load balancing in the dynamic environment of distributed multi-core computing systems. Several methods were investigated for optimizing the assignment of executing tasks to computing nodes after the occurrence of a dynamic and interactive event, when traditional discrete load balancing techniques are ineffective.
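As a baseline for the assignment problem described, the simplest dynamic strategy routes each arriving task to the currently least-loaded node. This greedy sketch is an illustrative baseline only (task costs and node loads are assumed scalars); the paper's AI-based techniques would replace the `min` selection with a learned policy.

```python
def assign_tasks(tasks, node_loads):
    """Greedy dynamic load balancing: each incoming (task, cost)
    pair goes to the currently least-loaded node, and that node's
    load is updated before the next task arrives."""
    loads = dict(node_loads)
    placement = {}
    for task, cost in tasks:
        node = min(loads, key=loads.get)
        placement[task] = node
        loads[node] += cost
    return placement, loads
```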
Shahed University, Ehsan Mousavi Khaneghah, Araz R. Aliev, Azerbaijan State University of Oil and Industry
Azerbaijan Journal of High Performance Computing, Volume 3, pp 164-180;

Resource discovery in exascale systems should support the occurrence of dynamic behavior in each element involved in the resource discovery process. The occurrence of dynamic and interactive events in the accountable computational element creates challenges in executing the activities related to resource discovery, such as continuing the response to a request, granting access rights, and allocating resources to the process. If dynamic and interactive events in the accountable computational element are not managed and controlled, the activities related to resource discovery will fail. In this paper, we first examine the concept and function of resource discovery in the accountable computational element. We then analyze the effects of dynamic and interactive events on the resource discovery function in the accountable computational element. The purpose of this paper is to analyze the use of traditional resource discovery in exascale distributed systems and to investigate the factors that should be considered in the resource discovery management function to make it applicable to exascale distributed systems.
Firuza Tahmazli-Khaligova, Azerbaijan State Oil and Industry University
Azerbaijan Journal of High Performance Computing, Volume 3, pp 245-254;

In a traditional High Performance Computing system, it is possible to process a huge data volume, and the nature of events in classic High Performance Computing is static. A distributed exascale system, however, has a different nature: processing big data in a distributed exascale system raises a new challenge, since the dynamic and interactive character of such a system changes process status and system elements. This paper discusses the challenge posed by the big data attributes, volume, velocity, and variety, and how they influence the dynamic and interactive nature of a distributed exascale system. To investigate the effect of the dynamic and interactive nature of exascale systems on big data computing, this research suggests a Markov chain model. The model constructs the transition matrix, which identifies system status and memory sharing, and lets us analyze the convergence of the two systems. As a result, both systems are explored in terms of their influence on each other.
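The Markov-chain machinery involved, a transition matrix over system states and a convergence check, can be sketched in NumPy. The 2-state matrix in the usage below is a made-up example, not the paper's actual system states.

```python
import numpy as np

def stationary(P, steps=200):
    """Iterate an initial uniform distribution under the transition
    matrix P until it empirically stops changing -- the convergence
    analysis a transition-matrix model supports."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(steps):
        nxt = pi @ P
        if np.allclose(nxt, pi, atol=1e-12):
            break
        pi = nxt
    return pi
```

For a row-stochastic `P`, the returned vector approximates the long-run fraction of time the system spends in each state, which is what makes the two systems' behavior comparable.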
Samir Kuliev, Azerbaijan State Oil and Industry University
Azerbaijan Journal of High Performance Computing, Volume 3, pp 207-222;

The paper studies the problem of synthesizing controls of lumped sources for an object with distributed parameters, based on discrete observation of the phase state at specific points of the object. We propose an approach in which the whole phase space at the observed points is preliminarily divided in some way into given subsets (zones). The synthesized controls are selected from the class of piecewise-constant functions, and their current values are determined by the subset of the phase space containing the current states of the object at the observed points; on each such subset the controls take constant values. Such synthesized controls are called zonal. We give a numerical technique for obtaining optimal values of zonal controls using efficient first-order optimization methods. For this purpose, we derive formulas for the gradient of the objective function in the space of zonal controls.
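The structure of a zonal control, and of a first-order search over its constant values, can be sketched as follows. The zone boundaries, the toy objective, and the finite-difference gradient are illustrative assumptions; the paper derives analytic gradient formulas rather than using finite differences.

```python
def make_zonal_control(boundaries, values):
    """Piecewise-constant (zonal) feedback: the observed state x is
    mapped to a zone by the sorted `boundaries`, and the control
    takes that zone's constant value."""
    def control(x):
        zone = sum(1 for b in boundaries if x >= b)
        return values[zone]
    return control

def grad_estimate(objective, values, eps=1e-6):
    """Finite-difference gradient of the objective with respect to
    the zonal control values (a stand-in for the paper's analytic
    gradient formulas)."""
    g = []
    for i in range(len(values)):
        v = list(values)
        v[i] += eps
        g.append((objective(v) - objective(values)) / eps)
    return g
```

With the gradient in hand, any first-order method (gradient descent, conjugate gradients) can update the zone values directly, since the search space is just the finite vector of constants.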
Tapalina Bhattasali, St. Xavier’s College (Autonomous)
Azerbaijan Journal of High Performance Computing, Volume 3, pp 181-189;

Wireless Geo-Sensor Network (GEONET) is suitable for critical applications in hostile environments due to its flexibility of deployment. However, low-power geo-sensor nodes are easily compromised by security threats such as battery exhaustion attacks, which may give rise to unavoidable circumstances. In this type of attack, the intruder forcefully prevents legitimate sensor nodes from entering a low-power sleep state, so that the compromised sensor nodes' battery power drains out and they stop working. Due to sensor nodes' limited capability, it is complicated to protect a sensor node from this type of attack, which appears to be innocent interaction. This paper proposes a secure GEONET model (SEGNET) based on a dynamic load distribution mechanism for a heterogeneous environment. It implements a hybrid detection approach using three modules, for anomaly detection, intrusion confirmation, and decision making, to reduce the probability of false detection.
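The first two of the three modules can be sketched in a simplified form: an anomaly detector that flags nodes kept awake abnormally long (the battery-exhaustion symptom), and a confirmation stage that requires the flag to persist before acting. The threshold, window size, and data shapes are illustrative assumptions, not SEGNET's actual parameters.

```python
def battery_exhaustion_suspects(awake_ratios, threshold=0.8):
    """Anomaly-detection module (simplified): flag nodes whose
    fraction of time spent awake exceeds a threshold, since a node
    prevented from sleeping is the battery-exhaustion symptom."""
    return [n for n, r in awake_ratios.items() if r > threshold]

def confirm(history, node, window=3):
    """Intrusion-confirmation module: only confirm a suspect that
    has been flagged in `window` consecutive observation rounds,
    reducing the probability of false detection."""
    recent = history[node][-window:]
    return len(recent) == window and all(recent)
```

Splitting detection from confirmation is what lets a momentary traffic burst be flagged without being treated as an attack; only a sustained anomaly reaches the decision-making module.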
Etibar Vazirov, Azerbaijan State Oil and Industry University
Azerbaijan Journal of High Performance Computing, Volume 3, pp 223-233;

The combination of heterogeneous resources within exascale architectures promises to deliver revolutionary compute capability for scientific applications. Such systems expose data about the status and progress of jobs, the hardware and software, and memory and network resource usage. This provisional information has irreplaceable value for learning to predict where applications may face dynamic and interactive behavior when resource failures occur. In this paper, we propose building a scalable framework that uses performance information collected from all these sources. It will analyze HPC applications in order to develop new statistical footprints of resource usage. Besides, this framework should predict the reasons for failure and provide new capabilities to recover from application failures. Applying HPC capabilities at exascale raises the possibility of substantial scientific unproductiveness in computational procedures. In that sense, the integration of machine learning into exascale computations is an encouraging way to obtain large performance gains and introduces an opportunity to leap a generation of simulation improvements.
Farid Jafarov, Azerbaijan State Oil and Industry University
Azerbaijan Journal of High Performance Computing, Volume 3, pp 139-146;

Pakpoom Mookdarsanit, Chandrakasem Rajabhat University, Lawankorn Mookdarsanit
Azerbaijan Journal of High Performance Computing, Volume 3, pp 75-93;

Nguyen Ha Huy Cuong, Vietnam-Korea University of Information and Communication Technology, Nguyen Trong Tung, Nguyen Van Hong Quang, Nguyen Nhat Tan, Ngo Quoc Huy, Trinh Cong Duy, The University of Da Nang – University of Foreign Language Studies, Department of Science and Technology of Quangnam Province, HALOVI Information Technology JSC, et al.
Azerbaijan Journal of High Performance Computing, Volume 3, pp 54-63;

Mohammad Saeid Safaei, Shamsollah Ghanbari, Zhanat Umarova, Zhalgasbek Iztayev, South Kazakhstan state university
Azerbaijan Journal of High Performance Computing, Volume 3, pp 3-14;

Muhammad Bayat, MajdRayan Intelligent Computing, Hasan Hani, University of Qom
Azerbaijan Journal of High Performance Computing, Volume 3, pp 15-31;

Snehal R. Rajput, Pandit Deendayal Petroleum University, Mehul S. Raval
Azerbaijan Journal of High Performance Computing, Volume 3, pp 119-138;
