Radio Electronics, Computer Science, Control

Journal Information
ISSN / EISSN : 1607-3274 / 2313-688X
Current Publisher: Zaporizhzhia National Technical University (10.15588)
Total articles ≅ 682
Current Coverage
INSPEC
ESCI
DOAJ
Archived in
SHERPA/ROMEO

Latest articles in this journal

S. M. Babchuk, T. V. Humeniuk, I. T. Romaniv
Radio Electronics, Computer Science, Control, Volume 1, pp 46-56; doi:10.15588/1607-3274-2021-1-5

Abstract:
Context. High-performance computing systems are needed to solve many scientific and complex applied problems. Previously, true parallel data processing was available only on supercomputers, which are scarce and difficult to access. One current way to address this problem is to build small, inexpensive clusters from Raspberry Pi single-board computers. Objective. The goal of the work is to create a complex (composite) efficiency criterion for a cluster system that adequately characterizes its operation, and to find how the performance of a Raspberry Pi 3B+ cluster depends on the number of boards under different cooling systems. Method. For the analysis of small cluster computer systems, a complex efficiency criterion is proposed that takes into account the overall performance of the cluster, the performance of a single computing element, the power consumption of the cluster, the power consumption per computing element, the cost of computing 1 Gflops, and the total cost of the cluster system. Results. The developed complex efficiency criterion was used to create an experimental cluster system based on Raspberry Pi 3B+ single-board computers. Mathematical models were also developed that describe how the performance of a small Raspberry Pi 3B+ cluster depends on the number of boards under different cooling systems. Conclusions. The experiments confirmed the expediency of using the developed complex efficiency criterion and allow recommending it for practical use when creating small cluster systems. Prospects for further research lie in determining the weights of the constituent elements of the criterion and in the experimental study of the proposed weights.
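As an illustration of how the six quantities listed in the Method section could be folded into a single score, the sketch below combines them with a weighted sum. The weights, the sign conventions, and the example numbers are assumptions made only for this illustration; the paper explicitly leaves the weights of the criterion's components to future research.

```python
# Hypothetical sketch of a composite cluster-efficiency score built from the six
# quantities named in the abstract. Weights and example values are assumptions,
# not the authors' published criterion.

def cluster_efficiency(total_gflops, gflops_per_node, power_w, power_per_node_w,
                       cost_per_gflops, total_cost, weights=None):
    """Higher score = better: performance terms reward, power and cost terms penalize."""
    w = weights or {"perf": 1.0, "perf_node": 1.0, "power": 1.0,
                    "power_node": 1.0, "cost_gflops": 1.0, "cost_total": 1.0}
    return (w["perf"] * total_gflops
            + w["perf_node"] * gflops_per_node
            - w["power"] * power_w
            - w["power_node"] * power_per_node_w
            - w["cost_gflops"] * cost_per_gflops
            - w["cost_total"] * total_cost)

# Example: compare two hypothetical Raspberry Pi 3B+ clusters (numbers invented).
passive_cooling = cluster_efficiency(24.0, 3.0, 48.0, 6.0, 12.5, 300.0)
active_cooling = cluster_efficiency(27.0, 3.4, 60.0, 7.5, 12.0, 325.0)
print(passive_cooling, active_cooling)
```
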
A. Ya. Bomba, I. P. Moroz, M. V. Boichura
Radio Electronics, Computer Science, Control, Volume 1, pp 14-28; doi:10.15588/1607-3274-2021-1-2

Abstract:
Context. P-i-n diodes are widely used in microwave technology to control the electromagnetic field. The field is controlled by forming an electron-hole plasma in the intrinsic-semiconductor region (i-region) under the influence of a control current. The development of control devices based on p-i-n diodes has led to integrated p-i-n structures of various types whose characteristics (for example, switching speed and switched power level) exceed those of bulk diodes. The properties of p-i-n structures are determined by a number of processes: diffusion-drift charge transfer, recombination-generation, thermal and injection processes, and so on. These processes should be taken into account in the mathematical model of a computer-aided design system for microwave control devices. Their combined treatment leads to complex problems, one of which is optimizing the shape, geometric dimensions, and placement of the injecting contacts (the active region). Objective. The goal of the work is to develop a mathematical model and corresponding software for the interaction of microwaves with the electron-hole plasma in the active region of surface-oriented integrated p-i-n structures with ribbon-type free-form contacts, in order to optimize the shape and geometric dimensions of the active region. Method. The main idea of the developed algorithm is to use the conformal mapping method to bring the physical domain of the problem to a canonical form, and then to solve the interior boundary value problems for the ambipolar diffusion equation and the wave equation in this domain by numerical-analytical methods (the finite difference method and the partial domains method with projection boundary conditions, similar to the Galerkin method). The optimization algorithm is based on a staged solution of the following problems (the shape and geometric dimensions of the active region are specified at each stage): a computational grid is constructed for the physical regions of the problem, the carrier concentration distribution in the active region is determined, and the transmitted-energy coefficient of the system under study is calculated and used in the proposed optimization functional. The extreme values of the functional are found by uniform search. Results. The proposed mathematical model and the corresponding algorithm for optimizing the shape and geometric dimensions of the active region (i-region) of integrated surface-oriented p-i-n structures extend the tool base for designing semiconductor microwave circuits (for example, similarly to CST MICROWAVE STUDIO). Conclusions. An algorithm has been developed for optimizing the shape and geometric dimensions of the active region of integrated surface-oriented p-i-n structures with in-depth contacts intended for switching millimeter-wave electromagnetic signals. The universality of the algorithm is ensured by the method of conformal transformations of spatial domains. The application of the proposed algorithm to the search for optimal sizes of wedge-shaped (in cross-section) contacts of silicon structures is considered as an example.
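The outer optimization loop described in the Method section is a uniform (exhaustive grid) search over the contact geometry. The sketch below illustrates only that loop; the toy `transmission_functional` is a stand-in for the full electrodynamic computation (conformal mapping, ambipolar diffusion and wave equations), and its peak location and the grid ranges are invented for the example.

```python
import itertools
import math

def transmission_functional(width_um, depth_um):
    # Toy stand-in for the paper's functional: in the real algorithm this value
    # would come from the conformal mapping and the solution of the ambipolar
    # diffusion and wave equations for the given contact geometry.
    return math.exp(-((width_um - 3.0) ** 2 + (depth_um - 1.5) ** 2))

def uniform_search(widths, depths):
    """Exhaustive (uniform) grid search for the geometry that maximizes the functional."""
    best = max(itertools.product(widths, depths),
               key=lambda geometry: transmission_functional(*geometry))
    return best, transmission_functional(*best)

geometry, value = uniform_search([0.5 * i for i in range(1, 13)],
                                 [0.25 * j for j in range(1, 13)])
print("best contact geometry (width, depth) in um:", geometry)
```
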
D. O. Progonov
Radio Electronics, Computer Science, Control, Volume 1, pp 184-193; doi:10.15588/1607-3274-2021-1-18

Abstract:
Context. The problem of protecting sensitive information during data transmission in communication systems is considered, in particular the reliable detection of stego images formed with advanced embedding methods. The object of research is digital image steganalysis of adaptive steganographic methods. Objective. The goal of the work is a performance analysis of statistical stegdetectors for adaptive embedding methods in the case of preliminary noising of the analyzed image with thermal and shot noise. Method. An image pre-processing (calibration) method is proposed to improve the stego-to-cover ratio for the state-of-the-art adaptive embedding methods HUGO, MG, and MiPOD. The method amplifies the negligible changes of the cover image caused by message hiding by adding Gaussian and Poisson noise. The former models the thermal noise of a charge-coupled device (CCD) image sensor during data acquisition; the latter models the shot noise that originates from the stochastic emission of electrons when photons hit CCD elements. In the research, the parameters of the thermal noise were estimated with a two-dimensional Wiener filter, while a sliding window of 5×5 pixels was used to estimate the shot-noise parameters. Results. Dependencies of the detection error on the cover image payload were obtained for the advanced HUGO, MG, and MiPOD embedding methods. The results are presented for image pre-noising with both Gaussian and Poisson noise and for various feature pre-processing methods. Conclusions. The conducted experiments confirmed the effectiveness of the proposed approach to image calibration with Poisson noise. The obtained results allow recommending linearly transformed features for improving stegdetector performance on natural images. Prospects for further research include investigating the use of special noises, such as fractal noise, to improve the stego-to-cover ratio for advanced embedding methods.
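The calibration step can be illustrated with a short sketch that adds a Gaussian component for thermal noise and a Poisson component for shot noise before feature extraction. The fixed `sigma_thermal` value and the clipping to an 8-bit range are assumptions for the illustration; in the paper the thermal-noise parameters are estimated with a 2-D Wiener filter and the shot-noise parameters with a 5×5 sliding window.

```python
import numpy as np

def calibrate_with_noise(image, sigma_thermal=2.0, rng=None):
    """Hypothetical sketch of the pre-noising (calibration) step: add Gaussian noise
    to model CCD thermal noise and a Poisson component to model shot noise, so that
    the subtle changes introduced by adaptive embedding become easier to detect."""
    rng = rng or np.random.default_rng(0)
    img = image.astype(np.float64)
    thermal = rng.normal(0.0, sigma_thermal, size=img.shape)            # Gaussian (thermal) noise
    shot = rng.poisson(np.clip(img, 0, None)).astype(np.float64) - img  # zero-mean Poisson (shot) noise
    return np.clip(img + thermal + shot, 0, 255).astype(np.uint8)

# Usage: noisy = calibrate_with_noise(cover_or_stego_image_array) before feature extraction.
```
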
P. S. Nosov, V. V. Cherniavskyi, S. M. Zinchenko, I. S. Popovych, Ya. A. Nahrybelnyi, H. V. Nosova
Radio Electronics, Computer Science, Control, Volume 1, pp 208-223; doi:10.15588/1607-3274-2021-1-20

Abstract:
Context. The article introduces an approach for analyzing the reactions of a marine electronic navigation operator and for the automated identification of the likelihood of a negative impact of the human factor in ergatic control systems for sea transport. To this end, algorithms are proposed that provide information about the results of the operator's human-machine interaction in marine emergency situations while navigation operations of increasing complexity are carried out. Objective. The approach converts the feature space of the operator's actions into a logical-geometric space of p-adic systems, which makes it possible to identify the level of the operator's intellectual activity by automated means and to predict it dynamically in order to reduce marine emergency situations. Method. Within this approach, a method for transforming deterministic fragments of the operator's intellectual activity into p-adic structures is proposed for the automated identification of the segmented results of human-machine interaction. Principles such as specification, generalization, and transitions between different spaces of the operator's perception of the navigation situation are formally specified. Simulation modeling confirmed the feasibility of the proposed approach: on the basis of temporal identifiers, the individual structure of the operator's reactions was determined. The data obtained made it possible to forecast typical situations using automated multicriteria methods and tools, that is, to identify individual indicators of the dynamics of the operator's reactions in complex man-machine interaction. Results. To validate the proposed formal-algorithmic approach, an experiment was performed on the navigation simulator Navi Trainer 5000 (NTPRO 5000). Automated analysis of the experimental server and video data made it possible to identify deterministic operator actions in the form of metadata of the trajectory of the operator's reactions in the space of p-adic structures. The results of modeling with automated neural networks make it possible to identify the time series of the intellectual activity of the electronic marine navigation operator and therefore to predict further reactions with a high degree of reliability. Conclusions. The proposed formal research approach, combined with the developed automated means and the algorithmic and methodological suggestions, brings the problem of automated identification of the negative impact of the human factor of the electronic navigation operator to a new level. The efficiency of the proposed approach is confirmed by the results of automated processing of the experimental data and the forecasts built on them.
S. S. Shevelev
Radio Electronics, Computer Science, Control, Volume 1, pp 194-207; doi:10.15588/1607-3274-2021-1-19

Abstract:
Context. Modern general-purpose computers can implement any algorithm, but for certain problems they cannot compete with specialized computing modules in processing speed. Specialized devices have high performance, effectively solve array-processing and artificial-intelligence problems, and are used as control devices. Using specialized microprocessor modules that process character strings, logical values, and numerical values represented as integers and real numbers makes it possible to increase the speed of arithmetic operations by exploiting parallelism in data processing. Objective. To develop principles for constructing microprocessor modules of a modular computing system with a reconfigurable structure: an arithmetic-symbolic processor, specialized computing devices, and switching systems capable of configuring the microprocessors and specialized computing modules into a multi-pipeline structure to increase the speed of arithmetic and logical operations, as well as high-speed algorithms for designing specialized processors-accelerators for symbol processing; and to develop algorithms and structural and functional diagrams of specialized mathematical modules that perform arithmetic operations in direct codes on neural-like elements, together with systems for decentralized control of the operation of blocks. Method. An information graph of the computational process of a modular system with a reconfigurable structure was built. Structural and functional diagrams and algorithms were developed for constructing specialized modules that perform arithmetic and logical operations, search operations, and replacement of occurrences in processed words. Software was developed for simulating the operation of the arithmetic-symbolic processor, the specialized computing modules, and the switching systems. Results. A block diagram of a reconfigurable modular computing system was developed. The system consists of compatible functional modules, is capable of static and dynamic reconfiguration, and has a parallel structure for connecting the processor and computing modules through interface channels. It comprises an arithmetic-symbolic processor, specialized computing modules, and switching systems, and performs specific tasks of symbolic information processing and arithmetic and logical operations. Conclusions. The architecture of reconfigurable computing systems can change dynamically during operation, which makes it possible to adapt the architecture to the structure of the problem being solved and to create problem-oriented computers whose structure corresponds to that of the problem. The main computing elements of such systems are not general-purpose microprocessors but programmable logic integrated circuits combined by high-speed interfaces into a single computing field. Reconfigurable multi-pipeline computing systems based on such fields are an effective tool for streaming information processing and control problems.
P. Kravets, V. Lytvyn, V. Vysotska
Radio Electronics, Computer Science, Control, Volume 1, pp 172-183; doi:10.15588/1607-3274-2021-1-17

Abstract:
Context. In today’s information society, with advanced telecommunications through mobile devices and computer networks, it is important to form various virtual organizations and communities. Such virtual associations of people by professional or other interests are designed to solve diverse tasks quickly: carrying out project tasks, creating startups to attract investors, network marketing, distance learning, solving complex problems in science, economics and public administration, building various Internet services, discussing political and social processes, and so on. Objective. The objective of the study is to develop an adaptive Markov recurrent method, based on stochastic approximation of a modified complementary slackness condition valid at Nash equilibrium points, for solving the problem of game coverage of projects. Method. A multi-agent game model is developed for forming virtual teams of project executors on the basis of libraries of subject ontologies. The competencies and abilities of agents required to carry out the projects are specified by sets of ontologies. At discrete moments of time, intelligent agents randomly, simultaneously, and independently choose one of the projects. Agents that have chosen the same project form the current team of its executors. For each team, a current penalty is calculated for insufficient coverage of the required competencies by the combined capabilities of the agents. This penalty is used to adaptively recalculate the players' mixed strategies: the probabilities of choosing those teams whose current composition reduced the penalty for non-coverage of the ontologies are increased. During the repeated stochastic game, the agents form vectors of mixed strategies that minimize the average penalties for non-coverage of the projects. Results. To solve the problem of game coverage of projects, an adaptive Markov recurrent method based on stochastic approximation of a modified complementary slackness condition, valid at Nash equilibrium points, was developed. Conclusions. Computer simulation confirmed that the stochastic game model can be used to form teams of project executors with the necessary ontological support under uncertainty. The convergence of the game method is ensured by compliance with the fundamental conditions and constraints of stochastic optimization. The reliability of the experimental studies is confirmed by the repeatability of the results obtained for different sequences of random variables.
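A simplified sketch of the penalty-driven strategy update is given below. It illustrates only the general stochastic-approximation idea (push down the probability of a choice that incurred a penalty, renormalize, and decrease the step over time); it is not the authors' exact Markov recurrent method, and the penalty value, step schedule, and projection used here are assumptions.

```python
import numpy as np

def update_mixed_strategy(p, chosen, penalty, step):
    """Simplified penalty-driven update of one agent's mixed strategy over projects:
    the probability of the chosen project is reduced in proportion to the coverage
    penalty, then the vector is projected back onto the probability simplex by
    clipping and renormalizing."""
    p = p.copy()
    p[chosen] -= step * penalty
    p = np.clip(p, 1e-6, None)
    return p / p.sum()

# Toy usage: 3 projects; the agent keeps picking project 1 and the team is fined 0.4.
p = np.array([1 / 3, 1 / 3, 1 / 3])
for t in range(1, 101):
    p = update_mixed_strategy(p, chosen=1, penalty=0.4, step=1.0 / t)  # decreasing step
print(p)  # the probability of project 1 shrinks over the repeated plays
```
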
L. A. Kleiman, V. I. Freyman
Radio Electronics, Computer Science, Control, Volume 1, pp 158-171; doi:10.15588/1607-3274-2021-1-16

Abstract:
Context. In the modern world, information management systems have become widespread; they make it possible to automate the technological processes of enterprises of various sizes. Many information management systems include wireless and autonomous elements. Autonomy here means the ability of system elements to function for a certain time without an additional energy supply. For this reason, the battery life of a system element becomes one of the most important operational-reliability parameters. One of the main tools for improving the reliability and fault tolerance of information management system elements is a modern diagnostic system. Objective. The aim of the work is to develop a method for increasing the reliability of the autonomous elements of information management systems. This includes creating a model of an information management system and an algorithm for the reasoned redistribution of diagnostic functions, as well as a software implementation of the developed algorithm that confirms its better reliability indicators compared with other algorithms. Methods. The basic model was the Preparata-Metze-Chien (PMC) model. On its basis, a new system model was built, including a structural and logical description of the elements and a definition of how they interact. The elements were classified by the criticality of the functions they perform in the system. On the basis of the developed model and the description of the elements, an algorithm was developed for the reasoned redistribution of the diagnostic load, which made it possible to reduce the average energy consumption of the elements and thereby improve the reliability indicators. A software implementation of the developed algorithm was created to evaluate its advantages numerically, and the developed and existing algorithms were compared. Results. A model of an information management system with an integrated test diagnostics subsystem has been developed; this subsystem implements algorithms for redistributing the diagnostic load. To weigh the characteristics taken into account, a linear criterion was chosen as the most studied and fastest to apply. A software model implementing the developed algorithm and allowing its comparison with existing algorithms was developed. The software model was studied with various parameters, and, based on the simulation results, conclusions were drawn about possible improvements of the algorithm and directions for further research were formulated. Conclusions. The developed algorithm makes it possible to increase such a reliability characteristic of the elements of the information and control system as the mean time between failures, by increasing the operating time of autonomous elements without recharging. Software modeling of the developed and existing algorithms confirmed the advantages of the former, and theoretical possibilities for its improvement were formulated.
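The sketch below illustrates the general idea of redistributing the diagnostic load with a linear criterion: elements with more remaining charge and less critical functions are preferred as test performers. The attribute names, the weights, and the greedy assignment are assumptions for illustration only, not the authors' algorithm built on the PMC model.

```python
# Hypothetical sketch of diagnostic-load redistribution. The linear criterion
# weights (w_battery, w_criticality) and the element attributes are assumptions.

def linear_score(element, w_battery=0.7, w_criticality=0.3):
    # Higher score = better candidate to carry extra diagnostic tests.
    return w_battery * element["battery"] - w_criticality * element["criticality"]

def assign_tests(elements, tests_needed):
    """Greedily hand out `tests_needed` test tasks, preferring elements with a high
    remaining charge and a low criticality of their system function."""
    ranked = sorted(elements, key=linear_score, reverse=True)
    assignment = {e["id"]: 0 for e in elements}
    for i in range(tests_needed):
        assignment[ranked[i % len(ranked)]["id"]] += 1
    return assignment

nodes = [{"id": "A", "battery": 0.9, "criticality": 0.2},
         {"id": "B", "battery": 0.4, "criticality": 0.8},
         {"id": "C", "battery": 0.7, "criticality": 0.5}]
print(assign_tests(nodes, tests_needed=5))  # node A carries most of the load
```
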
G. M. Babeniuk
Radio Electronics, Computer Science, Control, Volume 1, pp 144-157; doi:10.15588/1607-3274-2021-1-15

Abstract:
Context. The main purpose of a correlation-extremal navigation system is to determine coordinates when no Global Positioning System signal is available; consequently, high-accuracy maps, as the main source of information for determining coordinates, are very important. A magnetic-field map used as the main source of information can contain erroneous values: for example, inadequate equipment or the human factor can cause measurement errors. Objective. To create high-accuracy maps, this work proposes improving the process of building magnetic-field maps. It presents delay-tolerant networking as an additional approach for data transmission between a magnetic observatory and a magnetic station, together with an improvement of this approach. Method. An improved Dijkstra algorithm, combined with the Ford-Fulkerson algorithm, is presented for finding the path with the minimum capacity losses, the earliest delivery time, and the maximum bit rate in the presence of overlapping contacts. Current delay-tolerant networking routing protocols do not take the overlap factor and the resulting capacity losses into account, which leads to significant problems. Results. For the first time, an algorithm is presented that chooses the route guaranteeing the minimum capacity losses, the earliest delivery time, and the maximum bit rate in a delay-tolerant network with overlapping contacts, and that increases the probability of successful data transmission between magnetic stations and magnetic observatories. Conclusions. To perform high-accuracy measurements of the magnetic field, survey teams place their magnetometers in remote areas to avoid environmental influence on the measurements. Since the magnitude of the magnetic field varies with temperature, proximity to the ocean, latitude (diurnal variation of the magnetic field), and magnetic storms, a magnetic station periodically adjusts its measurements using reference values of the magnetic field requested from a magnetic observatory. The problem with this approach is that remote areas usually have no network coverage (no Internet), so the adjustment of measurements is impossible. To make the adjustment possible and thereby improve the accuracy of magnetic maps, this work proposes using delay-tolerant networking, which delivers Internet access to different areas around the world, and presents an improvement that makes this approach even more effective. The results are published for the first time.
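One ingredient of the routing decision, the maximum-bit-rate (widest-path) search over a contact graph, can be sketched with a Dijkstra-style procedure as below. The contact graph, the bit-rate values, and the assumption that overlap-induced capacity losses are already reflected in the effective rates are all illustrative; the paper's full algorithm also accounts for capacity losses and earliest delivery time and uses the Ford-Fulkerson algorithm.

```python
import heapq

def widest_path(graph, source, target):
    """Dijkstra-style search for the route with the maximum bottleneck bit rate.
    `graph[u]` maps a neighbour to the effective bit rate of that contact (assumed
    here to be already reduced by capacity lost to overlapping contacts)."""
    best = {source: float("inf")}
    heap = [(-float("inf"), source, [source])]
    while heap:
        neg_bw, node, path = heapq.heappop(heap)
        if node == target:
            return -neg_bw, path
        for neighbour, rate in graph.get(node, {}).items():
            bottleneck = min(-neg_bw, rate)
            if bottleneck > best.get(neighbour, 0):
                best[neighbour] = bottleneck
                heapq.heappush(heap, (-bottleneck, neighbour, path + [neighbour]))
    return 0, []

# Toy contact graph between a magnetic station and an observatory (rates invented).
contacts = {"station": {"relay1": 2.0, "relay2": 0.5},
            "relay1": {"observatory": 1.0},
            "relay2": {"observatory": 3.0}}
print(widest_path(contacts, "station", "observatory"))  # (1.0, ['station', 'relay1', 'observatory'])
```
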
M. A. Novotarskyi, S. G. Stirenko, Y. G. Gordienko, V. A. Kuzmych
Radio Electronics, Computer Science, Control, Volume 1, pp 136-143; doi:10.15588/1607-3274-2021-1-14

Abstract:
Context. Machine learning is one of the actively developing areas of data processing. Reinforcement learning is a class of machine learning methods in which the problem involves mapping a sequence of environment states to the agent’s actions. Significant progress in this area has been achieved with DQN algorithms, which became one of the first classes of stable algorithms for learning with deep neural networks. The main disadvantage of this approach is the rapid growth of RAM usage in real-world tasks; the approach proposed in this paper can partially solve this problem. Objective. The aim is to develop a method for forming the structure of, and the access to, a sparse distributed memory with increased information content, in order to improve reinforcement learning without additional memory. Method. A method is proposed for forming the structure of, and modifying, a sparse distributed memory that stores the actor’s previous transitions in the form of prototypes. The method increases the informativeness of the stored data and thereby improves the process of building a model of the studied process by intensifying the training of the deep neural network. The informativeness of the stored data is increased by the following sequence of actions. First, the new transition is compared with the last saved transition; for this comparison, the method introduces a rate estimate of the distance between transitions. If the distance between the new transition and the last saved one is smaller than a specified threshold, the new transition is written in place of the previous one without increasing the amount of memory. Otherwise, a new prototype is created in memory while the prototype that has been stored the longest is deleted. Results. The proposed method was studied on the popular “Water World” test problem. The results showed a 1.5-fold increase in the actor’s survival time in a hostile environment, achieved by increasing the informativeness of the stored data without increasing the amount of RAM. Conclusions. The proposed method of forming and modifying the structure of a sparse distributed memory increases the informativeness of the stored data and, as a result, improves the reinforcement learning parameters on the example of the “Water World” problem by increasing the accuracy of the model of the physical process represented by the deep neural network.
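The memory update rule described in the Method section can be sketched directly. The Euclidean distance and the capacity and threshold values below are assumptions (the paper introduces its own rate estimate for the distance between transitions), but the overwrite-or-append-and-evict logic follows the abstract.

```python
from collections import deque
import numpy as np

class PrototypeMemory:
    """Sketch of the sparse-distributed-memory update rule described in the abstract:
    a new transition either overwrites the most recently stored prototype (if it lies
    closer than a threshold, so memory does not grow) or becomes a new prototype,
    evicting the oldest one when the capacity is reached."""

    def __init__(self, capacity, threshold):
        self.prototypes = deque(maxlen=capacity)  # the oldest prototype is evicted automatically
        self.threshold = threshold

    def add(self, transition):
        transition = np.asarray(transition, dtype=np.float64)
        if self.prototypes and np.linalg.norm(transition - self.prototypes[-1]) < self.threshold:
            self.prototypes[-1] = transition   # replace the last prototype; memory size unchanged
        else:
            self.prototypes.append(transition) # new prototype; the oldest is dropped if memory is full

memory = PrototypeMemory(capacity=1000, threshold=0.1)
memory.add([0.1, 0.2, 0.3])
memory.add([0.1, 0.2, 0.31])   # close to the previous transition, so it overwrites it
print(len(memory.prototypes))  # 1
```
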
Ye. V. Bodyanskiy, A. Yu. Shafronenko, I. N. Klymova
Radio Electronics, Computer Science, Control, Volume 1, pp 97-104; doi:10.15588/1607-3274-2021-1-10

Abstract:
Context. In most clustering (classification without a teacher) tasks associated with real data processing, the initial information is usually distorted by anomalous outliers (noise) and gaps. It is clear that “classical” methods of artificial intelligence (both batch and online) are ineffective in this situation. The goal of the paper is to propose a procedure for fuzzy clustering of incomplete data using a credibilistic approach and a similarity measure of a special type. Objective. The goal of the work is credibilistic fuzzy clustering of distorted data based on credibility theory. Method. The proposed procedure of fuzzy clustering of incomplete data uses a credibilistic approach together with robust goal functions of a special type and similarity measures that are insensitive to outliers. It is designed to work both in batch mode and in a recurrent online version intended for Data Stream Mining problems, where data arrive for processing sequentially in real time. Results. The introduced methods are simple to implement numerically and are free from the drawbacks inherent in traditional probabilistic and possibilistic methods of fuzzy clustering of data distorted by anomalous outliers (noise) and gaps. Conclusions. The conducted experiments confirmed the effectiveness of the proposed methods of credibilistic fuzzy clustering of distorted data and allow recommending them for practical use in the automatic clustering of distorted data. The proposed method is intended for use in hybrid systems of computational intelligence and, above all, in the training of artificial neural networks and neuro-fuzzy systems, as well as in clustering and classification problems.
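As an illustration of how a distance can be computed for observations with gaps, the sketch below uses the standard partial-distance trick (average over the observed components, rescaled to the full dimensionality). This is not necessarily the authors' special similarity measure or their credibilistic membership update; it only shows one common way to make a clustering procedure tolerate missing values.

```python
import numpy as np

def partial_distance(x, centroid):
    """Partial distance between an observation with gaps (NaN marks a missing value)
    and a cluster centroid: sum the squared differences over the observed components
    only and rescale to the full dimensionality. Shown as a generic illustration of
    handling incomplete data, not as the paper's similarity measure."""
    x, centroid = np.asarray(x, float), np.asarray(centroid, float)
    observed = ~np.isnan(x)
    if not observed.any():
        return np.inf
    diff = x[observed] - centroid[observed]
    return len(x) * np.dot(diff, diff) / observed.sum()

print(partial_distance([1.0, np.nan, 3.0], [0.0, 5.0, 2.0]))  # uses only components 0 and 2
```
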