Results in Naukma Research Papers. Computer Science: 85
Naukma Research Papers. Computer Science, Volume 5, pp 97-107; https://doi.org/10.18523/2617-3808.2022.5.97-107

Abstract:
The article proposes the concept of a platform for the development, accumulation, and use of specialized applications – bots that automate functions related to informing, ordering, and order fulfillment, implementing multi-stage processes using the capabilities of social networks and messenger programs. Individual stages of these processes depend on various circumstances, the most important and influential being events and the features of participants, who are subscribers of the said social networks and users of the messengers. Differences in such features and circumstances affect the complexity, structure, and overall composition of the whole application, determining the entire end-to-end flow of the development process. Because of that, creating such applications requires thorough planning and a coherent, well-thought-out approach to the design work at the stages crucial to the whole multi-stage process. Based on these assumptions, a general approach to creating bots using formal models is described, including the use of state machines, logical models, and descriptions of business processes. Diagram specifications are built based on the analysis of business processes to facilitate the implementation of the proposed bot applications. Within the platform implementation plan, a practical implementation of the component is proposed that constructs the logic for processing user actions within a given business process in accordance with the diagram specification. An example of using this component to create a bot is described to illustrate the peculiarities of individual process stages, the implementation of bot applications, and the development flow as a whole. The development of a platform composed of such applications is envisioned.
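To make the state-machine idea concrete, here is a minimal sketch (not the paper's platform component) of a bot whose dialogue logic is driven by a declarative transition table; all states, events, and replies are hypothetical:

```python
# Minimal sketch: a bot whose dialogue logic is a declarative transition table,
# in the spirit of the state-machine specifications described above.
# All states, events, and replies below are invented for illustration.

TRANSITIONS = {
    ("start", "/order"):            ("choosing_item", "What would you like to order?"),
    ("choosing_item", "item_selected"): ("confirming", "Confirm your order?"),
    ("confirming", "yes"):          ("done", "Order placed, thank you!"),
    ("confirming", "no"):           ("start", "Order cancelled."),
}

def handle(state: str, event: str) -> tuple[str, str]:
    """Advance the dialogue: return (new_state, reply); stay put on unknown events."""
    if (state, event) in TRANSITIONS:
        return TRANSITIONS[(state, event)]
    return state, "Sorry, I did not understand that."

state = "start"
for event in ["/order", "item_selected", "yes"]:
    state, reply = handle(state, event)
    print(state, "->", reply)
```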
Naukma Research Papers. Computer Science, Volume 5, pp 79-84; https://doi.org/10.18523/2617-3808.2022.5.79-84

Abstract:
With the aim of identifying and developing gifted students, helping them choose a profession, and involving them in scientific research activities, the National Center “Junior Academy of Sciences of Ukraine” has initiated and annually holds a number of All-Ukrainian competitions of creative and intellectual direction: “Junior Erudite”, “Junior Researcher”, “Future of Ukraine”, “Ecoview” and many others. Among them, the most popular and most representative in terms of the composition of its participants is the “Contest-presentation of scientific research projects”. The competition takes place in several stages; about 100,000 high school students from all over Ukraine take part in it, of whom more than 1,000 of the best make it to the finals. The rules of the competition provide that a jury is created for each scientific section (of which there are 65) in which the corresponding stage of the competition is held. The members of the jury independently evaluate the research works of the schoolchildren. Winners are determined in each scientific section separately by the sum of points scored by participants in all sections of the program. The final result (rating score) of each participant is calculated taking into account the weight of each component of the factor-criterion model according to which the contestants’ achievements are evaluated; that is, the participants (alternatives) are in fact ranked according to a set of indicators of different importance, which have a hierarchical structure. To give jury members access to all the materials of the contestants’ research achievements, and to support their effective evaluation and the calculation of final results, the information and analytical platform (IAP) POLYHEDRON-Competition was created.
This article describes the resulting computer system – an information and analytical platform that supports the work of experts (jury members) in reviewing and evaluating research materials submitted for defense by participants of intellectual contests. The system is deployed on the basis of an interactive document, a variant of an ontology-controlled system, and its operation is illustrated on the example of the contest-presentation of scientific research projects.
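The factor-criterion scoring described above amounts to a weighted sum over criteria of different importance. A toy illustration (the weights and scores are invented, not taken from the competition rules):

```python
# Illustrative only: a weighted-sum rating of the kind described above, where
# each participant is scored on several criteria of different importance.
# Weights and scores are made up for the example.

weights = {"research_quality": 0.5, "presentation": 0.3, "novelty": 0.2}

def rating(scores: dict[str, float]) -> float:
    """Final rating = sum of criterion scores weighted by their importance."""
    return sum(weights[c] * scores[c] for c in weights)

participants = {
    "A": {"research_quality": 9.0, "presentation": 7.5, "novelty": 8.0},
    "B": {"research_quality": 8.0, "presentation": 9.0, "novelty": 6.5},
}
ranked = sorted(participants, key=lambda p: rating(participants[p]), reverse=True)
print([(p, round(rating(participants[p]), 2)) for p in ranked])
```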
Naukma Research Papers. Computer Science, Volume 5, pp 4-11; https://doi.org/10.18523/2617-3808.2022.5.4-11

Abstract:
A simple procedural programming language is considered, each program of which can input integer values, process them, and output new integer values as a result. A program is a block with descriptions of local integer variables and procedures and a list of statements. The language has data-processing statements: assignment, input, output, conditional, loop, procedure call, and block. The main purpose of a block is to introduce local data (integer variables and procedures) that are used in the body of the block – its list of statements. The scope of the name of a local datum described in a block is the text of the block, except for nested blocks where this name is redefined. A mechanism of automatic memory allocation for variables introduced in a block is also associated with the block: memory for local variables is allocated when entering the block and freed when exiting it. A block containing only a list of statements is valid. A procedure has a name, a list of formal parameters, and a body – a statement (most often a block). Formal parameters are used only in its body. A procedure is invoked by the procedure call statement, whose actual parameters may only be variables. Parameters are passed by reference.
A formal specification of a programming language is a description of its syntax and semantics. A concrete syntax, a finite set of rules, singles out the syntactically correct sequences of symbols of the language alphabet. To describe the semantics of a language, as a rule, abstract syntax is used, with contextual conditions added to it. The task of semantics is to introduce the denotations (“meanings”) of the basic constructions of the language and the semantic functions that build the denotations of complex syntactic constructions from the denotations of their components, up to and including the program.
The article provides a specification of a procedural programming language that uses the extended Backus-Naur form to describe the concrete syntax and the tools of the functional language Haskell to describe the other parts. The abstract syntax is defined by the types Program, Proc, Stmt, Expr and Op. Additional contextual conditions are predicates that use information about program data. Most of the contextual conditions are related to the correct use of data in the program. The leading predicate that checks the contextual conditions of a program pr is iswfProgram pr.
The language denotations are based on the Work type. A value of this type – a tuple (inp, stg, out) – models the environment in which a program is executed: inp is the input data, stg the memory containing variable values, and out the resulting data. The semantics of the main constructions (procedures, statements, and expressions) are functions of the types Work -> Work or Work -> Integer. The semantics of the program is a function of the type [Integer] -> [Integer]. Semantic functions build these denotations over the syntactic constructions described by the abstract syntax – the Proc, Stmt, Expr and Program types. The semantics of a program pr (of type Program) is built by the function iProgram pr.
All of these functions – contextual conditions, denotations, and semantic functions – are pure functions. Using Haskell tools, a function parsePLL is built, which connects the concrete and abstract syntax. It is shown how, by combining the functions parsePLL, iswfProgram and iProgram, one can obtain an interpreter of the procedural language – a pure function named interpret.
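As an illustration only (the paper's formalization is in Haskell), the Work-based idea can be sketched in Python: a statement denotes a pure function Work -> Work over the tuple (inp, stg, out), and the denotation of a whole program composes them into a function from input list to output list:

```python
# Python transcription of the Work-based denotations sketched above (the paper
# itself uses Haskell). A statement denotes a function Work -> Work, where
# Work = (input, storage, output). Constructs and names are simplified.

from typing import Callable

Work = tuple[list[int], dict[str, int], list[int]]   # (inp, stg, out)
Stmt = Callable[[Work], Work]

def assign(x: str, expr: Callable[[dict], int]) -> Stmt:
    def run(w: Work) -> Work:
        inp, stg, out = w
        return inp, {**stg, x: expr(stg)}, out
    return run

def read(x: str) -> Stmt:
    def run(w: Work) -> Work:
        inp, stg, out = w                  # assumes enough input remains
        return inp[1:], {**stg, x: inp[0]}, out
    return run

def write(expr: Callable[[dict], int]) -> Stmt:
    def run(w: Work) -> Work:
        inp, stg, out = w
        return inp, stg, out + [expr(stg)]
    return run

def seq(*stmts: Stmt) -> Stmt:
    def run(w: Work) -> Work:
        for s in stmts:
            w = s(w)
        return w
    return run

# The denotation of a whole program is a pure function [int] -> [int]:
prog = seq(read("x"), assign("y", lambda s: s["x"] * 2), write(lambda s: s["y"]))

def interpret(inp: list[int]) -> list[int]:
    return prog((inp, {}, []))[2]

print(interpret([21]))   # [42]
```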
, Yury Yushchenko
Naukma Research Papers. Computer Science, Volume 5, pp 72-78; https://doi.org/10.18523/2617-3808.2022.5.72-78

Abstract:
In today’s world, where a car is present in almost every family, the parking problem plays an extremely important role. Parking is one of the most important elements of modern transport infrastructure, because it saves the time of both drivers and passengers and increases the comfort and safety of road trips. In Ukraine this problem is especially relevant, as the country is currently improving its parking infrastructure.
The paper examines the problem of parking in large cities and proposes a system for recognizing the occupancy of parking spots using computer vision. Such a system uses a camera feed to track the occupancy of each parking spot within a lot. Its benefits include ease of scaling, saving drivers’ and passengers’ time, automation of parking payment, and detection of unpaid parking. In addition, it makes it possible to easily collect statistics about the busyness of various areas throughout the day or week.
The paper also describes an algorithm for classifying parking spots, as well as a possible architecture for the system.
Possible problems in training a computer vision model for the proposed system are considered. Firstly, the available parking datasets lack images collected in snowy conditions or at night. The hypothesized solution is to use vehicle detection datasets, of which considerably more are publicly available. Another problem is that classification accuracy drops drastically when different images are used in the training and test datasets. The hypothesized solution here is to apply incremental learning to improve the model as it is being used in a real-life scenario.
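The per-spot classification step might look as follows. In this self-contained sketch the trained CNN is replaced by a trivial edge-density heuristic, and the spot coordinates and camera frame are invented:

```python
# Sketch of the per-spot classification step described above: crop each marked
# parking spot from a camera frame and classify the crop. A trained CNN is
# replaced here by an edge-density heuristic so the sketch stays runnable.

import cv2
import numpy as np

SPOTS = {"A1": (50, 100, 120, 220), "A2": (50, 230, 120, 350)}  # y1, x1, y2, x2

def is_occupied(crop: np.ndarray, threshold: float = 0.08) -> bool:
    """Stand-in classifier: an occupied spot tends to have more edges (a car)
    than empty asphalt. A real system would call a trained CNN here."""
    edges = cv2.Canny(cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY), 100, 200)
    return edges.mean() / 255.0 > threshold

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # fake frame
for name, (y1, x1, y2, x2) in SPOTS.items():
    print(name, "occupied" if is_occupied(frame[y1:y2, x1:x2]) else "free")
```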
Naukma Research Papers. Computer Science, Volume 5, pp 49-53; https://doi.org/10.18523/2617-3808.2022.5.49-53

Abstract:
The task of developing effective text classification systems requires thoughtful analysis and synthesis of the variable components of the technology. These components strongly affect practical efficiency and the requirements placed on the data. For this purpose, a typical technology is discussed, comparing the regular “learning from features” approach with the more advanced “deep learning” approach, which learns directly from data. To implement the technology, the first approach was tested; it included means (methods, algorithms) for analyzing the features of the source text, applying dimensionality transformations, and building model solutions that allow the correct classification of data by a set of features. As a result, all steps of the technology are described, which made it possible to determine a way of presenting data in terms of hidden features, to present them in a standard visual form, and to evaluate the solution and its practical efficiency based on this set of features. In an in-depth study, the informational core of the document was examined using regression and T-stochastic grouping of features for dimensionality reduction.
The separate results contain estimates of the practical efficiency of the algorithms in terms of time and relative performance for each step of the proposed technology. This estimation makes it possible to choose the algorithm of intelligent data processing best suited to a given dataset and application. To determine the algorithm best suited for separation in the reduced dimension, an experiment was carried out which allowed the selection of the best range of data classification algorithms, in particular boosting methods. As a result of the analysis of the technology, its necessary steps were discussed and classification was conducted on real text data, which allowed the identification of the most important stages of the technology for text classification.
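A condensed sketch of such a "learning from features" pipeline, assuming a scikit-learn stack (TF-IDF features, SVD-based dimensionality reduction, a boosting classifier) and a toy corpus:

```python
# Condensed sketch of the feature-based pipeline discussed above: extract text
# features, reduce dimensionality, then classify with a boosting method.
# The toy corpus is invented; real data would replace it.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline

texts = ["stock prices rallied today", "the team won the final match",
         "markets fell on rate fears", "a striker scored twice"]
labels = ["finance", "sport", "finance", "sport"]

pipeline = make_pipeline(
    TfidfVectorizer(),                 # text -> sparse feature vectors
    TruncatedSVD(n_components=2),      # dimensionality reduction of features
    GradientBoostingClassifier(),      # boosting-based classifier
)
pipeline.fit(texts, labels)
print(pipeline.predict(["goalkeeper saved a penalty"]))
```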
San Byn Nhuien
Naukma Research Papers. Computer Science, Volume 5, pp 68-71; https://doi.org/10.18523/2617-3808.2022.5.68-71

Abstract:
Humanity generates considerable information using its devices – smartphones, laptops, and tablets. Users upload images to different platforms, such as social networks, messengers, web services, and other applications, which greatly endangers their personal information. User privacy has been exploited on the Internet for a long time. Interested parties lure potential customers into a trap of offers and services using such information as age, weight, nationality, religion, and preferences. The sensitive information that may be contained in personal images is sometimes not recognized by users as dangerous to share and, therefore, can easily be posted online by the owner without a second thought.
This article inspects a neural hash algorithm for classifying confidential information in images and evaluates it with basic metrics. The main idea of the algorithm is to find similar images that serve as examples for defining classes. The algorithm uses hash codes, which helps preserve users’ privacy. The evaluation of the algorithm is based on “The Visual Privacy (VISPR) Dataset”. The main components of the algorithm are a neural network that generates vectors of extracted features for images and an indexed set of images (hash tables) that stores knowledge about a particular domain.
The critical aspect of the algorithm is the collision of hash codes for similar images due to the similarity of their extracted feature vectors. The resulting hash codes can be identical or differ by a certain Hamming distance. Multiple hash tables with different hash functions are used to increase the recall or precision of the results. The effect of an imperfect taxonomy was analyzed, which led to further filtering of abstract classes and increased overall scores.
The article also investigates the “pseudo-adaptivity” of the algorithm – the ability to classify new classes and add new cases to existing classes that were not included in the training stages. Such an ability may be crucial for domains with many image instances or classes.
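The retrieval core of such a scheme can be sketched as follows: feature vectors are reduced to binary codes via random hyperplanes, stored in several hash tables, and queried with a small Hamming-distance tolerance. The vectors here are random stand-ins for a neural feature extractor's output:

```python
# Illustrative retrieval step: binary hash codes in several tables, matched
# by Hamming distance. Vectors and codes are random toys; a real system would
# take the vectors from a neural feature extractor.

import numpy as np

def binary_hash(vec: np.ndarray, planes: np.ndarray) -> int:
    """Sign of projections onto random hyperplanes -> an integer hash code."""
    bits = (vec @ planes.T) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

rng = np.random.default_rng(0)
tables = [rng.normal(size=(16, 128)) for _ in range(3)]   # 3 tables, 16-bit codes

database = {f"img{i}": rng.normal(size=128) for i in range(100)}
index = [{name: binary_hash(v, p) for name, v in database.items()} for p in tables]

query = database["img7"] + rng.normal(scale=0.05, size=128)   # near-duplicate
hits = {name for table, planes in zip(index, tables)
        for name, code in table.items()
        if hamming(code, binary_hash(query, planes)) <= 2}
print(sorted(hits))
```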
Naukma Research Papers. Computer Science, Volume 5, pp 85-91; https://doi.org/10.18523/2617-3808.2022.5.85-91

Abstract:
The problem of improving the consistency of pairwise comparison matrices as applied to ranking given alternatives is considered in the paper. It can be shown, however, that consistency is not the only issue regarding the quality of pairwise comparisons: given an arbitrary positive square matrix, one can obtain an ideally consistent pairwise comparison matrix with the same Perron eigenvector. Therefore, the quality of experts’ judgements is an issue of great importance as well.
Technically, an approach to improving the consistency of pairwise comparisons based on solving a system of linear algebraic equations is suggested. The system contains two groups of equations: one represents the experts’ judgements, and the other expresses the demands of cardinal consistency. Such a system can be over- or underdetermined, and it is typically inconsistent. A pseudo-solution can then be obtained by means of the Moore-Penrose pseudoinverse.
To improve the quality of pairwise comparisons, it is important to take into account the reliability of particular judgements by giving them appropriate weight coefficients.
Some numerical examples are provided in the paper. The first is a simple basic example without any serious inconsistencies. The second illustrates how to treat incomplete pairwise comparison matrices. The third illustrates a possible expert manipulation, when an expert wants to secure the win of a certain alternative without explicitly postulating its advantage, which results in an order violation. It is shown how introducing weight coefficients for the equations can help counteract such manipulations.
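A minimal numerical sketch of this idea: in logarithmic coordinates each judgement a_ij yields the linear equation w_i - w_j = ln a_ij, the weighted system is generally inconsistent, and a pseudo-solution is taken via the Moore-Penrose pseudoinverse (all numbers invented):

```python
# Sketch: each judgement a_ij gives the equation w_i - w_j = ln(a_ij) for the
# log-priorities w; the pseudo-solution of the inconsistent system comes from
# the Moore-Penrose pseudoinverse. Weighting an equation expresses trust in it.

import numpy as np

judgements = {(0, 1): 2.0, (1, 2): 3.0, (0, 2): 4.0}   # toy matrix entries a_ij
weights    = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 0.5}   # reliability of each one
n = 3

rows, rhs = [], []
for (i, j), a in judgements.items():
    row = np.zeros(n)
    row[i], row[j] = 1.0, -1.0
    rows.append(weights[(i, j)] * row)
    rhs.append(weights[(i, j)] * np.log(a))

A, b = np.array(rows), np.array(rhs)
w = np.linalg.pinv(A) @ b              # pseudo-solution for log-priorities
priorities = np.exp(w - w.max())
print(priorities / priorities.sum())   # normalized consistent priority vector
```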
Semen Gorokhovskyi
Naukma Research Papers. Computer Science, Volume 5, pp 41-44; https://doi.org/10.18523/2617-3808.2022.5.41-44

Abstract:
Bicycle security systems have not developed as much as home security, and it is difficult to find competitive examples when researching the market. Many security systems on the market have weaknesses that can be bypassed or are inconvenient to use. The technologies used to protect bicycles are rather uniform, predictable, and unreliable. Most such systems lack convenient means of monitoring, such as a mobile application. Improving these systems and introducing new technologies is very relevant in the field of bicycle protection, given the unpopularity of existing systems, their unreliability, and the lack of control from a phone. The majority of bicycle users still rely on the proven method – bicycle locks. But this choice is mistaken.
A system with GPS is not so easy to deceive: it has more than one level of protection and quickly warns the user about a threat. It has deterrents and means of attracting the attention of bystanders. In addition, the use of GSM technology enables control through a mobile application, which simplifies working with the system.
GPS is the best way to monitor the position of the bicycle in space and to track its movement in unpredictable circumstances. GPS opens up a number of possibilities and increases the functionality of the system, from monitoring the protected object to collecting statistics.
The GSM module is almost never used in bicycle security systems. This stems from a common notion of bike guarding which asks why one would need the ability to transmit data to any corner of the world if the user never moves more than 100 meters from the guarded object. But this notion is wrong. GSM is one of the fastest solutions among its analogues, although transmission speed is not the only criterion for information transmission in wireless systems. Since the bicycle is a moving object and the security system must be wireless, an important criterion for the functioning of such a system is its operating time.
This article deals with the problem of protecting a moving object using GSM and GPS modules. The main features of existing systems in this area, with their advantages and disadvantages, are shown. The advantages of using a radio protocol for bicycle protection are given. A model of the system that meets the needs of the user has been developed.
Naukma Research Papers. Computer Science, Volume 5, pp 92-96; https://doi.org/10.18523/2617-3808.2022.5.92-96

Abstract:
The work investigates a mathematical model of a two-stage transportation problem for finding the most economical plan for transporting homogeneous products from suppliers to consumers, where the consumers’ demands are unknown but constrained by lower and upper bounds. It is an extension of the classic two-stage transportation problem, in which products are transported from suppliers to consumers only through intermediate points. Intermediary firms and various storage facilities (warehouses) can serve as such intermediate points.
The relationship of the developed mathematical model with the two-stage continuous-discrete problem of optimal partitioning-distribution, which is characterized by the presence of two stages, is investigated. That problem consists in determining the areas from which the continuously distributed resource (raw material) is collected by the first-stage enterprises and the volumes of transportation of the processed product from the first-stage enterprises to consumers (second-stage points), so as to minimize the total cost of transporting the resource from suppliers to consumers through the processing points (collection points, storage points).
The material of the article is presented in two sections. Section 1 describes the mathematical model of the two-stage transportation problem with unknown consumer demands and provides the necessary and sufficient conditions for the compatibility of the system of linear constraints. It is shown that a special case of it coincides with the classic two-stage transportation problem.
Section 2 describes the model problem of optimal partitioning-distribution for a continuous area Ω and the discrete analogue of the model problem. The results of computational experiments for a rectangular area Ω = {x = (x(1), x(2)) : 0 ≤ x(1) ≤ 1, 0 ≤ x(2) ≤ 1} with discretizations by 31 × 31 and 500 × 500 grids are presented. Optimal plans for transporting the processed product from first-stage points to second-stage points were found for both grids. The average time spent by the Gurobi solver on the problems for the second grid, with 250018 variables and 250009 constraints, is a few seconds on a modern PC.
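A toy instance of such a two-stage model, solved here with SciPy's linprog rather than Gurobi: two suppliers ship through two warehouses to two consumers whose demands are only bounded; all costs and capacities are invented:

```python
# Toy two-stage transportation LP: supplier -> warehouse -> consumer, with
# consumer demands unknown but bounded. Solved with SciPy instead of Gurobi;
# every number below is invented for illustration.

import numpy as np
from scipy.optimize import linprog

# variable order: x[s,w] supplier->warehouse (4 vars), y[w,k] warehouse->consumer (4 vars)
cost = np.array([4, 6, 5, 3,   2, 7, 6, 3], dtype=float)

supply = [30, 40]
lo, hi = [10, 15], [25, 30]          # demand bounds per consumer

A_ub, b_ub = [], []
for s in range(2):                   # supplier capacity: sum_w x[s,w] <= supply_s
    row = np.zeros(8); row[2*s:2*s+2] = 1
    A_ub.append(row); b_ub.append(supply[s])
for k in range(2):                   # demand window: lo_k <= sum_w y[w,k] <= hi_k
    row = np.zeros(8); row[[4 + k, 6 + k]] = 1
    A_ub.append(row);  b_ub.append(hi[k])
    A_ub.append(-row); b_ub.append(-lo[k])

A_eq = []                            # flow conservation at each warehouse
for w in range(2):
    row = np.zeros(8)
    row[[w, 2 + w]] = 1              # inflow  x[0,w], x[1,w]
    row[4 + 2*w : 4 + 2*w + 2] = -1  # outflow y[w,0], y[w,1]
    A_eq.append(row)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0, 0])
print(res.fun, res.x.round(2))       # minimal cost and the transport plan
```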
Andrii Hlybovets
Naukma Research Papers. Computer Science, Volume 5, pp 16-25; https://doi.org/10.18523/2617-3808.2022.5.16-25

Abstract:
The purpose of this work is to describe a methodology for building a software system (application) for plagiarism checking of scientific publications in the Ukrainian language using two machine learning models, Word2Vec and BERT. We consider the detection of external plagiarism in Ukrainian texts.
Plagiarism is usually defined as passing off someone else’s ideas as one’s own. As the Internet becomes more accessible every day, a huge amount of data becomes available to people. Nowadays it is quite easy to find a suitable study and plagiarize it instead of developing one’s own from scratch. Plagiarism undermines the efforts of the researcher whose work has been plagiarized and lets the plagiarist take undeserved credit; such a person can do real damage when appointed to an important position. Many fields of life are susceptible to plagiarism, including research and education. Plagiarism can also take many forms: from straight-up copy-paste to paraphrasing and sentence restructuring. This makes plagiarism a rather complex problem, where methods based on finding shared words between documents, such as longest common subsequence or n-grams, might not work. Therefore, we consider applying deep learning to the problem of plagiarism detection.
In this article we discussed the concept of plagiarism and listed its types. Two machine learning models have been proposed for plagiarism detection: Word2Vec and BERT. We also provided an overview of both models and described how they can be used in the problem of plagiarism detection.
A web application for plagiarism detection in the Ukrainian language has been developed. The application uses React, a JavaScript framework, on the frontend and Python on the backend. MongoDB is used to store application data. The application allows a user to input a text that will be compared with the texts from the application database, using cosine similarity or Euclidean distance as the metric. The comparison is performed on word embeddings calculated by a pre-trained BERT or Word2Vec model; the user can choose the model and the similarity metric in the application’s UI. The application can be further improved to not only output a similarity score but also highlight the similar sentences in the texts.
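The comparison step in isolation might look like this; the embedding vectors below are random placeholders for BERT or Word2Vec document embeddings:

```python
# The comparison step described above, in isolation: given embedding vectors
# of two documents, score their similarity with cosine similarity or
# Euclidean distance. Vectors here are random placeholders.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))

rng = np.random.default_rng(1)
doc, suspect = rng.normal(size=768), rng.normal(size=768)   # e.g. BERT-sized
print(cosine_similarity(doc, suspect), euclidean_distance(doc, suspect))
```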
Naukma Research Papers. Computer Science, Volume 5, pp 12-15; https://doi.org/10.18523/2617-3808.2022.5.12-15

Abstract:
The article presents a system capable of generating new ontologies or supplementing existing ones based on articles in Ukrainian. Ontologies are described, and an algorithm suitable for automated concept extraction from natural language texts is presented.
Ontology as a technology has become an increasingly important topic in contemporary research. Since the creation of the Semantic Web, ontologies have been a candidate solution to many problems of natural language understanding by computers. If an ontology existed and was used to analyze documents, we would have systems that could answer very complex queries in natural language. Google’s success showed that loading HTML pages is much easier than marking everything up semantically, which wastes human intellectual resources. In search of a solution to this problem, a new direction in the ontological field, called ontology engineering, has appeared. This direction studies ways of automating the generation of knowledge from text, to be consolidated in an ontology.
Humanity generates more data every day than the day before. One of the main considerations today in the choice of technologies for new projects is whether they can cope with this ever-increasing flow of data. Because of this, some technologies come to the fore, such as machine learning, while others recede to the periphery because they cannot adapt, or cannot adapt in time, to modern needs, as happened with ontologies. The main reasons for the decline in the popularity of ontologies were the need to hire experts for their construction and the lack of methods for automated ontology construction.
This article considers the problem of automated ontology generation using articles from the Ukrainian Wikipedia, with geometry taken as the example subject area. A system was built that collects data, analyzes it, and forms an ontology from it.
Naukma Research Papers. Computer Science, Volume 5, pp 45-48; https://doi.org/10.18523/2617-3808.2022.5.45-48

Abstract:
Nowadays, the enterprise information systems of banks provide modules for calculating the creditworthiness of a business. Such systems are complex; maintaining and developing them is difficult and requires the involvement of large teams. In addition, the systems are complicated to change and update in accordance with changes in current legislation. On the other hand, demand for consumer loans is high, and creating a separate module for calculating the creditworthiness of an individual is appropriate, as it increases adaptability to changes and updates of the system. Calculating the creditworthiness of an individual is relevant not only for the banking system but also for other spheres such as logistics and marketing.
The work describes an information system for calculating the creditworthiness of an individual, which determines the borrower’s class based on data from the credit history, the credit rating, qualitative characteristics, the person’s financial indicators, and the characteristics of the credit transaction. The use of the ASP.NET Core platform and the Vue.js framework to build a software module that can be used both independently and easily integrated into other corporate systems is demonstrated. The major steps of designing and developing the system are described.
Naukma Research Papers. Computer Science, Volume 5, pp 62-67; https://doi.org/10.18523/2617-3808.2022.5.62-67

Abstract:
As part of this work, image processing algorithms used in video search systems were studied.
With the development of search engines and the increase in the types of queries available for search, the need to index an ever-growing amount of diverse information is growing. New data in the form of images and videos require new processing techniques to extract key content descriptions. In video search engines, users can find the video files most relevant to a search query based on such a description. The search query, in turn, can be of various types: text, search by image, search by video file to find a similar one, and so on. It is therefore necessary to accurately describe the objects in a video in order to assign appropriate labels to the video file in the search engine database.
In this article, we focus on an algorithm for extracting key frames of faces from a video sequence, since people are among the most important objects in video. This algorithm performs the initial processing of the file and saves the identified frames with faces so that this data can later be processed by a face recognition algorithm and assigned appropriate labels. An alternative application of this algorithm is the routine processing of video files to form datasets of faces for the development and training of new computer vision models. The main criteria for such an algorithm were: the accuracy of face detection, the ability to distinguish the key frames of different people from each other, a comprehensive evaluation of candidate frames, and sorting of the entire set by relevance for each face.
After an analysis of existing solutions for specific stages of the algorithm, the article proposes a sequence of steps for extracting key frames of faces from a video file. An important step is to assess the quality of all candidates and sort them by quality. For this, the work defines various metrics of frame quality, which affect the overall assessment and, accordingly, the sorting order. The article also describes a basic version of the interface for using the proposed algorithm.
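A compressed sketch of such a pipeline, assuming OpenCV's stock Haar-cascade face detector and a single quality metric (sharpness as the variance of the Laplacian); the file name is a placeholder, and a real system would combine several metrics and a recognition stage:

```python
# Sketch of the keyframe-extraction loop described above: detect faces in
# sampled frames, score each candidate crop by sharpness (variance of the
# Laplacian), and keep the best-scoring crops. "video.mp4" is a placeholder.

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def sharpness(gray) -> float:
    return cv2.Laplacian(gray, cv2.CV_64F).var()

cap = cv2.VideoCapture("video.mp4")
candidates = []                          # (quality, frame_index, crop)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % 10 == 0:                    # sample every 10th frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            crop = frame[y:y+h, x:x+w]
            candidates.append((sharpness(gray[y:y+h, x:x+w]), idx, crop))
    idx += 1
cap.release()

best = sorted(candidates, key=lambda c: c[0], reverse=True)[:5]
for rank, (q, i, crop) in enumerate(best):
    cv2.imwrite(f"face_{rank}.png", crop)
```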
Sergiy Pogorilyy
Naukma Research Papers. Computer Science, Volume 5, pp 54-61; https://doi.org/10.18523/2617-3808.2022.5.54-61

Abstract:
Neural machine translation falls into the category of natural language processing tasks. Despite the availability of a large number of research papers devoted to improving the quality of machine translation of documents, the translation of spoken language containing elements of speech disfluency remains an open task, especially for low-resource languages like Ukrainian. In this paper, the problem of neural machine translation of transcriptions of spoken language that incorporate different elements of speech disfluency is considered for translation from English to Ukrainian. Different methods and software libraries for detecting elements of speech disfluency in English texts are analyzed. Due to the lack of open-access corpora of speech disfluency samples, a new synthetic labeled corpus was created. The corpus contains both the original version of a document and versions modified according to different types of speech disfluency: filler words (uh, ah, etc.) and phrases (you know, I mean), and reparandum-repair pairs (cases when a speaker corrects himself during speech). The effectiveness of using speech disfluency detection to improve the machine translation of spoken language was verified experimentally for the English-Ukrainian language pair. It is shown that current state-of-the-art neural translation models cannot produce appropriate translations of elements of speech disfluency, especially in reparandum-repair cases. The results obtained indicate that the mentioned method of disfluency detection can be used to preprocess transcriptions of spoken dialogues so that coherent translations can be produced by different neural machine translation models.
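A deliberately naive version of the preprocessing step, using a couple of regex patterns instead of the trained detector evaluated in the paper; the filler list and the reparandum-repair pattern are a toy subset:

```python
# Naive stand-in for the disfluency-removal step: strip filler words/phrases
# and simple reparandum-repair self-corrections from a transcript before it is
# sent to a translation model. The patterns are a toy subset, not the paper's.

import re

FILLERS = r"\b(uh|um|ah|you know|i mean)\b[,]?\s*"
# "reparandum, repair" such as "on Friday, uh, on Saturday" -> keep the repair
REPAIR = r"\b(on|at|in|to)\s+(\w+),\s*(?:uh|um)?,?\s*\1\s+(\w+)"

def clean(transcript: str) -> str:
    text = re.sub(REPAIR, r"\1 \3", transcript, flags=re.IGNORECASE)
    text = re.sub(FILLERS, "", text, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", text).strip()

print(clean("I mean, we will meet on Friday, uh, on Saturday, you know, at noon"))
# -> "we will meet on Saturday, at noon"
```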
Naukma Research Papers. Computer Science, Volume 5, pp 26-30; https://doi.org/10.18523/2617-3808.2022.5.26-30

Abstract:
Nowadays, augmented reality technology has become available to a wide audience of users thanks to the many software and hardware enhancements and optimizations made in recent years. The fact that the smartphone is a suitable and relatively cheap device with all the required hardware makes the technology even more accessible and thus widespread. Furthermore, interaction with three-dimensional objects in space may have a positive impact on the user’s perception of information. Both facts make augmented reality a good choice for displaying complex data.
The analysis of software plays a significant role in development, as it is vital to keep the code clean and maintained at all times. Poor-quality code may become so unsustainable that it must be fully replaced, which results in big losses of resources. For quality checks, the analysis must be informative yet consume as few resources as possible, so that it is practical to perform regularly. That is the reason for this process to be automated and made convenient to run and to interpret.
A new system for automatic software analysis is described in this article. The ADAR (Architecture Displayer in Augmented Reality) software is best suited for code coupling and cohesion analysis, as it uses a three-dimensional graph to display the connectivity between parts of a software module. High coupling and low cohesion may signal severe architectural mistakes that lead to fragile code. Using AR technology, the result of the coupling analysis, in the form of a graph, is presented in augmented reality to provide the user with the information in a highly intuitive way.
This article also covers different approaches to graph visualization in three-dimensional space. The criteria for achieving a high level of aesthetics for this problem are stated in the paper. The use of force-directed algorithms for highly aesthetic graph visualization is described in detail, and arguments for their usage are given.
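For intuition, a compact force-directed iteration in three dimensions: connected nodes attract like springs, all pairs repel, and positions settle into a readable layout. The module graph and constants below are invented:

```python
# Compact force-directed layout iteration in 3D, of the kind discussed above:
# spring attraction along edges, pairwise repulsion, damped position updates.
# The module-dependency graph and all constants are invented.

import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]        # "coupling" between modules
n = 4
rng = np.random.default_rng(42)
pos = rng.normal(size=(n, 3))                    # random initial 3D positions

for _ in range(200):
    force = np.zeros((n, 3))
    for i in range(n):                           # pairwise repulsion
        for j in range(n):
            if i != j:
                d = pos[i] - pos[j]
                force[i] += d / (np.linalg.norm(d) ** 2 + 1e-9)
    for i, j in edges:                           # spring attraction along edges
        d = pos[j] - pos[i]
        force[i] += 0.05 * d
        force[j] -= 0.05 * d
    pos += 0.1 * force                           # damped update step

print(pos.round(2))
```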
Mykhailo Kobieliev
Naukma Research Papers. Computer Science, Volume 5, pp 31-40; https://doi.org/10.18523/2617-3808.2022.5.31-40

Abstract:
A finite state machine (FSM) is a powerful tool for modeling object behavior. Using FSMs and their extensions to model program behavior, followed by automatic generation of executable code, is the approach encouraged by model-driven development (MDD) – a software development methodology based on the concepts of model and model transformation.
In this paper, a brief overview of common FSM-based methods for modeling and developing software of any nature is given. These methods include David Harel’s statecharts, UML State Machines, Virtual Finite State Machine, etc. Examples of all types of software systems (transformational, interactive, reactive) implemented using FSMs are cited.
Chat-bots are considered as an example of an interactive software system: the concept, classification methods, and implementation techniques. A graphical designer of rule-based chat-bots for integration into the Telegram messenger is developed and implemented. In this designer, chat-bot behavior is modeled using an FSM.
A formal method for modeling a rule-based chat-bot with an FSM is provided. The FSM concept is extended with disabled transitions, which preserve the history of transition changes made during the FSM design process. A brief overview of code generation methods from FSM specifications is given; the advantages and disadvantages of the most popular approaches are considered. A dynamic approach to executing code from an FSM specification stored in a database is proposed. To implement this approach, the document database MongoDB and the in-memory key-value store Redis are used; the FSM is kept as a JSON document. This approach is efficient in terms of flexibility, speed, and memory use.
An architecture diagram of the developed chat-bot graphical designer is given. It has a microservice architecture. The FSM model-to-code transformation is carried out by the bot-execution service, written in the compiled language Go. Other services include the front-end (the UI for the end user and the CRUD API for chat-bots) and the bot-management service (synchronization of the document and key-value databases).
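A minimal sketch of this dynamic execution idea: the FSM lives in a JSON document (as it would in MongoDB) and is interpreted at run time, with transitions flagged as disabled kept for design history but skipped. The FSM content is a made-up example:

```python
# Sketch of the dynamic execution approach described above: the chat-bot FSM
# is kept as a JSON document and interpreted at run time; transitions flagged
# "disabled" are preserved for design history but never taken.

import json

fsm_json = """
{
  "initial": "menu",
  "transitions": [
    {"from": "menu", "event": "/help", "to": "help", "disabled": false},
    {"from": "menu", "event": "/buy",  "to": "cart", "disabled": false},
    {"from": "menu", "event": "/beta", "to": "beta", "disabled": true},
    {"from": "help", "event": "back",  "to": "menu", "disabled": false}
  ]
}
"""

fsm = json.loads(fsm_json)

def step(state: str, event: str) -> str:
    for t in fsm["transitions"]:
        if t["from"] == state and t["event"] == event and not t["disabled"]:
            return t["to"]
    return state                       # unknown or disabled event: stay

state = fsm["initial"]
for event in ["/beta", "/help", "back"]:
    state = step(state, event)
    print(event, "->", state)
```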
Naukma Research Papers. Computer Science, Volume 4, pp 29-43; https://doi.org/10.18523/2617-3808.2021.4.29-43

Abstract:
This paper offers a comprehensive review of selection methods used in generational genetic algorithms.
First, a brief description of the following selection methods is presented: fitness-proportionate selection methods, including roulette-wheel selection (RWS) and its modifications, stochastic remainder selection with replacement (SRSWR), remainder stochastic independent selection (RSIS), and stochastic universal selection (SUS); ranking selection methods, including linear and nonlinear rankings; tournament selection methods, including deterministic and stochastic tournaments as well as tournaments with and without replacement; elitist and truncation selection methods; and the fitness uniform selection scheme (FUSS).
Second, basic theoretical statements on selection method properties are given. In particular, the selection noise, selection pressure, growth rate, reproduction rate, and computational complexity are considered. To illustrate selection method properties, numerous runs of genetic algorithms using only a selection method and no other genetic operator are conducted, and numerical characteristics of the analyzed properties are computed. Specifically, to estimate the selection pressure, the takeover time and selection intensity are computed; to estimate the growth rate, the ratio of best-individual copies in two consecutive populations is computed; to estimate the selection noise, the algorithm convergence speed is analyzed based on experiments carried out on a specific fitness function assigning the same fitness value to all individuals.
Third, the effect of selection methods on the population fitness distribution is investigated. To this end, genetic algorithm runs are conducted starting with a binomially distributed initial population. It is shown that most selection methods keep the distribution close to the original one while providing an increased mean value of the distribution, whereas others (such as disruptive RWS, exponential ranking, truncation, and FUSS) change the distribution significantly. The obtained results are illustrated with tables and histograms.
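For reference, self-contained sketches of two of the surveyed schemes, fitness-proportionate roulette-wheel selection and deterministic tournament selection, over a toy list of fitness values:

```python
# Sketches of two surveyed selection schemes: roulette-wheel (fitness-
# proportionate) selection and deterministic tournament selection.
# The population is a toy list of fitness values.

import random

def roulette_wheel(fitness: list[float], k: int) -> list[int]:
    """Pick k parent indices with probability proportional to fitness."""
    total = sum(fitness)
    picks = []
    for _ in range(k):
        r, acc = random.uniform(0, total), 0.0
        for i, f in enumerate(fitness):
            acc += f
            if r <= acc:
                picks.append(i)
                break
    return picks

def tournament(fitness: list[float], k: int, size: int = 2) -> list[int]:
    """k deterministic tournaments: the fittest of `size` random entrants wins;
    entrants return to the pool between tournaments."""
    return [max(random.sample(range(len(fitness)), size), key=lambda i: fitness[i])
            for _ in range(k)]

random.seed(0)
pop_fitness = [1.0, 3.0, 0.5, 5.0, 2.0]
print(roulette_wheel(pop_fitness, 5))
print(tournament(pop_fitness, 5, size=3))
```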
Naukma Research Papers. Computer Science, Volume 4, pp 64-71; https://doi.org/10.18523/2617-3808.2021.4.64-71

Abstract:
The paper investigates the possibility of developing a non-virtual hierarchy for a special case of class signature, which may possess different interpretations. The approach is similar to double dispatch in the C++ programming language. As an alternative to polymorphism, a non-polymorphic hierarchy is suggested based on generic programming templates. This hierarchy relies on inverse parametrization of templates, enabling the construction of a general scheme for the design pattern. The pattern defines a class architecture suitable for a static implementation of a double-dispatched multimethod for the special case of signature-defined interfaces.
In fact, any abstract base class (interface) with purely virtual operations must acquire a polymorphic implementation. Besides the polymorphism itself, the dependence of a virtual function on two objects – “this” and another parameter – requires the use of double dispatch, turning a class member function into a double-dispatched multimethod.
A preliminary consideration deals with the issues of double dispatch in the C++ programming language. Inheritance with polymorphic class member functions is used. This requires the special effort of adding a couple of virtual functions to both base and derived classes to support dispatching. In any case, this approach, besides using virtual functions, has the disadvantage of violating one of the SOLID principles, namely the principle of dependency inversion: base classes should not depend on derivatives, which negatively affects the quality of the software.
Polymorphism is usually understood as the dynamic tuning of a program to the data type of the object that the program encounters during its execution. That is, by its nature, polymorphism is a purely dynamic characteristic. However, in the C++ literature and in practice, one comes across the term “static polymorphism”. At the same time, research into the possibilities of generic programming (templates) allows some dynamic problems to be transferred to the static level. In particular, a variant of static polymorphism without virtual functions can be considered.
A variant of non-virtual double dispatch has been proposed, generalized in the form of a newly created design pattern, “Signature multimethod”. The use of the new pattern is illustrated with an example implementing classes of complex numbers. The absence of violations of SOLID principles is shown, and the possibility of supplementing the hierarchy with new derived classes without interfering with the structure of the base class is demonstrated.
The approach suggested in this work has been used in courses on object-oriented programming at the Faculty of Informatics of Kyiv-Mohyla Academy.
Semen Gorokhovskyi
Naukma Research Papers. Computer Science, Volume 4, pp 48-51; https://doi.org/10.18523/2617-3808.2021.4.48-51

Abstract:
The Euclidean algorithm has been known to humanity for more than two thousand years. During this period, many applications for it have been found across different disciplines, and music is one of them. This application of the algorithm in music first appeared in 2005, when researchers found a correlation between the rhythms of world music and the results of the Euclidean algorithm, defining the concept of Euclidean rhythms.
In the modern world, music can be created using many approaches. The first is simple analogue: an analogue signal is just a sound wave emitted by the vibration of a certain medium. A signal recorded onto a computer hard drive or other digital storage is called digital, and digital signal processing methods can be applied to it. The ability to convert an analogue signal or to create and modulate digital sounds opens up many possibilities for sound design and production: sonic characteristics that were never accessible because of the limitations of analogue devices and instruments have now become attainable. The sound generation process usually consists of modulating waveform and frequency and can be influenced by many factors, such as oscillation, the FX pipeline, and so on. The programs that process a synthesized or recorded signal are called VST plugins, and they rely on the concepts of digital signal processing.
This paper researches the possible application of Euclidean rhythms and integrates them into the sound generation process by creating a VST plugin that modulates the incoming signal with one of four basic wave shapes in order to achieve unique sonic qualities. The modulating function can be switched among four basic wave shapes – sine, triangle, square, and sawtooth – depending on the value received from the Euclidean rhythm generator. Switching modulating functions introduces subharmonics, resulting in a richer and tighter sound, as can be seen on the spectrograms provided in the publication.
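The generator itself is tiny. The following Bresenham-style formulation produces the same onset patterns as the Bjorklund algorithm up to rotation; mapping onsets to wave-shape switches, as the plugin does, is only hinted at in a comment:

```python
# Euclidean rhythm generator: distribute k onsets as evenly as possible over
# n steps. This Bresenham-style formula matches the Bjorklund algorithm's
# output up to rotation.

def euclidean_rhythm(k: int, n: int) -> list[int]:
    """1 marks an onset, 0 a rest; k onsets spread as evenly as possible."""
    return [1 if (i * k) % n < k else 0 for i in range(n)]

print(euclidean_rhythm(3, 8))   # [1, 0, 0, 1, 0, 0, 1, 0] - the tresillo

shapes = ["sine", "triangle", "square", "sawtooth"]
# a plugin could, for example, advance to the next modulating wave shape
# on each onset of the generated pattern
```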
Semen Gorokhovskyi
Naukma Research Papers. Computer Science, Volume 4, pp 98-100; https://doi.org/10.18523/2617-3808.2021.4.98-100

Abstract:
With the rapid development of applications for mobile platforms, developers around the world already understand the need to impress with new technologies and to create applications that immerse the consumer in the world of virtual or augmented reality. The world’s most popular mobile operating systems, Android and iOS, already have well-known tools that make it easier to work with machine learning and augmented reality technology. However, it cannot be said that their use has reached its peak, as these technologies are still being actively studied and developed. Every year the demand for mobile application developers increases, and with it more questions arise as to how best to approach immersion in augmented reality and machine learning. From a tourist’s point of view, there are already many applications that, with the help of these technologies, provide more information simply by pointing the camera at a specific object.
Augmented Reality (AR) is a technology that allows one to see the real environment right in front of us with a digital complement superimposed on it. Ivan Sutherland’s first display, created in 1968 under the name «Sword of Damocles», paved the way for the development of AR as it is used today.
Augmented reality can be divided into two forms: location-based and vision-based. Location-based AR provides a digital picture to the user moving through a physical area thanks to a GPS-enabled device; with a story or information, one can learn more details about a particular location. With vision-based AR, certain user actions are performed only when the camera is aimed at the target object.
Thanks to technological advances happening every day, easy access to smart devices can be seen as the main engine of AR technology. As the smartphone market continues to grow, consumers have the opportunity to use their devices to interact with all types of digital information. The experience of using a smartphone to combine the real and digital worlds is becoming more common. The success of AR applications in the last decade has been due to the proliferation of smartphones that have the capabilities needed to run such applications. If companies want to remain competitive in their field, it is advisable for them to consider work related to AR.
However, analyzing the market, one can see that there are no such applications for prospective entrants to higher education institutions – applications that would let anyone point a camera at a university building and learn important information about it. The UniApp application, based on the existing Swift and Watson Studio technologies, was developed to simplify obtaining information about higher education institutions.
Naukma Research Papers. Computer Science, Volume 4, pp 113-116; https://doi.org/10.18523/2617-3808.2021.4.113-116

Abstract:
A virtual asset is a type of asset that does not have a material representation, although its value is reflected in a real currency. Due to their nature, the prices of digital assets are usually highly volatile, especially those of futures, which are derivative financial contracts. This is the most important factor contributing to the low usability of digital-based contracts in enterprise operations.
Previously existing virtual assets included photography, logos, illustrations, animations, audiovisual media, etc. However, virtually all such assets required a third-party platform for exchange into currency. The necessity of a mediator trusted by both sides greatly limited ease of use and ultimately restricted the number of such transactions. Still, the popularity of digital assets only grew, as evidenced by the explosive growth of software applications in the 2000s, as well as of the blockchain-based asset space in the 2010s.
The newest and most promising solution is based on cryptoassets. The underlying use of blockchain technology for transaction verification and storage ensures clarity in the value history of virtual assets. Smart contracts written for the Ethereum platform, as an example, provide a highly trustworthy way of expressing the predefined conditions of a certain transaction. This allows safe and calculated enterprise usage and also eliminates the need for a mutually trusted third party. The transactions are fully automated and execute as soon as the predefined external conditions are met.
Ethereum was chosen as the exemplary platform due to its high flexibility and the amount of existing development. Even now, further advancements are being explored by its founder and community. Besides Ether, it is also used for non-fungible tokens, decentralized finance, and enterprise blockchain solutions. Another important point is how much more environmentally friendly it is compared to its main competitors, due to the energy efficiency of the mining process enforced by the platform itself. This makes it ideal for responsible usage as well as further research.
This article explores the usage of digital assets and explains the technological background of cryptoassets in order to highlight recent developments in the area of futures based on virtual assets, using a certain Ether implementation that offers perpetual futures as an example.
Yury Yuschenko
Naukma Research Papers. Computer Science, Volume 4, pp 108-112; https://doi.org/10.18523/2617-3808.2021.4.108-112

Abstract:
The work examines current problems in the spread of logic programming in the development of commercial multi-platform software applications, and tools for conveniently developing a modern graphical interface to logic programs. Libraries with similar concepts of use are analyzed and described. The purpose of the proposed concept, which is implemented as an open-source library, is described, and the advantages of the proposed tools over similar existing tools are indicated. The main feature and advantage of the proposed concept is implementing business logic in Prolog and the interface in JavaScript, which communicates with Prolog through child processes. The proposed concept of an interface to Prolog takes full advantage of the possibilities provided by async/await. A framework library has been created for the use of logic programming in graphical interface development without losses in application performance. The paper describes the proposed concept and the developed framework (library). Ways to further improve and expand the purpose of the implemented library are identified, as are directions for further simplifying the integration of graphical interfaces with logic programs. A significant advantage of the proposed tool is its easy-to-use functions for wrapping and controlling the correctness of requests to Prolog. The main goal of the library is to create an environment in which Prolog developers can create any type of software that is user-friendly, fast, and cross-platform, using modern and flexible tools. The concept also tries to solve disadvantages and architectural problems found in other libraries. The safety of the library’s functionality has been analyzed, and the concept of potential horizontal application scalability is described. Conclusions and future plans for the library are presented, including the use of TypeScript for type safety and the avoidance of run-time errors. Overall, the library extends the use of Prolog beyond logic programming and takes a leap forward in its progress.
Lada Beniukh, Andrii Hlybovets
Naukma Research Papers. Computer Science, Volume 4, pp 88-92; https://doi.org/10.18523/2617-3808.2021.4.88-92

Abstract:
The importance of testing system performance is difficult to overestimate or underestimate; it would be more correct to talk about the timeliness of this activity. Virtually any digital system built on modern approaches and technologies can work without critical performance problems. At the same time, for any system, especially one that becomes popular, it is very likely that a time will come when it cannot cope with the ever-increasing load and becomes unstable. However, most companies that develop and maintain their own digital solutions – from websites to any other digital systems – often focus primarily on the functionality of the system and its compliance rather than on the performance of the system as a whole. Such intentions are quite natural, because the system must properly perform the functions expected of it. When companies start to face performance problems, they tend not to optimize the software but to add more capacity – vertical and horizontal scaling. This strategy works, but it has limitations: additional resources cannot be added endlessly, and sooner or later the strategy runs up against the architecture of the system, the capabilities of the company itself, and so on.
It is therefore recommended to carry out stress testing in advance, planning time and resources so that there is enough time to correct errors, and generally to understand the boundaries of the system. At the same time, organizing full-fledged stress testing requires trained specialists, tools, and infrastructure, especially when a heavy workload is involved.
As part of this work, various tools for stress testing and performance testing, the scaling of such tests, and centralized reporting of metrics were analyzed. As a result, approaches and principles were proposed for building a modern architecture for a load testing subsystem within continuous code delivery.
Naukma Research Papers. Computer Science, Volume 4, pp 23-28; https://doi.org/10.18523/2617-3808.2021.4.23-28

Abstract:
Machine learning technologies have developed rapidly in recent years, and people are now able to use them in various spheres of life, making their lives easier and better. The agro-industry is not lagging behind: every year more and more problems in this area are solved with the help of machine learning algorithms. However, among the problems that have not yet been fully solved is the identification of diseases of agricultural plants. According to UN research, about 40% of the world’s harvest is lost each year to various diseases, most of which could be avoided through timely intervention and treatment.
To address this problem, we offer an easy, accessible service that predicts from an image of a plant’s leaves whether the plant is sick or healthy and whether it needs any help or intervention. This service will be indispensable for small farms engaged in growing crops, allowing their employees to immediately detect diseases and receive recommendations for the care of the plants important to them.
Therefore, it was decided to develop a neural network architecture to solve this problem: predicting a plant disease from an image of its leaves. The resulting neural network model is lightweight, does not take much time to train, and has high accuracy on our dataset. It was also investigated which popular deep neural network architectures (e.g. XceptionNet, DenseNet, etc.) can achieve high accuracy on this problem. To make the model usable by end users, i.e. farmers, a special web service in the form of a Telegram bot was developed. With this bot, anyone can upload images of the leaves of agricultural plants and check whether the plant is healthy or suffering from a disease. The bot is also trained to give appropriate advice to growers on the treatment of diseases and the proper cultivation of healthy plants.
This solution fully addresses the problem and has every chance of becoming an indispensable helper in preserving the world harvest.
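As a hedged illustration of the general approach (the paper develops its own architecture), a lightweight Keras transfer-learning classifier for leaf images; the class count is a placeholder and dataset loading is omitted:

```python
# Illustrative leaf-disease classifier via transfer learning (not the paper's
# architecture): a small head on a frozen MobileNetV2 base, to be trained on
# (leaf image, disease label) pairs. NUM_CLASSES is a placeholder.

import tensorflow as tf

NUM_CLASSES = 10                      # hypothetical number of disease classes

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                # keep pretrained features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with a real dataset
```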
Semen Gorokhovskyi, Oleksandra Radziievska
Naukma Research Papers. Computer Science, Volume 4, pp 60-63; https://doi.org/10.18523/2617-3808.2021.4.60-63

Abstract:
In the modern world, it is no longer enough to simply create a product that performs its function; it should perform it better than thousands of competitors. The problem, however, is that human intellectual abilities are limited, and many complex tasks are beyond the capabilities of a single person. The natural way of raising our intellectual level is to build teams, sharing our experience, knowledge, and worldview to create something beyond the capacity of an individual. It is thus not surprising that, according to a recent ranking, collaborative skills are considered among the most essential in the 21st century [2]. To cope with all these challenges and create high-quality products, there should be a team whose members are experts in communication, discussion, problem-solving, and critical thinking. In addition, it is important to manage the team effectively, and to do so it is necessary to know more about the social processes that take place inside a team. Agent-based modeling can be an effective tool for gaining such insights.
Agent-based modeling is a powerful instrument for simulating different processes, including social ones. This technology was formed under the influence of many other fields, such as artificial intelligence, sociology, game theory, and so on.
In this article, a model that simulates human interaction in the framework of «Wilderness Survival: A Consensus-Seeking Task» is used to demonstrate the core principles of agent-based modeling. A group of agents complete a test by themselves and afterwards discuss their answers to reach a consensus and achieve the best score. We analyze which human character traits are more important for successful collaborative work and identify situations in which some team members are not interested in the team’s success. A user interface is also provided for running custom experiments, to better understand how specific character traits affect team results.
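A stripped-down sketch of such a simulation: each agent holds an opinion and, in every discussion round, moves toward the group mean in proportion to an invented "openness" trait:

```python
# Stripped-down agent-based sketch of a consensus process: each agent holds an
# opinion (a score for one survival item) and drifts toward the group mean in
# proportion to its "openness" trait. All traits and numbers are invented.

import random

random.seed(3)

class Agent:
    def __init__(self, opinion: float, openness: float):
        self.opinion = opinion        # agent's current answer
        self.openness = openness      # 0 = stubborn, 1 = fully conformist

    def discuss(self, group_mean: float) -> None:
        self.opinion += self.openness * (group_mean - self.opinion)

agents = [Agent(random.uniform(1, 10), random.uniform(0.1, 0.9)) for _ in range(6)]

for _ in range(10):                   # ten discussion rounds
    mean = sum(a.opinion for a in agents) / len(agents)
    for a in agents:
        a.discuss(mean)

print([round(a.opinion, 2) for a in agents])   # opinions converge to consensus
```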
Serhii Sukharskyi
Naukma Research Papers. Computer Science, Volume 4, pp 10-15; https://doi.org/10.18523/2617-3808.2021.4.10-15

Abstract:
With the development of the Big Data sphere, as well as the fields of study related to artificial intelligence, the need for fast and efficient computing has become one of the most important tasks nowadays. That is why, in the recent decade, graphics processing unit computations have been actively developing to give scientists and developers the ability to use the thousands of cores GPUs have in order to perform intensive computations. The goal of this research is to implement the orthogonal decomposition of a matrix by applying a series of Householder transformations in the Java language using the JCuda library, and to investigate its benefits. Several related papers were examined. Malaschonok and Savchenko introduced an improved version of the QR algorithm for this purpose [4] and achieved better results; however, the Householder algorithm is more promising for GPUs according to another team of researchers, Lahabar and Narayanan [6]. However, they were using Float numbers, while we are using Double, and apart from that we are working on a new BigDecimal type for CUDA. In addition, there is still no solution for handling huge matrices where errors in calculations might occur. The algorithm of orthogonal matrix decomposition, which is the first part of the SVD algorithm, is researched and implemented in this work. The implementation of matrix bidiagonalization and the calculation of orthogonal factors by the Householder method in the JCuda environment on a graphics processor is presented, and the algorithm for the central processor is also implemented for comparison. We experimentally measured the acceleration of calculations achieved with the graphics processor in comparison with the implementation on the central processor. We show a speedup of up to 53 times compared to the CPU implementation on a big matrix size, specifically 2048, and even better results when using more advanced GPUs. At the same time, we still experience bigger errors in calculations while using graphics processing units due to synchronization problems. We compared execution on different platforms (Windows 10 and Arch Linux) and discovered that they are almost the same in terms of computation speed. The results have shown that we can achieve better performance on the GPU; however, there are more implementation difficulties with this approach.
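For readers who want a reference point, a plain NumPy version of Householder orthogonalization (a CPU stand-in useful for checking results against a GPU implementation; it is not the authors' JCuda code):

import numpy as np

def householder_qr(A):
    m, n = A.shape
    R = A.astype(float)
    Q = np.eye(m)
    for k in range(min(m, n)):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])  # sign choice avoids cancellation
        norm_v = np.linalg.norm(v)
        if norm_v == 0:
            continue
        v /= norm_v
        # apply the reflector H = I - 2 v v^T to R from the left and to Q from the right
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

A = np.random.rand(6, 4)
Q, R = householder_qr(A)
assert np.allclose(Q @ R, A)  # Q is orthogonal, R is upper triangular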
Naukma Research Papers. Computer Science, Volume 4, pp 72-77; https://doi.org/10.18523/2617-3808.2021.4.72-77

Abstract:
When creating a programming language, it is necessary to define its syntax and semantics. The main task of syntax is to describe all constructions that are elements of the language. For this purpose, a concrete syntax singles out the syntactically correct sequences of characters of the language alphabet, most often as a finite set of rules that generate the infinite set of all language constructions, such as the extended Backus–Naur form (BNF). To describe the semantics of the language, preference is given to the abstract syntax, which in real programming languages is shorter and more obvious than the concrete one. The correspondence between abstract syntax objects and the concrete syntax of the program is established in compilers by the parsing phase. Denotational semantics is used to describe the semantics. First, it records the denotations of the simplest syntactic objects. Then a semantic function is associated with each compound syntactic construction, which computes its value, the denotation, from the denotations of its components. Since the program is a particular syntactic construction, its denotation can be determined using the appropriate semantic function. Note that the program itself is not executed when its denotation is computed. The denotational description of a programming language includes the abstract syntax of its constructions, the denotations, i.e. the meanings of the constructions, and the semantic functions that map elements of the abstract syntax (language constructions) to their denotations (meanings). The use of the functional programming language Haskell as a metalanguage is considered. The Haskell type system is a good tool for constructing abstract syntax. The various possibilities for describing pure functions, which are often the denotations of programming language constructs, are the basis for the effective use of Haskell in describing denotational semantics. The paper provides a formal specification of a simple imperative programming language with integer data, block structure, and the traditional set of operators: assignment, input, output, loop, and conditional. The ability of Haskell to effectively implement parsing, which solves the problem of linking the concrete syntax with the abstract one, allows extending the formal specification of the language to its implementation: a pure function, the interpreter. The work contains all the functions and data types that make up the interpreter of a simple imperative programming language.
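The construction carries over to other typed languages; a compressed Python analogue (our illustration, not the paper's Haskell code) shows abstract syntax as data types and denotations as functions over states:

from dataclasses import dataclass

@dataclass
class Num: val: int
@dataclass
class Add: left: object; right: object
@dataclass
class Assign: name: str; expr: object

def denote_expr(e, state):
    # the denotation of an expression is a function State -> Int
    if isinstance(e, Num): return e.val
    if isinstance(e, Add): return denote_expr(e.left, state) + denote_expr(e.right, state)
    return state[e]  # a bare string is a variable name

def denote_stmt(s, state):
    # the denotation of a statement is a function State -> State
    if isinstance(s, Assign):
        return {**state, s.name: denote_expr(s.expr, state)}
    return state

print(denote_stmt(Assign("x", Add(Num(2), Num(3))), {}))  # {'x': 5}

Note that computing the denotation never "runs" the program in the operational sense; it only composes the meaning functions.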
Semen Gorokhovskyi
Naukma Research Papers. Computer Science, Volume 4, pp 52-55; https://doi.org/10.18523/2617-3808.2021.4.52-55

Abstract:
Image segmentation is a crucial step in image processing and analysis. Image segmentation is the process of splitting one image into many segments, dividing it into pieces that are more representative and easier to examine, such as individual surfaces or items. Image segmentation is used to locate objects and their boundaries. Genetic algorithms are stochastic search methods whose operation is borrowed from the laws of genetics, natural selection, and the evolution of organisms. Their main attractive feature is the ability to solve complex combinatorial search problems effectively, because the parallel exploration of solutions largely eliminates the possibility of getting stuck at a local optimum instead of finding the global one. The point of using genetic algorithms is that each pixel is grouped with other pixels using a distance function based on both local and global, already calculated, segments. Almost every image segmentation algorithm contains parameters that are used to control the segmentation results; a genetic system can dynamically change these parameters to achieve the best performance. To optimize several parameters of the process simultaneously, multi-targeted genetic algorithms were used, which enable finding a diverse collection of solutions with more variables. The Multi-Targeted Genetic Algorithm (MTGA) is a guided random search method built from optimization techniques. It can solve multi-objective optimization problems and explore different parts of the solution space. As a result, a diversified collection of solutions can be found, with more variables optimized at the same time. In this article, several MTGAs were used and compared. Genetic algorithms are a good tool for image processing in the absence of a high-quality labeled data set, which is otherwise either the result of the long work of many researchers or the contribution of large sums of money to obtain an array of data from external sources. In this article, we use genetic algorithms to solve the problem of image segmentation.
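A minimal sketch of the idea in Python, with the simplifying assumption that the only parameter being evolved is a single global threshold and the fitness is the between-class variance of the two resulting segments:

import random

def fitness(threshold, pixels):
    fg = [p for p in pixels if p >= threshold]
    bg = [p for p in pixels if p < threshold]
    if not fg or not bg:
        return 0.0
    mean = lambda xs: sum(xs) / len(xs)
    return len(fg) * len(bg) * (mean(fg) - mean(bg)) ** 2  # between-class variance

def evolve(pixels, pop_size=20, generations=30):
    pop = [random.randint(1, 254) for _ in range(pop_size)]      # candidate thresholds
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, pixels), reverse=True)
        parents = pop[: pop_size // 2]                            # selection
        children = [(random.choice(parents) + random.choice(parents)) // 2  # crossover
                    for _ in range(pop_size - len(parents))]
        pop = parents + [min(254, max(1, c + random.randint(-10, 10)))      # mutation
                         for c in children]
    return max(pop, key=lambda t: fitness(t, pixels))

pixels = [random.gauss(60, 10) for _ in range(500)] + [random.gauss(180, 10) for _ in range(500)]
print(evolve(pixels))  # a threshold separating the two brightness populations

A real segmentation GA would evolve many parameters at once and, in the multi-targeted case, maintain a front of solutions rather than a single best threshold.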
Naukma Research Papers. Computer Science, Volume 4, pp 4-9; https://doi.org/10.18523/2617-3808.2021.4.4-9

Abstract:
The paper investigates a possible generalization of the “state-probability of choice” model so that the generalized model can be applied to the problem of ranking alternatives, either by an individual or by a group of agents. It is shown that the results obtained earlier for the problem of multi-agent choice and decision making by majority of votes can easily be transferred to the problem of multi-agent ranking of alternatives. From distributions of importance values for the ranking problem, one can move on to similar models for choice and voting with the help of the well-known exponential normalization of rows. We thus regard two types of matrices, both of which belong to the class of matrices called balanced rectangular stochastic matrices. For such matrices, the sum of elements in each row equals 1, and all columns have equal sums of elements. Both types are involved in the two-level procedure regarded in this paper. First, a matrix representing all possible distributions of importance among alternatives is formed, and second, a “state-probability of choice” matrix is obtained on its basis. For forming the matrix of states, the rows of which correspond to possible distributions of importance, applying pairwise comparisons and the Analytic Hierarchy Process is suggested. Parameterized transitive scales are regarded, with the parameter affecting the spread of importance between the best and the worst alternatives. For further deriving the matrices of choice probabilities, another parameter is introduced, which reflects the degree of the agent’s decisiveness. The role of both parameters is discussed and illustrated with examples in the paper. Results of numerical experiments are reported which illustrate obtaining distributions of importance on the basis of the Analytic Hierarchy Process and which are connected with reaching a situation of dynamic equilibrium of alternatives, i.e. the situation when the alternatives are considered to be of equal value.
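A small sketch of the exponential row normalization step that turns a matrix of importance distributions into a row-stochastic “state-probability of choice” matrix (the parameter name beta for the decisiveness parameter is our assumption):

import numpy as np

def choice_probabilities(importance, beta=1.0):
    # exponential normalization of rows; larger beta concentrates
    # probability on the most important alternative in each state
    e = np.exp(beta * importance)
    return e / e.sum(axis=1, keepdims=True)  # each row now sums to 1

states = np.array([[0.5, 0.3, 0.2],
                   [0.2, 0.5, 0.3]])         # distributions of importance
print(choice_probabilities(states, beta=5.0))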
Naukma Research Papers. Computer Science, Volume 4, pp 44-47; https://doi.org/10.18523/2617-3808.2021.4.44-47

Abstract:
A couple of decades ago, data rates on the network were measured in kilobytes per second, and even then online game developers had problems with packet loss and transmission delays. Now the transfer rate is hundreds of times higher, and the problem of delay compensation is even more relevant. For many dynamic online games, a transmission delay of as little as 20 ms can be quite noticeable, negatively affecting the gameplay and the emotions of the game, which can repel players. The problem is exacerbated by the fact that, along with the need to compensate for packet delivery time, there are other, non-network factors on the client side that are beyond the developers’ control and make the total delay 5-10 ms longer. Because of this, getting rid of network delays as much and as well as possible becomes a necessity, and developers are forced to look for optimal ways to solve this problem. The problem statement is as follows: to review the causes of delays in online games and the possible solutions, as well as the advantages and disadvantages of particular approaches. The problem is considered at the four levels of the TCP/IP network model, as well as at the application level. The approaches are given for the most commonly used protocols of each layer, but the basic ideas can easily be transferred to other implementations. The main causes of delay under consideration are propagation delay, router queue delay, transmission delay, and processing delays. This article shows the impact of network delays on online games and the ways to compensate for them, along with the theory of network data transmission protocols and ways to solve the problems that arise in the development of the algorithms. The recommendations for solving the compensation problem can be taken into account when designing and launching online shooters, strategies, etc. These techniques make it possible to minimize the overall packet transfer delay in the network, so that the game on the client looks as if the player were playing in single-player mode.
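As one concrete example of an application-level technique, a sketch of entity interpolation, where remote objects are rendered slightly in the past and positions are interpolated between the two snapshots bracketing the render time (a generic illustration, not tied to any particular engine):

def interpolate(snapshots, render_time):
    # snapshots: list of (timestamp, position) pairs sorted by timestamp;
    # render_time is typically "now minus ~100 ms" so two snapshots bracket it
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            k = (render_time - t0) / (t1 - t0)
            return p0 + k * (p1 - p0)
    return snapshots[-1][1]  # fallback: hold the last known position

snaps = [(0.00, 0.0), (0.05, 1.0), (0.10, 2.5)]
print(interpolate(snaps, render_time=0.075))  # position at 75 ms -> 1.75

The trade-off is deliberate: the player sees a slightly delayed but smooth world instead of objects jumping between raw network updates.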
Naukma Research Papers. Computer Science, Volume 4, pp 16-22; https://doi.org/10.18523/2617-3808.2021.4.16-22

Abstract:
The SVD (Singular Value Decomposition) algorithm is used in recommendation systems, machine learning, image processing, and various algorithms for working with matrices, which can be very large (Big Data); given the peculiarities of this algorithm, it can be performed on the large number of computing threads that only video cards provide. CUDA is a parallel computing platform and application programming interface model created by Nvidia. It allows software developers and software engineers to use a CUDA-enabled graphics processing unit for general-purpose processing, an approach termed GPGPU (general-purpose computing on graphics processing units). The GPU provides much higher instruction throughput and memory bandwidth than the CPU within a similar price and power envelope. Many applications leverage these higher capabilities to run faster on the GPU than on the CPU. Other computing devices, like FPGAs, are also very energy efficient, but they offer much less programming flexibility than GPUs. The developed modification uses the CUDA architecture, which is intended for a large number of simultaneous calculations, allowing matrices of very large sizes to be processed quickly. The parallel SVD algorithm for a tridiagonal matrix based on Givens rotations provides a high accuracy of calculations. The algorithm also has a number of memory and multiplication optimizations that significantly reduce the computation time by discarding empty iterations. This article proposes an approach that will reduce the computation time and, consequently, resources and costs. The developed algorithm can be used through a simple and convenient API in C++ and Java, and it will be further improved by using dynamic parallelism or by parallelizing the multiplication operations. The obtained results can also be used by other developers for comparison, as all conditions of the research are described in detail and the code is freely available.
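The basic building block is easy to show on the CPU; a NumPy sketch of a single Givens rotation that zeroes one subdiagonal element (a stand-in illustration, not the CUDA kernels themselves):

import numpy as np

def givens(a, b):
    # return c, s such that [[c, s], [-s, c]] @ [a, b] = [r, 0]
    if b == 0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

A = np.array([[3.0, 1.0, 0.0],
              [2.0, 4.0, 1.0],
              [0.0, 3.0, 5.0]])
c, s = givens(A[0, 0], A[1, 0])
G = np.array([[c, s], [-s, c]])
A[0:2, :] = G @ A[0:2, :]   # A[1, 0] is now (numerically) zero
print(A)

Each rotation touches only two rows, which is exactly what makes sweeps of such rotations attractive for mapping onto many GPU threads.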
Naukma Research Papers. Computer Science, Volume 4, pp 101-107; https://doi.org/10.18523/2617-3808.2021.4.101-107

Abstract:
Communication networks are complex information systems influenced by a vast number of factors. It is critically important to forecast the paths that data take in order to verify the network, check its security, and plan its updates. A model allows exploring the processes that take place in the network without affecting the performance and availability of the real network itself. With modelling, it becomes possible to investigate the results of infrastructural changes introduced to the network before actually implementing them. It is important to be able to formally convert a real network description into a model definition that preserves all data significant for network operation and skips the data that is not. Outlining the rules for such a conversion and using a limited set of basic functional components provide the ground for automatic model creation for networks of different levels of complexity. The proposed approach to modelling communication networks is based on decomposing the overall function of every particular real network component into a set of functions that belong to some predefined basic set. The functions of the basic set include L3 routing, L2 switching, packet filtering, NAT, etc. The model of a real network component is defined as a group of functional nodes, each of which implements some function from the basic set. The configuration and current state of network components that influence their operation are also decomposed into elements, each of which relates to some particular functional node. The configuration of network components is modelled as a set of configuration storage elements, and the current state is modelled as a set of current-state storage elements. The links that connect real network components, and the links that connect functional nodes in the model, are presented as single-direction channels that propagate L2 frames, thus simplifying the model by excluding the physical layer (L1) from the scope. Using the proposed approach to modelling may allow formalizing the conversion of a real network description to a model, thus making automated modelling possible. With a sufficient basic set of functional nodes, it is possible to model a network containing components of any level of complexity.
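A compact Python sketch of the decomposition idea (class and field names are our assumptions): functional nodes from the basic set, each with its own configuration or state element, wired by one-way channels carrying L2 frames:

class Channel:
    def __init__(self):
        self.queue = []                   # one-way link carrying L2 frames
    def send(self, frame):
        self.queue.append(frame)

class L2Switch:
    def __init__(self, mac_table):
        self.mac_table = mac_table        # current-state storage element
    def process(self, frame):
        out = self.mac_table.get(frame["dst_mac"])
        if out:
            out.send(frame)

class PacketFilter:
    def __init__(self, blocked):
        self.blocked = blocked            # configuration storage element
    def process(self, frame, out):
        if frame["src_mac"] not in self.blocked:
            out.send(frame)

# A real switch with filtering is modelled as a chain of two functional nodes.
inner, wire = Channel(), Channel()
filter_node = PacketFilter(blocked={"de:ad"})
switch_node = L2Switch(mac_table={"aa:bb": wire})
filter_node.process({"src_mac": "11:22", "dst_mac": "aa:bb"}, out=inner)
for frame in inner.queue:
    switch_node.process(frame)
print(len(wire.queue))                    # 1: the frame passed both nodes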
Naukma Research Papers. Computer Science, Volume 4, pp 93-97; https://doi.org/10.18523/2617-3808.2021.4.93-97

Abstract:
Today, mathematics plays a huge part in our everyday life. But due to poor school education and a lack of open-access resources, many students find it difficult to be fully prepared for the independent external evaluation in mathematics, especially in geometry. Although much has already been done to achieve higher knowledge results, many students still have gaps in understanding simple problem solving. Clearly, geometry requires a more fundamental and visual implementation in the studying process than algebra in order to increase the overall knowledge level of Ukrainian applicants for higher education. Students often do not have access in their schools to the innovative studying instruments necessary for successful completion of geometry classes, which is why they receive weak results in tests. In this research, we concentrate on planimetry problems, because they can easily be presented in written form. After analyzing all the ways of describing a problem, the best option for the system is open-type problems with a short answer. The article concentrates on creating a graphical interface module, connecting it to the existing language processing module, and introducing a recommendation system, which together demonstrate a new fundamental instrument that can change the learning technique and give a comprehensive way of explaining geometry problems. The created system receives an open-type planimetry problem in the Ukrainian language, processes it using the NLP module, and transfers the data directly to the interface module, which creates an image of the problem. The student can then try to draw all the required figures, while the system continuously checks the progress. Recommendations (hints) can be given by the system during the process. The interface and NLP modules were created separately and independently, using different programming languages. For that purpose, we use an intermediate stage, a JSON file, which transfers the processed information.
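A hypothetical shape of that intermediate JSON file (the field names below are illustrative assumptions; the article does not publish the schema):

import json

parsed_problem = {
    "figures": [
        {"type": "triangle", "label": "ABC"},
        {"type": "segment", "label": "AM", "role": "median"},
    ],
    "given": [{"object": "AB", "value": 6, "unit": "cm"}],
    "find": ["AM"],
}
# the NLP module writes this file; the interface module reads it back
with open("problem.json", "w", encoding="utf-8") as f:
    json.dump(parsed_problem, f, ensure_ascii=False, indent=2)

A language-neutral file like this is what lets the two modules be written independently and in different programming languages.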
Yury Yuschenko
Naukma Research Papers. Computer Science, Volume 4, pp 78-87; https://doi.org/10.18523/2617-3808.2021.4.78-87

Abstract:
In the Address Programming Language (1955), the concept of indirect addressing of higher ranks (pointers) was introduced, which allows the arbitrary connection of the computer’s RAM cells. This connection is based on standard sequences of cell addresses in RAM and on addressing sequences determined by the programmer through indirect addressing. These two types of sequences allow programmers to establish an arbitrary connection of RAM cells with arbitrary content: data, addresses, subroutines, program labels, etc. The formed connections of cells can therefore refer to each other. The result of connecting cells with arbitrary content and any structure is called a tree-shaped format. Tree-shaped formats allow programmers to combine data into complex data structures that are similar to abstract data types. For tree-shaped formats, the concept of a “review scheme” is defined, which is similar to the concept of tree traversal. Programmers can define multiple review schemes for one tree-shaped format, and can create tree-shaped formats over the connected cells to define the desired review schemes for them. The work gives a modern interpretation of the concept of tree-shaped formats in Address Programming. Tree-shaped formats are based on the “stroke-operation” (pointer dereference), which was implemented in hardware in the command system of the computer “Kyiv”. The group address-modification operations of the computer “Kyiv” accelerate the processing of tree-shaped formats and are designed as organized cycles, like those in high-level imperative programming languages. Thanks to the operations with indirect addressing, the commands of the computer “Kyiv” have more capabilities than the first high-level programming language, Plankalkül. The machine commands of the computer “Kyiv” allow direct access to the i-th element of a “list” by its serial number, in the same way as the i-th element of an array is accessed by its index. The given examples of singly linked lists show the features of tree-shaped formats and their differences from abstract data types. The article opens a new branch of theoretical research, the purpose of which is to analyze the expediency of partially including Address Programming in modern programming languages.
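A toy Python model of these ideas (our illustration, not the original notation): RAM as an address-to-content map, the stroke-operation as dereference, and a singly linked “list” built from connected cells:

ram = {}

def stroke(addr):
    # the "stroke-operation": return the content of the cell at addr
    return ram[addr]

# each node occupies two cells: the value at addr, the address
# of the next node at addr + 1; 0 marks the end of the list
ram[100], ram[101] = "a", 200
ram[200], ram[201] = "b", 300
ram[300], ram[301] = "c", 0

def element(head, i):
    # access the i-th list element by its serial number
    addr = head
    for _ in range(i):
        addr = stroke(addr + 1)
    return stroke(addr)

print(element(100, 2))  # -> "c"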
Naukma Research Papers. Computer Science, Volume 4, pp 56-59; https://doi.org/10.18523/2617-3808.2021.4.56-59

Abstract:
Sometimes in practice it is necessary to calculate the probability of an uncertain cause, taking into account some observed evidence. For example, we would like to know the probability of a particular disease when we observe the patient’s symptoms. Such problems are often complex, with many interrelated variables: there may be many symptoms and even more potential causes. In practice, it is usually possible to obtain only the inverse conditional probability, the probability of the evidence given the cause, i.e. the probability of observing the symptoms if the patient has the disease. Intelligent systems must reason about their environment. For example, a robot needs to know about the possible outcomes of its actions, and a medical expert system needs to know which causes lead to which consequences. Intelligent systems began to use probabilistic methods to deal with the uncertainty of the real world. Instead of building a special system of probabilistic reasoning for each new program, we would like a common framework that would allow probabilistic reasoning in any new program without rebuilding everything from scratch. This justifies the relevance of the developed genetic algorithm. Bayesian networks, which first appeared in the work of Judea Pearl and his colleagues in the late 1980s, offer just such an independent basis for plausible reasoning. This article presents a genetic algorithm for learning the structure of a Bayesian network that searches the space of graphs using mutation and crossover operators. The algorithm can be used as a quick way to learn the structure of a Bayesian network with as few constraints as possible.
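A sketch of how such graph-space operators might look in Python, under the assumption (ours, not necessarily the paper's encoding) that a candidate structure is a 0/1 adjacency matrix and offspring are kept only if they remain acyclic:

import random

def is_dag(adj):
    n, state = len(adj), [0] * len(adj)   # 0 unseen, 1 visiting, 2 done
    def visit(u):
        if state[u] == 1: return False    # back edge: a cycle
        if state[u] == 2: return True
        state[u] = 1
        ok = all(visit(v) for v in range(n) if adj[u][v])
        state[u] = 2
        return ok
    return all(visit(u) for u in range(n))

def mutate(adj):
    i, j = random.sample(range(len(adj)), 2)
    adj[i][j] ^= 1                        # add or remove the edge i -> j
    return adj

def crossover(a, b):
    # the child inherits each node's parent-set row from one of the parents
    return [random.choice((a, b))[i][:] for i in range(len(a))]

parent = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
child = mutate([row[:] for row in parent])
print(is_dag(child))                      # cyclic offspring would be discarded

A full implementation would score each surviving structure against the data (e.g. with a likelihood-based criterion) to drive the selection step.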
Naukma Research Papers. Computer Science, Volume 3, pp 149-153; https://doi.org/10.18523/2617-3808.2020.3.149-153

Abstract:
This article addresses the problem of collecting and managing the contacts of the NaUKMA Faculty of Informatics. The paper gives an overview of the architecture of the chosen application and describes the steps of deploying the Odoo CRM system. In addition, integration with this system is demonstrated using the alumni portal as an example. Material received 10.06.2020
Naukma Research Papers. Computer Science, Volume 3, pp 141-148; https://doi.org/10.18523/2617-3808.2020.3.141-148

Abstract:
This article is devoted to multimedia information retrieval systems, in particular comparing them in terms of functionality. User needs are considered, and problems in private multimedia information retrieval systems are identified. The issues most important to the end user are analyzed: missing metadata, linking a photograph to its author, and linking a set of photographs to a given type of event. For each problem, an approach to its solution, algorithms, and the appropriate tools are proposed. Human participation in this process is also described for cases where a fully automated process is impossible. Material received 13.05.2020
Yury Yuschenko
Naukma Research Papers. Computer Science, Volume 3, pp 138-140; https://doi.org/10.18523/2617-3808.2020.3.138-140

Abstract:
The advantages and disadvantages of different types of keyboards are compared, and a fundamentally new kind of keyboard is proposed: a virtual gesture-recognition keyboard that can adapt to user preferences; users may, at will, teach the keyboard to recognize their gestures as presses of particular keys. The paper describes the developed prototype of this virtual keyboard and draws conclusions about the usability of such keyboards. The proposed technology of gesture recognition and keyboard customization can be applied to create other gesture-based input devices, such as a computer mouse, digitizer, joystick, or any other game controller. Material received 12.06.2020
Naukma Research Papers. Computer Science, Volume 3, pp 132-137; https://doi.org/10.18523/2617-3808.2020.3.132-137

Abstract:
The article reviews the relevance of question-answering systems and the services for building them: Dialogflow, IBM Watson Assistant, Microsoft QnA Maker, and LUIS. The requirements and the peculiarities of creating a question-answering system in each service are given. The work may be of interest to researchers in the field of question-answering systems and cloud services. Material received 09.06.2020
Naukma Research Papers. Computer Science, Volume 3, pp 127-131; https://doi.org/10.18523/2617-3808.2020.3.127-131

Abstract:
The article studies a multi-criteria optimization problem solved with the Analytic Hierarchy Process. The situation considered is one where student works (projects) of approximately the same quality must be compared and ranked. It is demonstrated that in such cases the classical Analytic Hierarchy Process proposed by T. Saaty yields rather “coarse” numerical results (global priorities), making it appear that one student’s work significantly outweighs another’s (although this is not the case, since the works are of approximately the same level). An alternative pairwise comparison scale is proposed that yields more adequate numerical results in such cases. The corresponding numerical computations were carried out with the author’s software system, and the results are presented as screenshots. Material received 20.05.2020
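For context, the standard priority computation that the article’s alternative scale plugs into: the normalized principal eigenvector of a pairwise comparison matrix (a generic NumPy sketch, not the author’s software system):

import numpy as np

def ahp_weights(pairwise):
    # principal eigenvector of the reciprocal comparison matrix,
    # normalized so the priorities sum to 1
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

P = np.array([[1.0, 2.0, 0.5],
              [0.5, 1.0, 0.25],
              [2.0, 4.0, 1.0]])   # P[i][j]: how much alternative i outweighs j
print(ahp_weights(P))            # global priorities of the three alternatives

The choice of scale determines the entries of P, which is exactly where a coarse scale can exaggerate small quality differences.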
Naukma Research Papers. Computer Science, Volume 3, pp 121-126; https://doi.org/10.18523/2617-3808.2020.3.121-126

Abstract:
This article reviews methods of forecasting future sales levels and the possibilities of using them in modern enterprise resource planning systems. Using Dynamics 365 Business Central as an example, the practical application of such methods is considered, including the use of machine learning methods. The existing solution based on time-series analysis is also examined, and an extension using cluster analysis (clustering) is proposed. Material received 10.06.2020
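A small sketch of how such a clustering extension could look (our reading of the proposal, with scikit-learn as an assumed stand-in for the ERP internals): group items by the shape of their sales history and derive a shared seasonal profile per cluster:

import numpy as np
from sklearn.cluster import KMeans

history = np.random.rand(50, 12)                 # 50 items x 12 months (demo data)
normalized = history / history.sum(axis=1, keepdims=True)   # compare shapes, not volumes
labels = KMeans(n_clusters=4, n_init=10).fit_predict(normalized)
for c in range(4):
    profile = history[labels == c].mean(axis=0)  # cluster-level seasonal profile
    print(c, profile.round(2))

Forecasting per cluster rather than per item is one way to stabilize predictions for items with sparse individual histories.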
Mykola Glybovets
Naukma Research Papers. Computer Science, Volume 3; https://doi.org/10.18523/2617-3808.2020.3.3

Serhiy Borozennyi, Mykita Nyverovskyi
Naukma Research Papers. Computer Science, Volume 3, pp 107-113; https://doi.org/10.18523/2617-3808.2020.3.107-113

Abstract:
The paper considers the LSA (latent semantic analysis) method, in particular its most common variant, based on the singular value decomposition (SVD) of a matrix. On its basis, a clustering algorithm for problems is implemented and applied to the example of clustering geometry problems. Material received 11.06.2020
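A minimal pipeline of the kind described, sketched with scikit-learn (the library choice and the toy data are ours): a TF-IDF term-document matrix, truncated SVD into the latent semantic space, then k-means:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

problems = [
    "find the area of a triangle with base 6 and height 4",
    "find the area of a circle of radius 3",
    "find the angle of a triangle given two angles",
]
X = TfidfVectorizer().fit_transform(problems)        # term-document matrix
Z = TruncatedSVD(n_components=2).fit_transform(X)    # latent semantic space (SVD)
print(KMeans(n_clusters=2, n_init=10).fit_predict(Z))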
Volodymyr Lyashko
Naukma Research Papers. Computer Science, Volume 3, pp 102-106; https://doi.org/10.18523/2617-3808.2020.3.102-106

Abstract:
The use of the BFGS method and its projected variant L-BFGS-B is considered for minimizing a nonlinear function corresponding to the solution of a system of five nonlinear equations, three of which are integral equations depending on unknown parameters of the integrands and on unknown upper limits of the definite integrals. This system corresponds to the problem of constructing an S-shaped curve that passes through two given points with given tangent slopes at them and provides a given tangent slope at an intermediate point with a given abscissa. It is shown that the BFGS method is effective if the starting point is chosen in a neighborhood of the minimum point, where the minimized function is approximated sufficiently accurately by a convex quadratic function. Material received 15.06.2020
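The general shape of such a minimization with SciPy’s L-BFGS-B, shown on a stand-in two-equation system rather than the paper’s five-equation one:

import numpy as np
from scipy.optimize import minimize

def objective(x):
    # a stand-in system F(x) = 0 turned into a least-squares objective
    f1 = x[0] + x[1] - 3.0
    f2 = x[0] * x[1] - 2.0
    return f1**2 + f2**2

res = minimize(objective, x0=np.array([0.9, 1.8]),   # start near the minimum
               method="L-BFGS-B", bounds=[(0, 5), (0, 5)])
print(res.x)   # ~ [1, 2]; a distant start may converge elsewhere or slowly

The bounds argument is what the projected (L-BFGS-B) variant adds over plain BFGS, and the choice of starting point matters exactly as the abstract notes.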
Naukma Research Papers. Computer Science, Volume 3, pp 97-101; https://doi.org/10.18523/2617-3808.2020.3.97-101

Abstract:
The article analyzes existing approaches and algorithms for image processing and their further use for processing photos of clothing, converting them into texture photos for 3D models. The software part described in this article is the backend of an online service for trying clothes on 3D models in the browser, the global result of which will be a test version of this service. Material received 10.06.2020
Naukma Research Papers. Computer Science, Volume 3, pp 93-96; https://doi.org/10.18523/2617-3808.2020.3.93-96

Abstract:
The article analyzes existing approaches and ways of displaying 3D models in the browser and applying a texture to a model, as well as methods of creating a modern javascript library that can be integrated into any web application. The software part described in this work is the frontend part of an online service for trying clothes on 3D models in the browser, the global result of which will be a test version of this service. Material received 10.06.2020
Maksym Zhuk
Naukma Research Papers. Computer Science, Volume 3, pp 88-92; https://doi.org/10.18523/2617-3808.2020.3.88-92

Abstract:
The paper describes the key aspects of the applied development of high-load web maps, using a real-estate search map as an example. When developing such a system, it is important to understand the key requirements and address them in the architectural solution. The following key aspects were taken into account when designing the architecture: geocoding, clustering, map provider selection, and filtering. Displaying a large number of objects is one of the key tasks. As a result, a technical architectural solution is proposed, with justification of the system elements used, taking into account possible adaptations of the system and economic feasibility. Material received 31.05.2020
Yury Yuschenko
Naukma Research Papers. Computer Science, Volume 3, pp 83-87; https://doi.org/10.18523/2617-3808.2020.3.83-87

Abstract:
The paper considers multidimensional address sorting, in particular several methods of its implementation. Several data structures for storing and using the results of multidimensional address sorting are described. Using an implemented software project as an example, the usefulness and expediency of multidimensional address sorting for solving classification problems over sets of grouped data is demonstrated. The advantages of multidimensional address sorting in solving clustering problems are identified, in comparison with the methods that are widely used today. Material received 10.06.2020
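For readers unfamiliar with the idea, a plain-Python sketch of sorting grouped data by several keys at once and reading the groups off the result (an illustration of multi-key grouping, not the paper’s address-based structures):

from itertools import groupby
from operator import itemgetter

records = [
    {"region": "Kyiv", "year": 2020, "value": 7},
    {"region": "Lviv", "year": 2019, "value": 4},
    {"region": "Kyiv", "year": 2019, "value": 5},
]
keys = itemgetter("region", "year")   # two sorting dimensions at once
for key, group in groupby(sorted(records, key=keys), key=keys):
    print(key, [r["value"] for r in group])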
Naukma Research Papers. Computer Science, Volume 3, pp 75-82; https://doi.org/10.18523/2617-3808.2020.3.75-82

Abstract:
Accurate shadow detection in an image is a difficult task, since it is hard to tell whether darkening or gray color is caused by a shadow. This article proposes a method of removing shadows from an image using generative adversarial neural networks. The network is trained without supervision, i.e. it does not depend on labor-intensive data collection and labeling. The shadow removal method is based on unsupervised image-to-image translation between different domains. Two networks were used: the first to add shadows to an image, and the second to remove them. The ISTD dataset was used for clear evaluation, since it contains ground-truth shadow-free images as well as shadow masks. Material received 10.06.2020
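The training signal that replaces paired labels in such two-network schemes is a cycle-consistency loss; a PyTorch-style sketch (G_add and G_remove stand for the two generator networks; the adversarial terms and discriminators are omitted):

import torch.nn.functional as F

def cycle_loss(G_add, G_remove, shadow_img, clean_img):
    # removing then re-adding a shadow (and vice versa) should reproduce
    # the original image, so no paired shadow/shadow-free labels are needed
    loss = F.l1_loss(G_add(G_remove(shadow_img)), shadow_img)
    loss = loss + F.l1_loss(G_remove(G_add(clean_img)), clean_img)
    return loss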
Kyrylo Gorokhovskyi, Oleh Franchuk
Naukma Research Papers. Computer Science, Volume 3, pp 69-74; https://doi.org/10.18523/2617-3808.2020.3.69-74

Abstract:
The article gives a definition of distributed systems and reviews Monolithic, Microservice, and Serverless architectures. The process of a technical audit is described, and the aspects of a system that must be considered during an audit are specified. Quality attributes are reviewed. Audit checklists based on industry best practices are given, which helps in preparing for a technical audit. Material received 10.06.2020