Search results for "Use of Artificial Intelligence" (588 results).
Ilya Baran (Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar Street, Cambridge, MA 02139, USA), Erik D. Demaine (Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology), Peyman Afshani, Jérémy Barbay, Timothy M. Chan
International Journal of Computational Geometry & Applications, Volume 15, pp 327-350; https://doi.org/10.1142/s0218195905001737

Abstract:
We consider a general model for representing and manipulating parametric curves, in which a curve is specified by a black box mapping a parameter value between 0 and 1 to a point in Euclidean d-space. In this model, we consider the nearest-point-on-curve and farthest-point-on-curve problems: given a curve C and a point p, find a point on C nearest to p or farthest from p. In the general black-box model, no algorithm can solve these problems. Assuming a known bound on the speed of the curve (a Lipschitz condition), the answer can be estimated up to an additive error of ε using O(1/ε) samples, and this bound is tight in the worst case. However, many instances can be solved with substantially fewer samples, and we give algorithms that adapt to the inherent difficulty of the particular instance, up to a logarithmic factor. More precisely, if OPT(C, p, ε) is the minimum number of samples of C that every correct algorithm must perform to achieve tolerance ε, then our algorithm performs O(OPT(C, p, ε) · log(ε⁻¹/OPT(C, p, ε))) samples. Furthermore, any algorithm requires Ω(k · log(ε⁻¹/k)) samples for some instance C′ with OPT(C′, p, ε) = k; except that, for the nearest-point-on-curve problem when the distance between C and p is less than ε, OPT is 1 but the upper and lower bounds on the number of samples are both Θ(1/ε). When bounds on relative error are desired, we give algorithms that perform O(OPT · log(2 + (1 + ε⁻¹) · m⁻¹/OPT)) samples (where m is the exact minimum or maximum distance from p to C) and prove that Ω(OPT · log(1/ε)) samples are necessary on some problem instances.
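The worst-case O(1/ε) uniform-sampling baseline described in the abstract can be sketched directly. The snippet below is an illustrative reconstruction of that baseline (not the paper's adaptive algorithm), assuming the curve is given as a Python callable and the Lipschitz bound L on its speed is known.

```python
import math

def nearest_point_estimate(curve, p, lipschitz, eps):
    """Estimate the minimum distance from point p to curve C(t), t in [0, 1].

    Samples the curve uniformly at parameter spacing eps / L, so consecutive
    samples are at most eps apart in space; the sampled minimum is then within
    an additive eps of the true minimum distance.
    """
    n = max(1, math.ceil(lipschitz / eps))  # O(1/eps) samples
    best = float("inf")
    for i in range(n + 1):
        t = i / n
        best = min(best, math.dist(curve(t), p))
    return best

# Example: unit circle traversed once, so its speed (Lipschitz bound) is 2*pi.
circle = lambda t: (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))
d = nearest_point_estimate(circle, (2.0, 0.0), lipschitz=2 * math.pi, eps=0.01)
# The true nearest distance is 1.0; the estimate is within 0.01 of it.
```

The adaptive algorithms of the paper improve on this by spending samples only where the Lipschitz bound leaves the answer uncertain.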
Peijun Yuan, Ruichen Hu, Xue Zhang (State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology), et al.
Published: 4 June 2021
Journal: Elife
Abstract:
Temporal regularity is ubiquitous and essential to guiding attention and coordinating behavior within a dynamic environment. Previous researchers have modeled attention as an internal rhythm that may entrain to first-order regularity from rhythmic events to prioritize information selection at specific time points. Using the attentional blink paradigm, here we show that higher-order regularity based on rhythmic organization of contextual features (pitch, color, or motion) may serve as a temporal frame to recompose the dynamic profile of visual temporal attention. Critically, such attentional reframing effect is well predicted by cortical entrainment to the higher-order contextual structure at the delta band as well as its coupling with the stimulus-driven alpha power. These results suggest that the human brain involuntarily exploits multiscale regularities in rhythmic contexts to recompose dynamic attending in visual perception, and highlight neural entrainment as a central mechanism for optimizing our conscious experience of the world in the time dimension.
International Journal of Pattern Recognition and Artificial Intelligence, Volume 21, pp 207-224; https://doi.org/10.1142/s0218001407005429

Abstract:
Personalization is the ability to retrieve information content related to a user's profile and to facilitate their information-seeking activities. Several environments, such as the Web, take advantage of personalization techniques because of the large amount of available information. For this reason, there is growing interest in providing automated personalization during human-computer interaction. In this paper we introduce a new approach to user modeling, grounded in the Search of Associative Memory (SAM) theory. By means of implicit feedback techniques, the approach is able to unobtrusively recognize user needs and monitor the user's working context in order to provide information useful for personalizing traditional search tools and implementing recommender systems. Experimental results based on precision and recall measures indicate improvements over traditional user models.
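The precision and recall measures used in the evaluation above are standard set-based quantities; a minimal sketch follows (the function name and document IDs are illustrative, not from the paper).

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of a retrieved document set against a relevant set."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)          # correctly retrieved documents
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall(retrieved={"d1", "d2", "d3", "d4"},
                        relevant={"d2", "d4", "d5"})
# p = 2/4 = 0.5, r = 2/3
```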
Stephan Grimm (Siemens Technology, Munich), Alois Haselböck (Siemens AG Österreich), Kevin Perry (Siemens Digital Industries, Factory Automation), Jörn Peschke (Siemens Digital Industries, Factory Automation), Oliver Scharm, Jens Schnittger (Siemens Digital Industries, Factory Automation)
Published: 1 February 2023
At – Automatisierungstechnik, Volume 71, pp 151-163; https://doi.org/10.1515/auto-2022-0111

Abstract:
The necessity for increased flexibility in production and handling of high-variant product families is a strong trend in Industry 4.0 scenarios. A promising approach to achieve this is to replace the rigid programming of the manufacturing management systems by a declarative description of machines and their functionality. This allows for a dynamic allocation of tasks to resources. This paper reports the ongoing work on applying the concept of capabilities and skills to an industrial application example in order to investigate its potential benefits. To apply the abstract concept of capabilities and skills to the production scenario it is implemented using specific technologies. These include Semantic Web ontologies, constraint solving methods and OPC UA for skill invocation and communication.
Thomas Barth (SmartFactory Kaiserslautern), Jonathan Nußbaum (Chair of Machine Tools and Control Systems, Technical University of Kaiserslautern), Jesko Hermann (SmartFactory Kaiserslautern), Martin Ruskowski (German Research Center for Artificial Intelligence (DFKI); Technical University of Kaiserslautern)
Published: 1 February 2023
At – Automatisierungstechnik, Volume 71, pp 163-175; https://doi.org/10.1515/auto-2022-0115

Abstract:
Cyber-Physical Production Modules (CPPMs) must be described by vendor-independent and machine-readable standardized information models. Standards make CPPMs adaptable and interchangeable at different company levels to enable flexible production. We present an OPC UA information model for CPPMs based on the relevant OPC UA Companion Specifications. Combined with the skill concept, a transport system is controlled in an order-driven production. Additionally, we link different state machines to facilitate utilization functions for mission distribution between transport units.
Pikalov V.A., Institute of Artificial Intelligence Problems of MES and NAS of Ukraine, Klymenko M.S.
Artificial Intelligence, Volume 25, pp 65-71; https://doi.org/10.15407/jai2020.01.065

Abstract:
This article proposes using structural descriptions of graphical objects to solve the urgent task of trajectory analysis. A range of modern trajectory-analysis approaches were analyzed, and the best one, based on Graph Convolutional Neural Networks and the Suffix Tree Clustering algorithm, was chosen. Ways to reduce the computational resources required by this neural-network approach are described. The network was adapted to analyze structural descriptions, and the advantages of this approach are shown.
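The abstract names Graph Convolutional Neural Networks; the layer below is a NumPy sketch of the standard GCN propagation rule (symmetric normalization with self-loops), offered as background on the technique rather than the authors' specific architecture. The toy graph and identity weights are assumptions for illustration.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy graph of 3 trajectory key-points in a chain, 2 input features each.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.array([[1., 0.], [0., 1.], [1., 1.]])
W = np.eye(2)                                 # identity weights for illustration
out = gcn_layer(A, H, W)                      # shape (3, 2), non-negative
```

Stacking such layers lets each node's features aggregate information from progressively larger graph neighborhoods.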
Carlos D. Barranco (Division of Computer Science, School of Engineering, Pablo de Olavide University, Utrera Rd. Km. 1, 41013 Sevilla, Spain), Jesús R. Campaña (Department of Computer Science and Artificial Intelligence, University of Granada), Olga Pons, Sergio Jaime-Castillo, Esther Jiménez, Sven Helmer
International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, Volume 17, pp 1-23; https://doi.org/10.1142/s0218488509006005

Abstract:
This paper proposes an indexing procedure for improving the performance of query processing on a fuzzy database. It focuses on the case when a necessity-measured atomic flexible condition is imposed on the values of a fuzzy numerical attribute. The proposal is to apply a classical indexing structure for crisp numerical data, a B⁺-tree, combined with a Hilbert curve. The use of such a common indexing technique makes its incorporation into current systems straightforward. The efficiency of the proposal is compared with that of another indexing procedure for similar fuzzy data and flexible query types. Experimental results reveal that the performance of the proposed method is similar to, and more stable than, that of its competitor.
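The key ingredient of such an index is mapping 2-D values to a 1-D B⁺-tree key along a Hilbert curve, so that points close in the plane tend to get close keys. The classic bit-manipulation routine below computes that key on a grid of side 2^order; it is the standard formulation, not necessarily the paper's exact construction.

```python
def hilbert_index(order, x, y):
    """Map 2-D grid coordinates to their position along a Hilbert curve
    covering a grid of side 2**order (classic top-down bit-twiddling form)."""
    d = 0
    s = 1 << (order - 1)
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)   # which quadrant, in curve order
        if ry == 0:                    # rotate/reflect the sub-square
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

# On a 2x2 grid the curve visits (0,0), (0,1), (1,1), (1,0) in that order.
keys = [hilbert_index(1, x, y) for (x, y) in [(0, 0), (0, 1), (1, 1), (1, 0)]]
```

The resulting integer can be stored directly as a B⁺-tree key, which is what makes the scheme easy to retrofit onto existing database indexes.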
National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, World Data Center (WDC) for Geoinformatics and Sustainable Development, Institute for Information Recording of the National Academy of Sciences of Ukraine, Institute of Special Communications and Information Protection of the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Institute of Artificial Intelligence Problems under MES of Ukraine and NAS of Ukraine
Artificial Intelligence, Volume 27, pp 260-268; https://doi.org/10.15407/jai2022.01.260

Abstract:
This paper considers the use of modern intelligent technologies in information retrieval systems. A general scheme for the implementation of Internet search engines is presented, along with existing and prospective approaches to the intellectualization of its individual components. An approach to the creation of a system of intelligent agents for information collection is presented. These agents are combined into teams and exchange the results of their work with each other; they form a reliable basis for the information base of search engines and ensure uninterrupted operation of the system in case of failure of individual agents. Methods for the formation of semantic networks corresponding to the texts of individual documents are also considered. These networks serve as search patterns of documents for information retrieval and for detection of duplicate or similar documents. Machine learning methods are used to conduct sentiment analysis: the paper describes an approach that made it possible to move from a naive Bayesian model to a modern machine learning system. The issues of cluster analysis and visualization of search results are also considered.
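The naive Bayesian baseline mentioned for sentiment analysis can be sketched in a few lines. The multinomial model with Laplace smoothing below is the textbook formulation; the tiny training corpus is a toy illustration, not the authors' data.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns per-class log-priors and
    Laplace-smoothed word log-likelihoods."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    V = len(vocab)
    model = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        log_prior = math.log(class_counts[label] / len(docs))
        log_like = {w: math.log((word_counts[label][w] + 1) / (total + V))
                    for w in vocab}
        default = math.log(1 / (total + V))   # smoothed mass for unseen words
        model[label] = (log_prior, log_like, default)
    return model

def classify(model, tokens):
    def score(label):
        log_prior, log_like, default = model[label]
        return log_prior + sum(log_like.get(w, default) for w in tokens)
    return max(model, key=score)

docs = [("great movie loved it".split(), "pos"),
        ("awful boring movie".split(), "neg"),
        ("loved the acting great fun".split(), "pos"),
        ("boring and awful plot".split(), "neg")]
model = train_nb(docs)
label = classify(model, "loved it great".split())  # -> "pos"
```

Modern systems replace these bag-of-words likelihoods with learned text representations, which is the transition the abstract refers to.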
Shevchenko A, Institute of Artificial Intelligence Problems of the Ministry of Education and Science of Ukraine and the National Academy of Sciences of Ukraine, Sosnitsky A, Berdyansk State Pedagogical University
Artificial Intelligence, Volume 26, pp 10-20; https://doi.org/10.15407/jai2021.01.010

Abstract:
The main problem today in the research and development of AI is the lack of a scientific definition of Intelligence, since it is impossible to do something incomprehensible. This fundamentally delegitimizes all developments in this area and science as a whole as a product of exclusively intellectual activity, and any scientific use of the term «Intelligence» in its strict sense is unreasonable. In this paper, this problem is solved by transition to a more general universal paradigm of cognition, which allowed us to deduce the desired definition and universal formalism of Intelligence in its strong sense. Unlike previous publications, the ontology and properties of Intelligence are specified here as necessary components of Intelligence, which are subject to subsequent concretization and materialization in different niches of existence. The results of the work are of both fundamental and applied general scientific importance for all technical and humanitarian applications of Intelligence.
Ashursky E, Institute of Artificial Intelligence Problems of the Ministry of Education and Science of Ukraine and the NAS Ukraine
Artificial Intelligence, Volume 26, pp 111-119; https://doi.org/10.15407/jai2021.02.111

Abstract:
To date, the existence of a universal, a priori connection between the objects of the world around us is rightly considered an almost established fact. But by what laws do these often quite variegated systems function in living and inert nature (including modern computer clusters)? Where do the origins of their self-organizing activity lie: at the level of still-hypothetical quantum-molecular models, of finite bio-automata, or of the now highly fashionable artificial neural networks? If answers to these questions ever appear, it will certainly not be soon. That is why the bold, innovative developments presented in the following article may even refresh the foundations of informatics familiar to many of us. The pivotal idea developed here is, frankly speaking, quite simple in itself: if the laws of the universe are one, then all the characteristic differences between evolving objects should be determined by their outwardly hidden informative (or, in the author's terminology, "mental") rationale. These are not empty words, as they might seem at first glance, for they are supported, where possible, by the generally accepted physical and mathematical foundation. As a result, the reader sooner or later comes to the inevitable conclusion that only the smallest electron-neutrino ensembles contain everything most valuable and meaningful for any natural system, no matter which global outlook paradigm one holds.
Mohamed Abdalla, Salwa Abdalla, Mohamed Saad (Harvard Medical School, United States; Department of Statistics, University of Oxford, United Kingdom; Department of Computer Science), et al.
Published: 7 July 2022
Journal: Elife
Abstract:
Analysis of the content of medical journals enables us to frame the shifting scientific, material, ethical, and epistemic underpinnings of medicine over time, including today. Leveraging a dataset comprised of nearly half-a-million articles published in the Journal of the American Medical Association (JAMA) and the New England Journal of Medicine (NEJM) over the past 200 years, we (a) highlight the evolution of medical language, and its manifestations in shifts of usage and meaning, (b) examine traces of the medical profession’s changing self-identity over time, reflected in its shifting ethical and epistemic underpinnings, (c) analyze medicine’s material underpinnings and how we describe where medicine is practiced, (d) demonstrate how the occurrence of specific disease terms within the journals reflects the changing burden of disease itself over time and the interests and perspectives of authors and editors, and (e) showcase how this dataset can allow us to explore the evolution of modern medical ideas and further our understanding of how modern disease concepts came to be, and of the retained legacies of prior embedded values.
The Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, The Problem Artificial Intelligence Institute, Federal State Autonomous Educational Institution of Higher Education “Russian National Research Medical University named after N. I. Pirogov” of the Ministry of Health of the Russian Federation
Methodology and Technology of Continuing Professional Education pp 21-37; https://doi.org/10.24075/mtcpe.2020.022

Abstract:
The problem of physicians’ continuous professional development may be solved in different ways. The purpose of this article is to demonstrate the use of computerized clinical decision support systems for resolving the considered problem. It is shown that intelligent systems possess the ability to obtain new knowledge, in contrast to computing systems based on data processing. We have come to this conclusion by explaining the sequence of hypothesis generation and analysis, as well as by explaining the proposed solution. In addition, there are intelligent systems that are focused on dialogue with a physician directly to improve his qualifications in a specific subject area. Thus, in the field of continuous additional professional development, we can use both special educational intellectual programs and decision support systems, including explanation modules. In the future, stand-alone diagnostic and other systems will be used only in exceptional cases. The modern trend is focused on their integration into electronic medical records systems. In this case, such systems will be used in the framework of preventive examinations or treatment-and-diagnostic process.
S. Alonso (Department of Software Engineering, University of Granada, Granada 18071, Spain), E. Herrera-Viedma (Department of Computer Science and Artificial Intelligence, University of Granada, Granada 18071, Spain), F. Chiclana (Centre for Computational Intelligence), Huimin Zhang, Zhiming Zhang, Carmen De Maio, Aurelio Tommasetti, Orlando Troisi, et al.
International Journal of Information Technology & Decision Making, Volume 08, pp 313-333; https://doi.org/10.1142/s0219622009003417

Abstract:
Multi-person decision making problems involve the preferences of several experts about a set of alternatives in order to find the best one. However, sometimes experts might not possess a precise or sufficient level of knowledge of part of the problem, and as a consequence an expert might not give all the information that is required. Indeed, this may be the case when the number of alternatives is high and experts use fuzzy preference relations to represent their preferences. In the literature, incomplete information situations have been studied, and as a result, procedures that are able to compute the missing information of a preference relation have been designed. However, these approaches usually need at least a piece of information about every alternative in the problem in order to be successful in estimating all the missing preference values. In this paper, we address situations in which an expert does not provide any information about a particular alternative, which we call situations of total ignorance. We analyze several strategies to deal with these situations. We classify these strategies into: (i) individual strategies, which can be applied to each individual preference relation without taking into account any information from the rest of the experts, and (ii) social strategies, that is, strategies that make use of the information available from the group of experts. Both individual and social strategies use extra assumptions or knowledge which cannot be directly instantiated in the experts' preference relations. We also provide an analysis of the advantages and disadvantages of each of the strategies presented, and of the situations where some of them may be more appropriate than others.
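A common way to estimate a missing value of a fuzzy preference relation in this literature (though the paper's strategies go beyond it) is additive transitivity, p_ik = p_ij + p_jk − 0.5, averaged over intermediate alternatives j. The sketch below also exposes the "total ignorance" case the paper addresses: when no chain of known values passes through any j, nothing can be estimated. In practice the estimates are clipped back to [0, 1]; the matrix shown is illustrative.

```python
def estimate_missing(P, i, k):
    """Estimate missing preference P[i][k] by additive transitivity:
    p_ik = p_ij + p_jk - 0.5, averaged over intermediate alternatives j
    for which both p_ij and p_jk are known (None marks a missing value)."""
    n = len(P)
    estimates = [P[i][j] + P[j][k] - 0.5
                 for j in range(n)
                 if j not in (i, k)
                 and P[i][j] is not None and P[j][k] is not None]
    if not estimates:
        return None  # total ignorance: no chain of known values through any j
    return sum(estimates) / len(estimates)

# 3 alternatives; p_02 is missing but recoverable through j = 1.
P = [[0.5, 0.7, None],
     [0.3, 0.5, 0.8],
     [None, 0.2, 0.5]]
est = estimate_missing(P, 0, 2)  # 0.7 + 0.8 - 0.5 = 1.0
```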
Shashank Srikant, Yotaro Sueoka, Hope H. Kean, Riva Dhamala, Una-May O'Reilly (Department of Brain and Cognitive Sciences), et al.
Published: 15 December 2020
Journal: Elife
Abstract:
Computer programming is a novel cognitive tool that has transformed modern society. What cognitive and neural mechanisms support this skill? Here, we used functional magnetic resonance imaging to investigate two candidate brain systems: the multiple demand (MD) system, typically recruited during math, logic, problem solving, and executive tasks, and the language system, typically recruited during linguistic processing. We examined MD and language system responses to code written in Python, a text-based programming language (Experiment 1) and in ScratchJr, a graphical programming language (Experiment 2); for both, we contrasted responses to code problems with responses to content-matched sentence problems. We found that the MD system exhibited strong bilateral responses to code in both experiments, whereas the language system responded strongly to sentence problems, but weakly or not at all to code problems. Thus, the MD system supports the use of novel cognitive tools even when the input is structurally similar to natural language.
Xi Wei, Sheng Zhang, Ruifang Zhang, Ying Gu, Xia Chen, Liying Shi, Xiaomao Luo, et al.
Published: 27 August 2020
Journal: Endocrine
Endocrine, Volume 72, pp 157-170; https://doi.org/10.1007/s12020-020-02442-x

The publisher has not yet granted permission to display this abstract.
Xi Wei, Sheng Zhang, Yanyan Song, Baoming Luo, Jianchu Li, Linxue Qian, Ligang Cui, Wen Chen, et al.
Published: 21 August 2020
Journal: Endocrine
Endocrine, Volume 70, pp 256-279; https://doi.org/10.1007/s12020-020-02441-y

The publisher has not yet granted permission to display this abstract.
Cheng Luo (Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, China; Research Center for Advanced Artificial Intelligence Theory, Zhejiang Lab, China)
Published: 21 December 2020
Journal: Elife
Abstract:
Speech contains rich acoustic and linguistic information. Using highly controlled speech materials, previous studies have demonstrated that cortical activity is synchronous to the rhythms of perceived linguistic units, for example, words and phrases, on top of basic acoustic features, for example, the speech envelope. When listening to natural speech, it remains unclear, however, how cortical activity jointly encodes acoustic and linguistic information. Here we investigate the neural encoding of words using electroencephalography and observe neural activity synchronous to multi-syllabic words when participants naturally listen to narratives. An amplitude modulation (AM) cue for word rhythm enhances the word-level response, but the effect is only observed during passive listening. Furthermore, words and the AM cue are encoded by spatially separable neural responses that are differentially modulated by attention. These results suggest that bottom-up acoustic cues and top-down linguistic knowledge separately contribute to cortical encoding of linguistic units in spoken narratives.
Sayora Ibragimova, Research Institute for the development of digital technologies and artificial intelligence of Uzbekistan
Published: 30 March 2021
Technical Sciences, Volume 4, pp 37-41; https://doi.org/10.26739/2181-9696-2021-3-6

Abstract:
This work deals with the basic theory of the wavelet transform and multi-scale analysis of speech signals, and briefly reviews the main differences between the wavelet transform and the Fourier transform in the analysis of speech signals. It considers the possibilities of applying wavelet analysis to speech recognition systems and its main advantages. In most existing systems for recognition and analysis of speech, sound is treated as a stream of vectors whose elements are frequency characteristics; therefore, real-time speech processing with sequential algorithms requires high-performance computing resources. Examples are given of how this method can be used to process speech signals and to build reference patterns for recognition systems.
Key words: digital signal processing, Fourier transform, wavelet analysis, speech signal, wavelet transform
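One level of the simplest wavelet transform (Haar) illustrates the multi-scale decomposition discussed above; this is textbook material, not the paper's specific implementation, and the sample signal is arbitrary.

```python
import math

def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail), scaled by 1/sqrt(2)
    so the transform is orthonormal and exactly invertible."""
    s = 1 / math.sqrt(2)
    approx = [(signal[2 * i] + signal[2 * i + 1]) * s
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) * s
              for i in range(len(signal) // 2)]
    return approx, detail

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_step(x)
# `a` captures the low-frequency envelope, `d` the local transients;
# recursing on `a` yields the multi-scale decomposition used in speech analysis.
```

Unlike a Fourier transform of the whole signal, each detail coefficient is localized in time, which is what makes wavelets attractive for transient-rich speech.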
Zheng Dai, David K. Gifford (Koch Institute for Integrative Cancer Research, United States; Department of Biological Engineering, Massachusetts Institute of Technology, United States; Computer Science and Artificial Intelligence Laboratory), et al.
Published: 4 July 2022
Journal: Elife
Abstract:
T cells play a critical role in the adaptive immune response, recognizing peptide antigens presented on the cell surface by major histocompatibility complex (MHC) proteins. While assessing peptides for MHC binding is an important component of probing these interactions, traditional assays for testing peptides of interest for MHC binding are limited in throughput. Here, we present a yeast display-based platform for assessing the binding of tens of thousands of user-defined peptides in a high-throughput manner. We apply this approach to assess a tiled library covering the SARS-CoV-2 proteome and four dengue virus serotypes for binding to human class II MHCs, including HLA-DR401, -DR402, and -DR404. While the peptide datasets show broad agreement with previously described MHC-binding motifs, they additionally reveal experimentally validated computational false positives and false negatives. We therefore present this approach as able to complement current experimental datasets and computational predictions. Further, our yeast display approach underlines design considerations for epitope identification experiments and serves as a framework for examining relationships between viral conservation and MHC binding, which can be used to identify potentially high-interest peptide binders from viral proteins. These results demonstrate the utility of our approach to determine peptide-MHC binding interactions in a manner that can supplement and potentially enhance current algorithm-based approaches.
Toktarova Vera I. (Doctor of Pedagogical Sciences, Professor of the Department of Applied Mathematics and Computer Science, Mari State University, Yoshkar-Ola, Russia), Popova Olesya G. (Artificial Intelligence Center employee, Russia)
Siberian Pedagogical Journal pp 61-71; https://doi.org/10.15293/1813-4718.2301.06

Abstract:
Today, the analysis of educational data is a rapidly developing area that contributes to improving the quality and efficiency of student learning in e-learning systems and environments. Visual analytics methods are the best means for reviewing and presenting educational data in a convenient and informative form. The purpose of the article is the analysis of educational data using visualization methods to identify patterns in the educational activities of students. Methodology. The methodological basis of the study is a complex of theoretical, empirical and mathematical methods. The paper provides a visualization of educational data based on the electronic course “Fundamentals of Programming”, hosted in the electronic educational environment of the university (based on LMS Moodle). The data of 118 students who completed the course with different academic performance were considered. Research results. The paper substantiates the relevance of using visualization for the analysis of educational data. Methods of analysis are considered and areas of their application are given. An analysis of the works of domestic researchers is given. The educational data describing the study of theoretical material by students, the performance of practical and test tasks, and the time spent in the e-learning system are analyzed. Based on the method of identifying relationships, the dependencies and patterns of students' activities in the study of the course are visually displayed. It is concluded that the behavior of students in the electronic course and their marks for practical and test tasks are interrelated. Regularities in the distribution of marks are revealed. Visualization made it possible to present the data in a clear and informative form. The proposed approach can be useful in the analysis of a student's digital footprint and in building their digital profile.
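The kind of relationship the study visualizes, time spent in the LMS versus course marks, can be quantified with a Pearson correlation before being plotted. The per-student numbers below are hypothetical, for illustration only.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-student data: hours logged in the LMS vs. final mark.
hours = [2, 5, 1, 8, 6, 3, 7, 4]
marks = [55, 70, 50, 90, 78, 60, 85, 65]
r = pearson(hours, marks)  # strongly positive for this data
```

A scatter plot of the same two columns, colored by final grade, is the visual-analytics counterpart of this single number.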
Shevchenko A.I., Institute for Artificial Intelligence Problems of MES and NAS of Ukraine, Sosnitsky A.V., Berdyansk State Pedagogical University
Artificial Intelligence, Volume 24, pp 27-38; https://doi.org/10.15407/jai2019.03-04.027

Abstract:
The main problem today in the research and development of AI is the lack of a scientific definition of Intelligence, since it is impossible to do something incomprehensible. This fundamentally delegitimizes all developments in this area and science as a whole as a product of exclusively intellectual activity, and any scientific use of the term "Intellect" in its strict sense is unreasonable. In this paper, this problem is solved by transition to a more general universal paradigm of cognition, which allowed us to deduce the desired definition and universal formalism of Intelligence in its strong sense. Unlike previous publications, the ontology and properties of Intelligence are specified here as necessary components of Intelligence, which are subject to subsequent concretization and materialization in different niches of existence. The results of the work are of both fundamental and applied general scientific importance for all technical and humanitarian applications of Intelligence.
Mark Nitzberg (Center for Human-Compatible Artificial Intelligence (CHAI), University of California Berkeley, Berkeley, CA, USA; Berkeley Roundtable on the International Economy (BRIE), University of California Berkeley, Berkeley, CA, USA)
Published: 21 July 2022
Journal of European Public Policy, Volume 29, pp 1753-1778; https://doi.org/10.1080/13501763.2022.2096668

Abstract:
Artificial intelligence (AI) poses a set of interwoven challenges. A new general purpose technology likened to steam power or electricity, AI must first be clearly defined before considering its global governance. In this context, a useful definition is technology that uses advanced computation to perform at human cognitive capacity in some task area. Like electricity, AI cannot be governed in isolation, but in the context of a broader digital technology toolbox. Establishing national and community priorities on how to reap AI's benefits, while managing its social and economic risks, will be an evolving debate. A fundamental driver of the development and deployment of AI tools, of the algorithms and data, are the dominant Digital Platform Firms (DPFs). Unless specifically regulated, DPFs set de facto rules for use of data and algorithms. That can shift the borderline between public and private, and result in priorities that differ from those of the public sector or civil society. Governance of AI and the toolbox is a critical component of national success in the coming decades, as governments recognize opportunities and geopolitical risks posed by the suite of technologies. However, AI pries open a Pandora's box of questions that sweep across the economy and society, engaging diverse communities. Rather than strive towards global agreement on a single set of market and social rules, one must consider how to pursue objectives of interoperability amongst nations with quite different political economies. Even such limited agreements are complicated following the Russian invasion of Ukraine.
Nita G. Valikodath, Emily Cole, Daniel S. W. Ting, J. Peter Campbell, Louis R. Pasquale, Michael F. Chiang, , On Behalf Of The American Academy Of Ophthalmology Task Force On Artificial Intelligence
Translational Vision Science & Technology, Volume 10, pp 14-14; https://doi.org/10.1167/tvst.10.7.14

Abstract:
Clinical care in ophthalmology is rapidly evolving as artificial intelligence (AI) algorithms are being developed. The medical community and national and federal regulatory bodies are recognizing the importance of adapting to AI. However, there is a gap in physicians’ understanding of AI and its implications regarding its potential use in clinical care, and there are limited resources and established programs focused on AI and medical education in ophthalmology. Physicians are essential in the application of AI in a clinical context. An AI curriculum in ophthalmology can help provide physicians with a fund of knowledge and skills to integrate AI into their practice. In this paper, we provide general recommendations for an AI curriculum for medical students, residents, and fellows in ophthalmology.
Ravshanov N. K., Research institute for development of digital technologies and artificial intelligence, Kholmatova I. I.
Published: 12 May 2022
Abstract:
An algorithm is presented in this article for the numerical study of the problem of filtration of compressible fluids, solving the posed problem using the component splitting method and the sweep method. The developed algorithm was first applied to a model problem in a rectangular domain with sources located symmetrically at the four edges and the center of the domain. Owing to the symmetry of the results, only the solution for one quarter of the domain is presented. The algorithm is then used to solve a problem posed in an arbitrary domain with arbitrarily located sources. The solution results are shown in the form of graphs.
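The sweep mentioned in this abstract is commonly the Thomas algorithm for tridiagonal linear systems, which arise at each fractional step of the component splitting method. A minimal generic sketch (our illustration, not the authors' code; names are ours):

```python
def thomas_sweep(a, b, c, d):
    """Solve a tridiagonal system by forward elimination and back substitution.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For example, the symmetric system with diagonals (1, 2, 1) and right-hand side (4, 8, 8) is solved exactly in a single sweep.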
Kyung Don Yoo, Junhyug Noh, Wonho Bae, Jung Nam An, Hyung Jung Oh, Harin Rhee, Eun Young Seong, Seon Ha Baek, Shin Young Ahn, Jang-Hee Cho, et al.
Published: 21 March 2023
Scientific Reports, Volume 13, pp 1-12; https://doi.org/10.1038/s41598-023-30074-4

Abstract:
Fluid balance is a critical prognostic factor for patients with severe acute kidney injury (AKI) requiring continuous renal replacement therapy (CRRT). This study evaluated whether repeated fluid balance monitoring could improve prognosis in this clinical population. This was a multicenter retrospective study that included 784 patients (mean age, 67.8 years; males, 66.4%) with severe AKI requiring CRRT during 2017–2019 who were treated in eight tertiary hospitals in Korea. Sequential changes in total body water were compared between patients who died (event group) and those who survived (control group) using mixed-effects linear regression analyses. The performance of various machine learning methods, including recurrent neural networks, was compared to that of existing prognostic clinical scores. After adjusting for confounding factors, a marginal benefit of fluid balance was identified for the control group compared to that for the event group (p = 0.074). The deep-learning model using a recurrent neural network with an autoencoder and including fluid balance monitoring provided the best differentiation between the groups (area under the curve, 0.793) compared to 0.604 and 0.606 for SOFA and APACHE II scores, respectively. Our prognostic, deep-learning model underlines the importance of fluid balance monitoring for prognosis assessment among patients receiving CRRT.
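The models in this study are compared by area under the ROC curve (AUC). As a minimal illustration of that metric (hypothetical scores and labels, not the study's data), AUC can be computed as the probability that a randomly chosen positive case is scored above a randomly chosen negative one:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a random positive case is scored above a
    random negative one (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking of events over non-events yields 1.0; a random one, about 0.5; the study's 0.793 versus 0.604 for SOFA reflects better separation of the two groups.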
Giscard Franceire Cintra Veloso, Federal University of Itajubá, Artificial Intelligence Applications Group, Brazil
Renewable Energy and Power Quality Journal, Volume 1, pp 596-600; https://doi.org/10.24084/repqj05.344

Юлия Юрьевна Красноперова, Ulyanovsk State Pedagogical University named after I.N. Ulyanov, Севиндж Шахмирзаевна Мехманова, Ekaterina A. Khusnutdinova, Artificial Intelligence Center Limited Liability Company, Scientific Center For Examination Of Medical Products
Bulletin of Perm University. Biology (Вестник Пермского университета. Серия «Биология»), pp 288-293; https://doi.org/10.17072/1994-9952-2022-4-288-293

Abstract:
Polyvirulent strains of Lactobacillus spp. bacteria with associative interaction with the protozoan Blastocystis hominis were identified in vivo. Lactobacillus spp. and B. hominis strains were obtained from the feces of 396 patients examined for intestinal dysbiosis. Identification was carried out using microscopic, bacteriological, and parasitological methods. As controls, 112 reference strains of lactobacilli NK1 and K3SH24 deposited at the Institute of Genetics and Breeding of Industrial Microorganisms were used. The degree of virulence of the protozoan blastocysts was determined by intraperitoneal administration to white mice (weighing 17.4±1.5 g) of 0.5 ml of a culture suspension of the studied microorganisms grown on Suresh medium. Primers to several genes determining the ability to form type 1 fimbriae, type S and P fimbriae, the bacterial adhesin intimin, and hemolysin were used in the investigation. Testing of 396 Lactobacillus spp. strains isolated from microsymbiocenoses with Blastocystis hominis of varying degrees of virulence showed that the frequency of detection of pathogenicity genes in lactobacilli increased with the degree of virulence of the protozoa. Most often, in the general pool of lactobacillus strains, the target amplicons were detected using primers to the fimA gene (up to 67.9%). During associative interaction with moderately and highly virulent blastocysts, increased heterogeneity of the lactobacillus population was revealed, manifested by a higher frequency of detection of all studied genetic determinants of pathogenicity compared with Lactobacillus spp. strains isolated from associations with avirulent B. hominis and in the control group.
Jacob L. Jaremko, Marleine Azar, Rebecca Bromwich, Andrea Lum, Li Hsia Alicia Cheong, Martin Gibert, François LaViolette, Bruce Gray, Caroline Reinhold, et al., for the Canadian Association of Radiologists (CAR) Artificial Intelligence Working Group
Canadian Association of Radiologists Journal, Volume 70, pp 107-118; https://doi.org/10.1016/j.carj.2019.03.001

Abstract:
Artificial intelligence (AI) software that analyzes medical images is becoming increasingly prevalent. Unlike earlier generations of AI software, which relied on expert knowledge to identify imaging features, machine learning approaches automatically learn to recognize these features. However, the promise of accurate personalized medicine can only be fulfilled with access to large quantities of medical data from patients. This data could be used for purposes such as predicting disease, diagnosis, treatment optimization, and prognostication. Radiology is positioned to lead development and implementation of AI algorithms and to manage the associated ethical and legal challenges. This white paper from the Canadian Association of Radiologists provides a framework for study of the legal and ethical issues related to AI in medical imaging, related to patient data (privacy, confidentiality, ownership, and sharing); algorithms (levels of autonomy, liability, and jurisprudence); practice (best practices and current legal framework); and finally, opportunities in AI from the perspective of a universal health care system.
Andrew Dennis Smith, Brian C. Allen, Asser Abou Elkassem, Rafah Mresh, Seth T. Lirette, Yujan Shrestha, J. David Giese, Reece Stevens, Dillon Williams, Ahmed Farag, et al.
Journal of Clinical Oncology, Volume 38, pp 2010-2010; https://doi.org/10.1200/jco.2020.38.15_suppl.2010

Abstract:
2010 Background: Current-practice methods to evaluate advanced cancer longitudinal tumor response include manual measurements on digital medical images and dictation of text-based reports that are prone to errors, inefficient, and associated with low inter-observer agreement. The purpose of this study is to compare the effectiveness of advanced cancer longitudinal imaging response evaluation using current practice versus artificial intelligence (AI)-assisted methods. Methods: For this multi-institutional longitudinal retrospective study, body CT images from 120 consecutive patients with multiple serial imaging exams and advanced cancer treated with systemic therapy were independently evaluated by 24 radiologists using current-practice versus AI-assisted methods. For the current practice method, radiologists dictated text-based reports and separately categorized response (CR, PR, SD, and PD). For the AI-assisted method, custom software included AI algorithms for tumor measurement, target and non-target location labelling, and tumor localization at follow up. The AI-assisted software automatically categorized tumor response per RECIST 1.1 calculations and displayed longitudinal data in the form of a graph, table, and key images. All studies were read independently in triplicate for assessment of inter-observer agreement. Comparative effectiveness metrics included: major errors, time of image interpretation, and inter-observer agreement for final response category. Results: Major errors were found in 27.5% (99/360) for current-practice versus 0.3% (1/360) for AI-assisted methods (p < 0.001), corresponding to a 99% reduction in major errors. Average time of interpretation by radiologists was 18.7 min for current-practice versus 9.8 min for AI-assisted method (p < 0.001), with the AI-assisted method being nearly twice as fast. 
Total inter-observer agreement on final response categorization for radiologists was 52% (62/120) for current-practice versus 75% (90/120) for AI-assisted method (p < 0.001), corresponding to a 45% increase in total inter-observer agreement. Conclusion: In a large multi-institutional study, AI-assisted advanced cancer longitudinal imaging response evaluation significantly reduced major errors, was nearly twice as fast, and increased inter-observer agreement relative to the current-practice method, thereby establishing a new and improved standard of care.
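The relative changes reported above follow directly from the quoted counts; a quick arithmetic check (our calculation from the abstract's figures, not the study's code):

```python
# Figures quoted in the abstract; the derived percentages are ours.
major_err_current, major_err_ai = 99 / 360, 1 / 360   # 27.5% vs 0.3%
agree_current, agree_ai = 62 / 120, 90 / 120          # 52% vs 75%
time_current, time_ai = 18.7, 9.8                     # minutes per case

error_reduction = (major_err_current - major_err_ai) / major_err_current
agreement_gain = (agree_ai - agree_current) / agree_current
speedup = time_current / time_ai

print(f"error reduction ~{error_reduction:.0%}, "
      f"agreement gain ~{agreement_gain:.0%}, "
      f"speedup ~{speedup:.1f}x")
```

This reproduces the abstract's ~99% error reduction, ~45% agreement gain, and "nearly twice as fast" (about 1.9x) claims.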
Oleksiy Panych, Donetsk State University of Informatics and Artificial Intelligence
Published: 16 June 2010
Journal: Sententiae
Sententiae, Volume 22, pp 75-85; https://doi.org/10.22240/sent22.01.075

Abstract:
The paper analyzes the system of various lines of historical continuity that link philosophical systems of John Stuart Mill and George Berkeley. Berkeley is perceived by Mill as the most outstanding figure in the entire previous history of British philosophy. This high estimation gives us a chance to reconsider anew both historical influence of Berkeley’s original version of philosophical immaterialism and historically-philosophical roots of Mill’s own philosophy of consistent phenomenalism.
Oleksiy Panych, Donetsk State University of Informatics and Artificial Intelligence
Published: 16 December 2010
Journal: Sententiae
Sententiae, Volume 23, pp 55-63; https://doi.org/10.22240/sent23.02.055

Abstract:
The paper analyzes the system of various lines of historical continuity that link philosophical systems of John Stuart Mill and George Berkeley. Berkeley is perceived by Mill as the most outstanding figure in the entire previous history of British philosophy. This high estimation gives us a chance to reconsider anew both historical influence of Berkeley’s original version of philosophical immaterialism and historically-philosophical roots of Mill’s own philosophy of consistent phenomenalism.
Gereon Frahling (Google Research, 76 Ninth Avenue, New York, NY 10011, USA), Piotr Indyk (Laboratory for Computer Science and Artificial Intelligence, Massachusetts Institute of Technology, USA), Christian Sohler (Heinz Nixdorf Institute and Computer Science Department), Neta Barkay, Ely Porat, Bar Shalem, Timothy M. Chan
International Journal of Computational Geometry & Applications, Volume 18, pp 3-28; https://doi.org/10.1142/s0218195908002520

The publisher has not yet granted permission to display this abstract.
N. M. Kurbonov, Research institute for development of digital technologies and artificial intelligence
Mathematical Modeling and Computing, Volume 9, pp 637-646; https://doi.org/10.23939/mmc2022.03.637

Abstract:
The article presents a three-dimensional mathematical model of the gas filtration process in porous media and a numerical algorithm for solving the initial-boundary value problem. The developed model is described using the nonlinear differential equation in partial derivatives with the appropriate initial and boundary conditions. The proposed mathematical apparatus makes it possible to carry out hydrodynamic calculations taking into account changes in the main factors affecting the process under consideration: permeability, porosity, and thickness of layers, gas recovery coefficient, viscosity, etc. Computer implementation of the model provides an opportunity to solve practical problems of analysis and forecasting of the gas production process under various conditions of impact on the productive reservoir, as well as making decisions on the development of existing and design of new gas fields.
Maxim Levin, LGTU, Stanislav Nagornov, Ekaterina Levina, Lyubov Levina, Irina Kovalenko, All-Russian Scientific and Research Institute of Use of Techniques and Oil Products in Agriculture, MSTU im. N. E. Bauman
Science in the Central Russia; https://doi.org/10.35887/2305-2538-2022-5-94-101

Abstract:
The paper formulates and solves the problems of developing algorithms and practical methods for applying machine vision within the concept of a smart oil storage facility. The resulting technology will significantly increase the autonomy of intelligent algorithms for managing a tank farm and ensure its safe operation. The fundamental principle of the bionic approach in machine vision is the use of artificial neural networks to recognize objects in the received images; the development of artificial neural networks and of algorithms for training them made computer-based visual perception possible. Measurements of an above-ground horizontal steel tank were obtained by a neural network from the camera image through spatial transformation of the image and construction of structural elements using a contour segmentation algorithm. The initial representation, from which the mapping to a final representation is carried out, is the image as an array of raw data: a set of physical measurements made for the image from the camera. In computer-vision terms, the camera image contains a set of physical objects, or a fragment of the real world, in this case an above-ground horizontal cylindrical tank. The smallest image element in this initial representation is a pixel, containing the result of a single measurement of a given physical quantity. The measured results were: tank height 2.05 m, tank length 3.17 m, diameter 2.25 m. The measurement error in this case is 4.5%.
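As an illustration of the final step described above, turning a segmented image region into physical tank dimensions, here is a minimal sketch assuming a hypothetical binary mask and a known metres-per-pixel scale (this is not the authors' pipeline; the mask and scale are invented):

```python
def bounding_box(mask):
    """Pixel bounding box (x0, y0, x1, y1) of the nonzero region of a 2D mask."""
    ys = [i for i, row in enumerate(mask) if any(row)]
    xs = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    return min(xs), min(ys), max(xs), max(ys)

def extent_meters(mask, meters_per_pixel):
    """Width and height of the masked object in metres, given a camera scale."""
    x0, y0, x1, y1 = bounding_box(mask)
    return (x1 - x0 + 1) * meters_per_pixel, (y1 - y0 + 1) * meters_per_pixel
```

In practice the metres-per-pixel scale would come from the spatial transformation of the camera image that the paper describes.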
Marjolein Knoester, , , Lucette A. J. van der Westerlaken, , Sylvia Veen, on behalf of the Leiden Artificial Reproductive Techniques Follow-Up Project
Obstetrical & Gynecological Survey, Volume 64, pp 18-19; https://doi.org/10.1097/01.ogx.0000340768.15841.04

Abstract:
The techniques of maternal hormonal stimulation are similar for intracytoplasmic sperm injection (ICSI) and in vitro fertilization (IVF), and both involve fertilization in vitro. However, in contrast to IVF, ICSI involves sperm selection and oocyte penetration and thus bypasses the selection processes that occur during natural conception, raising concerns that conception by ICSI may result in altered health and development of the offspring. This prospective study compared cognitive development of 5–8-year-old singletons born after ICSI during the years 1996–1999 with matched singletons born after IVF or natural conception (NC). The observers were blinded to the mode of conception. The short form of the Revised Amsterdam Child Intelligence Test was used to measure the intelligence quotient (IQ). The unadjusted mean IQ for children conceived by ICSI (n = 83) was 3.9 points lower than that of the IVF controls (n = 82) (103 versus 107; 95% confidence interval [CI], –0.7, 8.4), and the adjusted mean IQ score was 3.6 points lower (95% CI, –0.8, 8.0). There was no difference between ICSI and IVF children in the distribution of IQ scores (>115: ICSI 24% versus IVF 32% [unadjusted: P = .268]). The unadjusted mean IQ score of children conceived by ICSI (n = 86) was 6.8 points lower than that of children conceived naturally (NC; n = 85) (XX versus XX; 95% CI, 2.0, 11.6); the adjusted mean IQ score was 5.6 points lower (95% CI, 0.9, 10.3) and was unaffected by additional adjustment for prematurity and other covariables (P < .067). Compared with NC children, the total IQ distribution among ICSI children was shifted to lower IQ scores (>115: ICSI 24% versus NC 40%; 85–115: ICSI 64% versus NC 54%; and <85: ICSI 12% versus NC 6% [unadjusted, P = .019]). This small study found that IQ scores were lower in 5–8-year-old children conceived by ICSI than in children conceived by IVF or naturally. The greatest difference was between ICSI and NC children. The clinical significance of these findings is unclear.
Tatiana A. Panteleeva, Institute of World Civilizations
Abstract:
Subject/Topic. The article studies the possibilities and threats of using artificial intelligence in business foresight and its impact on the business potential of a company in the short and long term. Methodology. General scientific and philosophical methods of knowledge were used, as well as special economic methods based on them. In particular, the specifics of the object of research, artificial intelligence as an ongoing process, necessitated the use of the problem-chronological and historical-genetic methods. The problem-chronological method made it possible to distinguish the main stages in the formation of ideas, concepts, theories, and methods for the use of artificial intelligence in business foresight, while the historical-genetic method showed the continuity, from one development stage to another, of the conceptual and methodological apparatus of the object of research. Results. At present, artificial intelligence is used in business practice as a foresight tool only in isolated cases, since the complexity of its development and the significant investment required in the infrastructure for its operation form objective barriers to its rapid spread in the business environment. The following models of artificial intelligence are currently used in business foresight: anthropocentric, hybrid, instrumental, and machine-centric.
According to the calculations presented, active growth is expected from 2020 in the segment of business and IT services using artificial intelligence, as is increased spending on R&D projects developing products with artificial intelligence. The most promising directions, from the point of view of investing capital and development within a company's own business model, on the 2018-2025 horizon are technologies for remote access (VDI, BKC, online communications, control), AI/ML (artificial intelligence, machine learning), and VR/AR (virtual and augmented reality). Conclusions/Significance. Overall, in 2020 compared to 2019, business optimism and motivation to introduce artificial intelligence clearly declined, and the goals set by managers became more grounded: in 2020, 45% favored using artificial intelligence as a means of forming their own Big Data libraries, and another 45% favored integrating artificial intelligence with existing systems for collecting and analyzing information. However, a modern business strategy is impossible without processing huge amounts of customer information, and given its weak structuring and its localization across multiple sources, the speed and quality of its processing and interpretation without machine learning mechanisms have become economically impractical. Application. The results of the research will be useful both for educational purposes, for students and readers interested in the use of artificial intelligence in business management, and for practitioners who plan to use artificial intelligence in foresight business processes.
Research Centre of Industrial Problems of Development of NAS of Ukraine, M. M. Khaustov, V. A. Zinchenko
Abstract:
Artificial intelligence belongs to the rapidly developing technological spheres and in the future will have significant consequences for both national security and defense capability. For many countries of the world, artificial intelligence has become one of the key priorities for the development of the defense complex. The article is aimed at identifying the main directions of the use of artificial intelligence in ensuring the country’s defense capability. The methodological basis of the article is a literature review and an analysis of general trends in the development of artificial intelligence technologies. The evolution of artificial intelligence was examined, which allowed the following stages to be distinguished: the emergence of expert systems; the development of machine and statistical learning; and the development of the concept and technology of deep learning and contextual adaptation. Gartner’s predictions on the development of artificial intelligence technologies are analyzed, and it is determined that the influence of deep learning methods, neuromorphic computing that simulates the neural structure and workings of the human brain, adversarial machine learning methods, and the analytics methods known as «small data» and «broad data» will increase in the future. The article analyzes the report «Science & Technology Trends 2020-2040» by the NATO Science and Technology Organization, which substantiates the importance of using artificial intelligence to develop military capabilities, form strategic priorities in the sphere of weapons development, and support political decision-making. The formation and development of military potential is expected to experience an increased impact of embedded artificial intelligence in nuclear, aerospace, and cybernetic technologies, technologies for the development of new materials, biotechnology, virtual/augmented reality, quantum computing, materials research, etc.
The authors determine possible directions of use of artificial intelligence in such areas as C4ISR, weapons and their effective use, UxV, capability planning, CBRN, military medicine, logistics, and cyber and information space. The advantages of using artificial intelligence for developing the defense capability of countries worldwide are identified, along with significant threats that require further research into artificial intelligence technologies and methods and the directions of their practical use in the sphere of defense and protection.
Adam Buday, University of Zilina, Viliam Ažaltovič
Published: 1 January 2021
Conference: Práce a štúdie
Abstract:
The aim of this paper is to analyze the contemporary state of implementation of artificial intelligence in the area of unmanned aerial vehicles (UAVs) and to propose further uses of artificial intelligence systems in this area in the future. We analyze three essential areas in which artificial intelligence systems are currently being implemented to some extent: path following, object detection and tracking, and anti-collision systems. For each area we describe the solution methods applied, their technical requirements, and the advantages and disadvantages of those solutions. We present an overview of artificial intelligence as a scientific branch. Finally, based on an analysis of the current state and direction of research and development, we present an overview of how artificial intelligence in the field of UAVs could be implemented in the future. We describe the possibilities of using artificial intelligence systems in two areas currently receiving the most attention at the concept level, namely the flight of autonomous UAV swarms and the improvement of communication and data exchange between individual UAVs using artificial intelligence.
Tatyana E. Sushina, PhD (Law), Department of Criminal Procedure Law, Kutafin Moscow State Law University (MSAL); Andrey A. Sobenin
Published: 10 June 2020
Russian Investigator, Volume 6; https://doi.org/10.18572/1812-3783-2020-6-21-25

Atajanov Azizbek Abdimalikovich, Postgraduate student of the Supreme School of Judges under the Supreme Judicial Council of the Republic of Uzbekistan
The American Journal of Political Science Law and Criminology, Volume 04, pp 38-44; https://doi.org/10.37547/tajpslc/volume04issue02-07

Abstract:
The article examines the experience of foreign countries in applying digital technologies in the justice system. In most foreign countries, the development of e-justice systems is considered an integral component of judicial and legal reform. The use of Artificial Intelligence in judicial practice based on computational procedures is quite feasible for other countries positioning themselves as rule-of-law states as well. The use of Artificial Intelligence makes it possible to move beyond traditional methods of resolving legal disputes: Artificial Intelligence is based not on situational logic but on computational procedures. It is argued that the use of modern technologies in the justice system will help improve the effectiveness of judicial reforms, ensuring their efficiency and objectivity. It will simplify and de-bureaucratize the legal process, reduce court costs, and facilitate access to justice. The purpose of using Artificial Intelligence in the judicial system is to create a decision-support tool that reduces, where necessary, the excessive variability of court decisions in the name of observing the principle of equality of citizens before the law. Artificial Intelligence technology based on modern technologies will fundamentally change the judicial process and reduce the workload of judicial staff.
V. A. Savchenko, State University of Telecommunications, O. D. Shapovalenko
Modern Information Security, Volume 44; https://doi.org/10.31673/2409-7292.2020.040611

Abstract:
The article examines the key technologies of artificial intelligence with a view to using them to protect information. It is shown that there is currently no general concept of artificial intelligence in cybersecurity: the most important artificial intelligence methods applicable to cybersecurity have not been defined, and the role these methods can play in protecting organizations in cyberspace has not been established. As the key idea for using artificial intelligence in cybersecurity, the article proposes technologies and methods that facilitate the detection of and response to threats using sets of cyberattack statistics. Priority areas for the use of artificial intelligence are network security and data protection.
Abdelaziz Matani, Study group, Taha Al-Jody, David Benson, Study Group - University of Huddersfield International Study Centre
Journal of Assessment Learning and Teaching in International Education, Volume 1, pp 33-43; https://doi.org/10.34255/jaltie.v1i1.26

Sardor Bazarov, Tashkent State University of Law
Published: 19 July 2022
Journal: Jurisprudence
Abstract:
In this article, the author analyzes the main functions of artificial intelligence in courts, such as organizing data, consulting, and forecasting. The article also discusses, from a scientific point of view, the principles of applying artificial intelligence in judicial practice: ethical principles, respect for human rights, equality, data security, transparency, and user control over artificial intelligence. The author also thoroughly studies the effective use of artificial intelligence in the judicial system. In conclusion, the author puts forward proposals for further improving the use of artificial intelligence in courts so that uniformity of court practice and transparency of court documents are ensured. For citizens, artificial intelligence will become a quality tool for finding and evaluating the likely outcome of court proceedings using the latest advances in IT. This allows a plaintiff to predict the likelihood of success of a claim and to decide on that basis whether to go to court.
B.R. Kapovsky, Gorbatov Research Center for Food Systems, P.I. Plyasheshnik, V.A. Stefanova, Moscow State University of Food Production
Published: 1 January 2020
Vladislav V. Arkhipov, Anastasiia V. Gracheva, Victor B. Naumov, Saint Petersburg State University, Institute of State and Law of the Russian Academy of Sciences
Published: 1 February 2023
Journal: Zakon
Abstract:
The article examines the identification of artificial intelligence and robotic systems. The authors conducted an analysis of the regulatory and recommendatory acts of foreign countries, which demonstrates the main trends in the use of identification within legal regulation. The empirical material on which the study is based includes information on identification in the USA, the European Union, China, Japan, South Korea, the UAE, Singapore, Canada, the UK, and Germany. As a result, the authors identified not only the main approaches, goals, and objectives of identifying artificial intelligence and robots, but also structured the legal principles derived from them, which are interconnected with the branch of information law.
Tymofijeva N., International Scientific and Training Center for Information Technologies and Systems of National Academy of Sciences of Ukraine and Ministry of Education and Science of Ukraine
Artificial Intelligence, Volume 27, pp 193-201; https://doi.org/10.15407/jai2022.01.193

Abstract:
To create artificial intelligence, it is necessary to identify the properties of natural intelligence and develop a way to model it. There are many definitions of artificial intelligence in the literature, but there is no exact definition of this science yet. Different authors model natural intelligence differently. For example, artificial intelligence is defined as the ability of a digital computer to respond to information coming to its input devices almost as a certain person would react in the same information environment. This approach is based on the principle of self-organization of the model and is called heuristic. Human intelligence is also seen as an intuitive system. The creative process is accompanied by various manifestations of emotions, and decision-making in natural intelligence is carried out under conditions of uncertainty of various kinds. Studies show that in problems of this class the uncertainty is related to: 1) incomplete input and current information; 2) fuzzy input information; 3) vaguely developed rules for processing and evaluating information. Significant combinatorial spaces, in particular significant information spaces, were used to model the dynamics of human thinking. The latter has a combinatorial nature and exists in two states: tranquility (convoluted) and dynamics (deployed), with the deployed state unfolding from the convoluted one. The convoluted state is given by an information sign that contains the properties of the expanded space. Information is primarily related to the functioning of the human brain and resides in the subconscious or consciousness in the form of images, fragments of speech, and so on. The transfer of information (thoughts) is carried out with the help of the deployed information space: through the speech space, through gestures and movements, through writing and graphics. Depending on the type of uncertainty, a classification of natural intelligence is given.
We believe that the concept of intelligence is associated with such operations as information processing and evaluation. On this basis, human intelligence is conditionally divided into three levels: 1) a person follows rules that are clearly formulated and described, without analyzing their accuracy (learning rules); 2) the individual analyzes information for accuracy and develops their own rules of conduct under different conditions (rules of self-study); 3) the ability to analyze, process, and evaluate information for accuracy independently of existing rules (rules of intuition). Partial realization of artificial intelligence is achieved through the use of self-tuning algorithms and the modeling of self-organization processes in nature.
Sangyeol Kim, Hyuntai Kim, The Korean Society Of Culture And Convergence
Abstract:
This study examines the technical concept of composition using artificial intelligence, which is attracting attention as a major field of the fourth industrial revolution, together with the development and service cases of the major platforms, with the aim of identifying directions and plans for further development. The concept of artificial intelligence is explained; LSTM and RNN, the representative data-generation models used in AI composition, and GAN, the adversarial generative model, are studied; and the data-generation process and algorithms of artificial intelligence are investigated. In addition, by analyzing music creation platforms that use artificial intelligence, the representative use case of AI composition today, the state of the technology and services is surveyed, along with the creative process of working with an AI composition platform.
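The sequential-generation idea behind the RNN models mentioned in the abstract can be illustrated with a minimal sketch. Everything here is invented for illustration: the note alphabet, the hidden-state size, and the randomly initialised weights stand in for parameters a real composition model would learn from a melody corpus; the point is only the recurrence, where each predicted note is fed back in as the next input.

```python
import math
import random

random.seed(0)

NOTES = ["C", "D", "E", "F", "G", "A", "B"]
H = 8  # hidden-state size (arbitrary for this sketch)

def rand_matrix(rows, cols):
    # Random weights stand in for trained parameters.
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

W_xh = rand_matrix(H, len(NOTES))  # input -> hidden
W_hh = rand_matrix(H, H)           # hidden -> hidden (the recurrence)
W_hy = rand_matrix(len(NOTES), H)  # hidden -> output scores

def step(note_idx, h):
    """One RNN step: combine the current note with the carried-over hidden state."""
    x = [1.0 if i == note_idx else 0.0 for i in range(len(NOTES))]
    h_new = [math.tanh(sum(W_xh[j][i] * x[i] for i in range(len(NOTES)))
                       + sum(W_hh[j][k] * h[k] for k in range(H)))
             for j in range(H)]
    scores = [sum(W_hy[o][j] * h_new[j] for j in range(H)) for o in range(len(NOTES))]
    return h_new, scores

def generate(seed_note, length):
    """Greedy generation: feed each predicted note back in as the next input."""
    h = [0.0] * H
    idx = NOTES.index(seed_note)
    melody = [seed_note]
    for _ in range(length - 1):
        h, scores = step(idx, h)
        idx = scores.index(max(scores))  # argmax decoding
        melody.append(NOTES[idx])
    return melody

melody = generate("C", 8)
print(melody)
```

With untrained weights the output melody is meaningless; training (and, for LSTM, gating of the hidden state) is what turns this recurrence into a usable composition model.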
Zhangozha A.R., Taras Shevchenko National University of Kyiv
Artificial Intelligence, Volume 25, pp 7-13; https://doi.org/10.15407/jai2020.02.007

Abstract:
Using the online game Akinator as an example, the basic principles on which programs of this type are built are considered. Effective techniques are proposed by which artificial intelligence systems can build logical inferences that make it possible to identify an unknown subject from its description (predicate). To confirm the hypotheses considered, a terminological analysis of the author's proposed definition of the program "Akinator" is carried out. Starting from the assumptions of that definition, the article supplements it with definitions presented by other researchers and analyzes their constituent theses. Finally, some proposals are made for next steps in improving the program. The Akinator program became, in its time, one of the best-known online games using artificial intelligence. Although this was never stated directly, it was clear to experts in the field of artificial intelligence that the program uses the techniques of expert systems and is built on inference rules. Expert systems have since lost ground to neural networks within artificial intelligence; however, the case considered in the article involves techniques from both directions, i.e. hybrid systems. Games for filling in semantics interact with the user, expanding their semantic base (knowledge base), and use certain strategies to achieve the best result. The playful form of such semantics-filling programs benefits researchers by involving a large number of players. The article examines the techniques used by the Akinator program and also suggests possible future modifications. The study focuses, first of all, on how the knowledge base of the Akinator program is built: it consists of incomplete sets, which can be filled in and adjusted as a result of further iterations of program runs.
It is important to note our assumption that the order of questions the program asks during the game plays a key role, because it determines its strategy. It was identified that the program is guided by the principles of nonmonotonic logic: the assumptions the program constructs are not final and can be rejected during the game. The three main approaches to semantics acquisition proposed by Jakub Šimko and Mária Bieliková are considered, namely expert work, crowdsourcing, and machine learning. With respect to machine learning, the Akinator program, which uses machine learning to build an effective game strategy, represents a class of hybrid systems combining the principles of the two main directions in artificial intelligence: expert systems and neural networks.
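The question-ordering strategy the abstract highlights resembles classic twenty-questions reasoning: at each turn, ask the attribute that best splits the remaining candidates, e.g. by information gain. A minimal sketch under that assumption follows; the toy knowledge base, entity names, and attributes are all invented for illustration, whereas Akinator's actual base is probabilistic, incomplete, and grows with play.

```python
import math

# Toy knowledge base: each entity maps attribute -> True/False.
KNOWLEDGE = {
    "cat":   {"has_fur": True,  "can_fly": False, "is_pet": True,  "barks": False},
    "eagle": {"has_fur": False, "can_fly": True,  "is_pet": False, "barks": False},
    "dog":   {"has_fur": True,  "can_fly": False, "is_pet": True,  "barks": True},
    "bat":   {"has_fur": True,  "can_fly": True,  "is_pet": False, "barks": False},
}

def entropy(candidates):
    """Shannon entropy of a uniform distribution over the candidate set."""
    n = len(candidates)
    return math.log2(n) if n > 1 else 0.0

def best_question(candidates, asked):
    """Pick the unasked attribute whose answer splits the candidates most evenly."""
    best, best_gain = None, -1.0
    for attr in next(iter(KNOWLEDGE.values())):
        if attr in asked:
            continue
        yes = [c for c in candidates if KNOWLEDGE[c][attr]]
        no = [c for c in candidates if not KNOWLEDGE[c][attr]]
        n = len(candidates)
        gain = (entropy(candidates)
                - (len(yes) / n) * entropy(yes)
                - (len(no) / n) * entropy(no))
        if gain > best_gain:
            best, best_gain = attr, gain
    return best

def identify(oracle):
    """Narrow down candidates by always asking the most informative question."""
    candidates, asked = list(KNOWLEDGE), set()
    while len(candidates) > 1:
        q = best_question(candidates, asked)
        if q is None:
            break  # attributes exhausted; remaining candidates are indistinguishable
        asked.add(q)
        answer = oracle(q)
        candidates = [c for c in candidates if KNOWLEDGE[c][q] == answer]
    return candidates

# The player is thinking of a bat; the oracle answers truthfully.
print(identify(lambda q: KNOWLEDGE["bat"][q]))  # → ['bat']
```

Unlike this sketch, a nonmonotonic system as described in the abstract would not discard candidates irrevocably: answers are treated as defeasible evidence, so an early assumption can be retracted later in the game.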
O.O. Basov, Saint-Petersburg State University of Aerospace Instrumentation
Published: 29 December 2022
Abstract:
Currently there is no special legislative regulation in the Russian Federation that takes into account the specifics of the use of artificial intelligence technologies. Acute issues remain in working out mechanisms of civil liability for harm caused by artificial intelligence systems that have a high degree of autonomy in their decision-making, including ways to compensate for the harm caused by the actions of such systems. Based on an analysis of foreign experience in the legal regulation of civil liability for harm caused by the activities of artificial intelligence systems, and of the experience of forming the legal framework for artificial intelligence in Russia, it is proposed to apply tort liability to activities involving the use of artificial intelligence systems. Compensation for damage caused by such activities would be paid from the funds of a specially created system of compulsory civil liability insurance for developers and owners of artificial intelligence systems, including a newly introduced entity, a specialized operator of artificial intelligence systems, with an insurance deductible provided for on account of the autonomy (unpredictability) of the actions of such systems.