AI Perspectives

Journal Information
EISSN: 2523-398X
Total articles ≅ 6

Articles in this journal

, Bernhard Sick
Published: 16 July 2021
AI Perspectives, Volume 3, pp 1-16; doi:10.1186/s42467-021-00009-8

Abstract:
Catastrophic forgetting means that a trained neural network model gradually forgets previously learned tasks when it is retrained on new tasks. Overcoming this forgetting is a major challenge in machine learning. Numerous continual learning algorithms are very successful in incremental learning of classification tasks, where new samples with their labels appear frequently. However, to the best of our knowledge, no research has so far addressed catastrophic forgetting in regression tasks. This problem has emerged as one of the primary constraints in some applications, such as renewable energy forecasting. This article clarifies problem-related definitions and proposes a new methodological framework that can forecast targets and update itself by means of continual learning. The framework consists of forecasting neural networks and buffers, which store newly collected data from a non-stationary data stream in an application. Once the framework identifies a change in the probability distribution of the data stream, the changed distribution is learned sequentially. The framework is called CLeaR (Continual Learning for Regression Tasks); its components can be flexibly customized for a specific application scenario. We design two sets of experiments to evaluate the CLeaR framework with respect to fitting error (training), prediction error (test), and forgetting ratio. The first is based on an artificial time series and explores how hyperparameters affect the CLeaR framework. The second uses data collected from European wind farms to evaluate the framework's performance in a real-world application. The experimental results demonstrate that the CLeaR framework can continually acquire knowledge from the data stream and improve prediction accuracy. The article concludes with further research issues arising from requirements to extend the framework.
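The abstract describes the framework only at a high level. As a rough illustration of the buffer-and-update idea (not the authors' implementation, which must additionally mitigate forgetting, e.g. via regularization), a minimal PyTorch-style loop might look like the sketch below; the network size, error threshold, buffer size, and toy data stream are all assumptions made for this example.

```python
import torch
import torch.nn as nn

# Forecasting network and optimizer (sizes are arbitrary for this sketch).
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

ERROR_THRESHOLD = 0.1   # assumed trigger for "the distribution has changed"
BUFFER_SIZE = 256       # assumed number of buffered samples before an update
buffer_x, buffer_y = [], []

def stream(steps=2000):
    """Stand-in for a non-stationary data stream (e.g. wind-power measurements)."""
    for _ in range(steps):
        x = torch.randn(8)
        yield x, x.sum(dim=0, keepdim=True)  # toy regression target

for x, y in stream():
    with torch.no_grad():
        err = loss_fn(model(x), y).item()    # prediction error on the new sample
    if err > ERROR_THRESHOLD:                # novel sample: store it in the buffer
        buffer_x.append(x)
        buffer_y.append(y)
    if len(buffer_x) >= BUFFER_SIZE:         # buffer full: learn the changed distribution
        xb, yb = torch.stack(buffer_x), torch.stack(buffer_y)
        for _ in range(10):                  # a few fine-tuning epochs
            optimizer.zero_grad()
            loss_fn(model(xb), yb).backward()
            optimizer.step()
        buffer_x, buffer_y = [], []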
Thomas M. Roehr, Daniel Harnack, Hendrik Wöhrle, Felix Wiebe, Moritz Schilling, Oscar Lima, Malte Langosz, Shivesh Kumar, Sirko Straube, Frank Kirchner
Published: 5 July 2021
AI Perspectives, Volume 3; doi:10.1186/s42467-021-00008-9

Abstract:
In this paper we introduce Q-Rock, a development cycle for the automated self-exploration and qualification of robot behaviors. With Q-Rock, we suggest a novel, integrative approach to automate robot development processes. Q-Rock combines several machine learning and reasoning techniques to deal with the increasing complexity in the design of robotic systems. The Q-Rock development cycle consists of three complementary processes: (1) automated exploration of capabilities that a given robotic hardware provides, (2) classification and semantic annotation of these capabilities to generate more complex behaviors, and (3) mapping between application requirements and available behaviors. These processes are based on a graph-based representation of a robot’s structure, including hardware and software components. A central, scalable knowledge base enables collaboration of robot designers including mechanical, electrical and systems engineers, software developers and machine learning experts. In this paper we formalize Q-Rock’s integrative development cycle and highlight its benefits with a proof-of-concept implementation and a use case demonstration.
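The graph-based representation of a robot's structure is not spelled out in the abstract. The toy sketch below, with made-up component names and the networkx library standing in for whatever representation Q-Rock actually uses, only illustrates how hardware, software, and capability nodes and their dependencies could be modelled and queried.

```python
import networkx as nx

# Illustrative only: a toy graph of a robot's hardware/software components.
robot = nx.DiGraph()
robot.add_node("joint_motor", kind="hardware", interface="CAN")
robot.add_node("imu", kind="hardware", interface="I2C")
robot.add_node("pose_estimator", kind="software", language="C++")
robot.add_node("reach_behavior", kind="capability")

# Edges encode which components a capability or software node depends on.
robot.add_edge("pose_estimator", "imu")
robot.add_edge("reach_behavior", "pose_estimator")
robot.add_edge("reach_behavior", "joint_motor")

# Query the graph, e.g. which hardware a behavior ultimately requires.
needed = {n for n in nx.descendants(robot, "reach_behavior")
          if robot.nodes[n].get("kind") == "hardware"}
print(needed)  # {'imu', 'joint_motor'}
```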
Published: 7 October 2020
AI Perspectives, Volume 2, pp 1-9; doi:10.1186/s42467-020-00007-2

Abstract:
Alan Turing’s 1950 paper, “Computing Machinery and Intelligence,” contains much more than its proposal of the “Turing Test.” Turing imagined the development of what we today call AI by a process akin to the education of a child. Thus, while Turing anticipated “machine learning,” his prescience brings to the foreground the yet unsolved problem of how humans might teach or shape AIs to behave in ways that align with moral standards. Part of the teaching process is likely to entail AIs’ absorbing lessons from human writings. Natural language processing tools are one of the ways computer systems extract knowledge from texts. An example is given of how one such technique, Latent Dirichlet Allocation, can draw out the most prominent themes from works of classical political theory.
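For readers unfamiliar with the technique, a minimal topic-extraction example with Latent Dirichlet Allocation could look like the following; the toy corpus and parameter choices are illustrative only, not those analysed in the article.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny stand-in corpus in the spirit of classical political theory.
documents = [
    "the sovereign holds power over the commonwealth",
    "citizens consent to be governed by laws",
    "property and labour ground the rights of the individual",
    "the general will expresses the liberty of the people",
]

# Bag-of-words counts, then an LDA model with a small number of topics.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the most prominent words per topic.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:5]
    print(f"topic {k}:", ", ".join(vocab[i] for i in top))
```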
Published: 4 September 2020
AI Perspectives, Volume 2, pp 1-12; doi:10.1186/s42467-020-00006-3

Abstract:
This paper presents a perspective on AI that starts by going back to early work on the topic, originating in the theoretical work of Alan Turing. The argument is made that the core idea of these early thoughts, which gives this paper its title, is still relevant today and may actually provide a starting point for the transition from today's functional AI solutions towards integrative or general AI.
, , Oliver Thomas
Published: 26 July 2020
AI Perspectives, Volume 2, pp 1-15; doi:10.1186/s42467-020-00005-4

Abstract:
Although Artificial Intelligence (AI) has become a buzzword for self-organizing IT applications, its relevance to software engineering has hardly been analyzed systematically. This study combines a systematic review of previous research in the field with five qualitative interviews with software developers who use or want to use AI tools in their daily work routines, in order to assess the status of development, future development potentials, and the risks of applying AI to software engineering. The study classifies the insights along the software development life cycle. The analysis shows that the major achievements and future potentials of AI are a) the automation of lengthy routine jobs in software development and testing using algorithms, e.g. for debugging and documentation, b) the structured analysis of big data pools to discover patterns and novel information clusters, and c) the systematic evaluation of these data in neural networks. AI thus contributes to speeding up development processes and realizing development cost reductions and efficiency gains. AI to date depends on man-made structures and is mainly reproductive, but the automation of software engineering routines entails a major advantage: human developers multiply their creative potential when using AI tools effectively.
, Patrick Trampert, Faysal Boughorbel, Janis Sprenger, Matthias Klusch, Klaus Fischer, Christian Kübel, Philipp Slusallek
Published: 3 September 2019
AI Perspectives, Volume 1, pp 1-12; doi:10.1186/s42467-019-0002-0

Abstract:
Hierarchical neural networks with large numbers of layers are the state of the art for most computer vision problems, including image classification, multi-object detection, and semantic segmentation. While the computational demands of training such deep networks can be addressed using specialized hardware, the availability of training data in sufficient quantity and quality remains a limiting factor. The main reasons are that measurement or manual labelling is prohibitively expensive, ethical considerations can limit the generation of data, or a phenomenon in question has been predicted but not yet observed. In this position paper, we present the Digital Reality concept as a structured approach to generate training data synthetically. The central idea is to simulate measurements based on scenes that are generated by parametric models of the real world. By investigating the parameter space defined by such models, training data can be generated in a more controlled way than data captured from real-world situations. We propose the Digital Reality concept and demonstrate its potential in different application domains, including industrial inspection, autonomous driving, smart grid, and microscopy research in materials science and engineering.
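As a schematic illustration of the central idea (simulate measurements from sampled scene parameters, so labels come for free), the sketch below uses placeholder parameter ranges and a dummy render step; it is not the pipeline used in the paper.

```python
import random

def render(scene_params):
    """Placeholder for a physically based simulator or renderer."""
    # A real pipeline would produce an image or sensor measurement here.
    return {"measurement": f"simulated scene with {scene_params}"}

def sample_scene():
    """Draw one point from a parametric scene model (ranges are made up)."""
    return {
        "object_count": random.randint(1, 5),
        "lighting_lux": random.uniform(100.0, 2000.0),
        "camera_angle_deg": random.uniform(-30.0, 30.0),
        "defect_present": random.random() < 0.2,  # e.g. for industrial inspection
    }

# Because the scene parameters are known, every sample is labelled by construction.
dataset = []
for _ in range(1000):
    params = sample_scene()
    dataset.append((render(params), params))
print(len(dataset), dataset[0][1])
```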
Published: 3 September 2019
AI Perspectives, Volume 1, pp 1-2; doi:10.1186/s42467-019-0001-1

Abstract:
Not applicable.
Published: 3 September 2019
AI Perspectives, Volume 1, pp 1-7; doi:10.1186/s42467-019-0003-z

Abstract:
This position paper discusses the requirements and challenges for responsible AI with respect to two interdependent objectives: (i) how to foster research and development efforts toward socially beneficial applications, and (ii) how to take into account and mitigate the human and social risks of AI systems.