Results in Journal Frontiers in Robotics and AI: 926

Michael Unger, Johann Berger
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.664622

Abstract:
Image guidance is a common methodology in minimally invasive procedures. Depending on the type of intervention, various imaging modalities are available; common ones are computed tomography, magnetic resonance tomography, and ultrasound. Robotic systems have been developed to enable and improve procedures using these imaging techniques, although spatial and technological constraints limit the development of versatile robotic systems. This paper offers a brief overview of robotic systems developed for image-guided interventions since 2015 and includes samples of our current research in this field.
Kosmas Dimitropoulos, Petros Daras, Sotiris Manitsaris, Frederic Fol Leymarie, Sylvain Calinon
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.712521

Abstract:
Editorial on the Research Topic Artificial Intelligence and Human Movement in Industries and Creation. Recent advances in human motion sensing technologies and machine learning have enhanced the potential of Artificial Intelligence to improve our quality of life, increase productivity and reshape multiple industries, including cultural and creative industries. To achieve this goal, humans must remain at the center of Artificial Intelligence: AI should learn from humans and collaborate effectively with them. Human-Centred Artificial Intelligence (HAI) is expected to create new opportunities and challenges that cannot yet be foreseen. Any type of programmable entity (e.g., robots, computers, autonomous vehicles, drones, Internet of Things devices) will have different layers of perception and sophisticated HAI algorithms that detect human intentions and behaviors (Psaltis et al., 2017) and learn continuously from them. Thus, every intelligent system will be able to capture human motions, analyze them (Zhang et al., 2019), detect poses and recognize gestures (Chatzis et al., 2020; Stergioulas et al., 2021) and activities (Papastratis et al., 2020; Papastratis et al., 2021; Konstantinidis et al., 2021), including facial expressions and gaze (Bek et al., 2020), enabling natural collaboration with humans. Different sensing technologies, such as optical MoCap systems, wearable inertial sensors, RGB or depth cameras and other sensor modalities, are employed for capturing human movement in the scene and transforming this information into a digital representation. Most researchers focus on a single sensing modality, due to the simplicity and low cost of the final system, and on designing either conventional machine learning algorithms or complex deep learning network architectures for analyzing human motion data (Konstantinidis et al., 2018; Konstantinidis et al., 2020).
Such cost-effective approaches have been applied to a wide range of application domains, including entertainment (Kaza et al., 2016; Baker, 2020), health (Dias et al.; Konstantinidis et al., 2021), education (Psaltis et al., 2017; Stefanidis et al., 2019), sports (Tisserand et al., 2017), robotics (Jaquier et al., 2020; Gao et al., 2021), and art and cultural heritage (Dimitropoulos et al., 2018), showing the great potential of AI technology. It is therefore evident that HAI is currently at the center of scientific debates and technological exhibitions. Developing and deploying intelligent machines is both an economic challenge (e.g., flexibility, simplification, ergonomics) and a societal challenge (e.g., safety, transparency), not only from a factory perspective but also for the real world in general. The papers in this Research Topic adopt different sensing technologies, such as depth sensors, inertial suits, IMU sensors and force-sensing resistors (FSRs), to capture human movement, and present diverse approaches for modeling the temporal data. More specifically, Sakr et al. investigate the feasibility of employing FSRs worn on the arm to measure Force Myography (FMG) signals for isometric force/torque estimation. A two-stage regression strategy is employed to enhance the performance of the FMG bands, in which three regression algorithms, general regression neural network (GRNN), support vector regression (SVR), and random forest (RF) regression, are used in the first stage, while GRNN is used in the second stage. Two cases are considered to explore the performance of the FMG bands in estimating (a) 3-DoF force and 3-DoF torque at once and (b) 6-DoF force and torque. In addition, the impact of sensor placement and the spatial coverage of FMG measurements is studied. Manitsaris et al. 
propose a multivariate time series approach for the recognition of professional gestures and for the forecasting of their trajectories. More specifically, the authors introduce a gesture operational model, which describes how gestures are performed based on assumptions that focus on the dynamic association of body entities, their synergies, and their serial and non-serial mediations, as well as their transitioning over time from one state to another. The assumptions of this model are then translated into an equation system for each body entity through State-Space modeling. The proposed method is evaluated on four industrial datasets that contain gestures, commands and actions. A comprehensive review of machine learning approaches for motor learning is presented by Caramiaux et al. The review outlines existing machine learning models for motor learning and their adaptation capabilities, and identifies three types of adaptation: parameter adaptation in probabilistic models, transfer and meta-learning in deep neural networks, and planning adaptation by reinforcement learning. Dias et al. present an innovative and personalized motor assessment tool capable of monitoring and tracking the behavioral change of Parkinson's disease (PD) patients (mostly related to posture, walking/gait, agility, balance, and coordination impairments). The proposed assessment tool is part of the i-Prognosis Game Suit, which was developed within the framework of the i-Prognosis EU funded project (www.i-prognosis.eu). Six different motor assessment tests integrated into the iPrognosis Games have been designed and developed based on the UPDRS Part III examination. The ability of the proposed assessment tests to reflect motor-skill status, similarly to the UPDRS Part III items, is validated with 27 participants with early and moderate PD. Bikias et al. 
explore the use of IMU sensors for the detection of freezing-of-gait (FoG) episodes in Parkinson's disease patients and present a novel deep learning method. The study investigates the feasibility of a single wrist-worn inertial measurement unit (IMU) for effectively...
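The deep-learning detector itself is beyond a short sketch, but the classic signal-processing baseline such detectors are typically compared against, the "freeze index" (ratio of spectral power in a 3-8 Hz freeze band to a 0.5-3 Hz locomotor band), fits in a few lines. A minimal illustration on synthetic wrist-accelerometer data; the sampling rate, window length, and signals are invented for the demo and this is not the paper's method:

```python
import numpy as np

def freeze_index(accel, fs, win_s=4.0):
    """Classic freeze-index baseline: per-window ratio of spectral power
    in the 'freeze' band (3-8 Hz) to the locomotor band (0.5-3 Hz),
    computed from a single accelerometer axis."""
    n = int(win_s * fs)
    out = []
    for start in range(0, len(accel) - n + 1, n):
        w = accel[start:start + n] - np.mean(accel[start:start + n])
        spec = np.abs(np.fft.rfft(w)) ** 2
        f = np.fft.rfftfreq(n, 1.0 / fs)
        freeze = spec[(f >= 3) & (f < 8)].sum()
        loco = spec[(f >= 0.5) & (f < 3)].sum()
        out.append(freeze / (loco + 1e-12))
    return np.array(out)

fs = 64  # Hz, typical for wearable IMUs
t = np.arange(0, 8, 1 / fs)
walking = np.sin(2 * np.pi * 1.5 * t[:4 * fs])    # ~1.5 Hz normal gait
trembling = np.sin(2 * np.pi * 6.0 * t[4 * fs:])  # ~6 Hz freeze-like tremor
fi = freeze_index(np.concatenate([walking, trembling]), fs)
# fi[0] (gait window) stays low; fi[1] (tremor window) is high
```

A fixed threshold on this index is the usual hand-crafted detector; the learned models discussed above aim to outperform exactly this kind of baseline.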
Corrigendum
Gabriel Dämmer, Michael Lackner, Sonja Laicher, Rüdiger Neumann, Zoltán Major
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.729549

Abstract:
A Corrigendum on Design of an Inkjet-Printed Rotary Bellows Actuator and Simulation of Its Time-Dependent Deformation Behavior by Dämmer, G., Lackner, M., Laicher, S., Neumann, R., and Major, Z. (2021). Front. Robot. AI 8:663158. doi:10.3389/frobt.2021.663158

In the original article, there was a mistake in Figure 13 as published. The angles were too small by a factor of 10 in Figure 13. However, the angular values in the text and all other figures are correct. The corrected Figure 13 appears below.

FIGURE 13. Time-dependent angular position of inkjet-printed rotary bellows actuators in experiments (solid lines) and simulation (dashed lines). Each solid curve is an average of five experiments with four bellows chambers at a particular pressure level.

The authors apologize for this error and state that this does not change the scientific conclusions of the article in any way. The original article has been updated.

Keywords: bellow actuator, printed robotics, design for additive manufacture, multi-material 3D printing, soft pneumatic actuator, time-dependent materials, printed elastomer, PolyJet elastomers

Citation: Dämmer G, Lackner M, Laicher S, Neumann R and Major Z (2021) Corrigendum: Design of an Inkjet-Printed Rotary Bellows Actuator and Simulation of its Time-Dependent Deformation Behavior. Front. Robot. AI 8:729549. doi: 10.3389/frobt.2021.729549

Received: 23 June 2021; Accepted: 24 June 2021; Published: 09 July 2021.

Copyright © 2021 Dämmer, Lackner, Laicher, Neumann and Major. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Gabriel Dämmer, [email protected]
Peter Lloyd, Zaneta Koszowska, Michele Di Lecce, Onaizah Onaizah, James H. Chandler, Pietro Valdastri
Published: 8 July 2021
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.715662

Abstract:
Soft continuum manipulators have the potential to replace traditional surgical catheters, offering greater dexterity and access to previously unfeasible locations for a wide range of interventions, including neurological and cardiovascular. Magnetically actuated catheters are of particular interest due to their potential for miniaturization and remote control. However, challenges remain around the operation of these catheters; one arises when the angle between the actuating field and the local magnetization vector of the catheter exceeds 90°. In this arrangement, deformation generated by the resultant magnetic moment acts to increase magnetic torque, leading to potential instability. This phenomenon can cause unpredictable responses to actuation, particularly for soft, flexible materials. When coupled with the inherent challenges of sensing and localization inside living tissue, this behavior represents a barrier to progress. In this feasibility study we propose and investigate the use of helical fiber reinforcement within magnetically actuated soft continuum manipulators. Using numerical simulation to explore the design space, we optimize fiber parameters to enhance the ratio of torsional to bending stiffness. Through bespoke fabrication of an optimized helix design, we validate a single prototypical two-segment, 40 mm × 6 mm continuum manipulator, demonstrating a 67% reduction in unwanted twisting under actuation.
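The 90° instability described above has a one-line origin: the magnitude of the magnetic torque on a magnetized segment is τ = |m||B| sin θ, which peaks at θ = 90°. Below 90°, rotating toward alignment reduces the torque (self-limiting); beyond 90°, the same rotation initially increases it (positive feedback). A minimal numerical check with illustrative values, not the paper's model:

```python
import numpy as np

# Torque on a magnetized catheter segment at angle theta to the field.
m_B = 1.0                          # |m||B|, arbitrary units
theta = np.radians([60.0, 120.0])  # one angle below 90 deg, one above
tau = m_B * np.sin(theta)
dtau = m_B * np.cos(theta)         # d(tau)/d(theta)

# dtau > 0 at 60 deg: alignment (theta shrinking) lowers the torque.
# dtau < 0 at 120 deg: alignment raises the torque, the positive-feedback
# regime behind the instability the abstract describes.
```

Note that sin 60° = sin 120°, so the two configurations feel the same torque magnitude; only the sign of the gradient distinguishes the stable regime from the unstable one, which is why raising torsional stiffness (as the helical reinforcement does) matters.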
Leif Azzopardi, Martin Halvey, Mateusz Dubiel
Published: 8 July 2021
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.642201

Abstract:
Collaborative virtual agents help human operators to perform tasks in real time. For this collaboration to be effective, human operators must appropriately trust the agent(s) they are interacting with. Multiple factors influence trust, such as the context of interaction, prior experiences with automated systems and the quality of the help offered by agents in terms of its transparency and performance. Most of the literature on trust in automation has identified the performance of the agent as a key factor influencing trust. However, other work has shown that the behavior of the agent, the type of the agent's errors, and the predictability of the agent's actions can influence the likelihood of the user's reliance on the agent and the efficiency of task completion. Our work focuses on how agents' predictability affects cognitive load, performance and users' trust in a real-time human-agent collaborative task. We used an interactive aiming task where participants had to collaborate with different agents that varied in their predictability and performance. This setup uses behavioral information (such as task performance and reliance on the agent) as well as standardized survey instruments to estimate participants' reported trust in the agent, cognitive load and perception of task difficulty. Thirty participants took part in our lab-based study. Our results showed that agents with more predictable behaviors have a more positive impact on task performance, reliance and trust while reducing cognitive workload. In addition, we investigated the human-agent trust relationship by creating models that could predict participants' trust ratings using interaction data. We found that we could reliably estimate participants' reported trust in the agents using information related to performance, task difficulty and reliance. This study provides insights into the behavioral factors that are most meaningful for anticipating complacent or distrusting attitudes toward automation.
With this work, we seek to pave the way for the development of trust-aware agents capable of responding more appropriately to users by being able to monitor components of the human-agent relationships that are the most salient for trust calibration.
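The "predict trust ratings from interaction data" idea reduces, in its simplest form, to a regression from behavioral features onto a reported trust score. A toy sketch on synthetic data; the feature names, weights, and noise level are invented for illustration and do not reproduce the study's model:

```python
import numpy as np

# Synthetic stand-in for interaction data: 30 "participants", three
# behavioral features of the kind the abstract names.
rng = np.random.default_rng(1)
n = 30
performance = rng.uniform(0, 1, n)   # task score, normalized
difficulty = rng.uniform(0, 1, n)    # perceived-difficulty survey item
reliance = rng.uniform(0, 1, n)      # fraction of trials relying on the agent

# Synthetic "reported trust": rises with performance and reliance,
# falls with perceived difficulty (illustrative weights only).
trust = (0.5 * performance - 0.3 * difficulty + 0.4 * reliance
         + rng.normal(0, 0.05, n))

# Ordinary least squares: trust ≈ X @ w (last column is the intercept)
X = np.column_stack([performance, difficulty, reliance, np.ones(n)])
w, *_ = np.linalg.lstsq(X, trust, rcond=None)
pred = X @ w
r2 = 1 - np.sum((trust - pred) ** 2) / np.sum((trust - trust.mean()) ** 2)
```

The recovered coefficient signs (positive for performance and reliance, negative for difficulty) are what a trust-aware agent would monitor online to detect drifting trust calibration.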
Maha Elgarf, Giulia Perugia, Ana Paiva, Christopher Peters, Ginevra Castellano
Published: 8 July 2021
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.652035

Abstract:
In educational scenarios involving social robots, understanding the way robot behaviors affect children’s motivation to achieve their learning goals is of vital importance. It is crucial for the formation of a trust relationship between the child and the robot so that the robot can effectively fulfill its role as a learning companion. In this study, we investigate the effect of a regulatory focus design scenario on the way children interact with a social robot. Regulatory focus theory is a type of self-regulation that involves specific strategies in pursuit of goals. It provides insights into how a person achieves a particular goal, either through a strategy focused on “promotion” that aims to achieve positive outcomes or through one focused on “prevention” that aims to avoid negative outcomes. In a user study, 69 children (7–9 years old) played a regulatory focus design goal-oriented collaborative game with the EMYS robot. We assessed children’s perception of likability and competence and their trust in the robot, as well as their willingness to follow the robot’s suggestions when pursuing a goal. Results showed that children perceived the prevention-focused robot as being more likable than the promotion-focused robot. We observed that a regulatory focus design did not directly affect trust. However, the perception of likability and competence was positively correlated with children’s trust but negatively correlated with children’s acceptance of the robot’s suggestions.
Yufei Hao
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.632006

Abstract:
Grasping and manipulation are challenging tasks that are nonetheless critical for many robotic systems and applications. A century ago, robots were conceived as humanoid automata. While conceptual at the time, this viewpoint remains influential today. Many robotic grippers have been inspired by the dexterity and functionality of the prehensile human hand. However, multi-fingered grippers that emulate the hand often integrate many kinematic degrees-of-freedom, and thus complex mechanisms, which must be controlled in order to grasp and manipulate objects. Soft fingers can facilitate grasping through intrinsic compliance, enabling them to conform to diverse objects. However, as with conventional fingered grippers, grasping via soft fingers involves challenges in perception, computation, and control, because fingers must be placed so as to achieve force closure, which depends on the shape and pose of the object. Emerging soft robotics research on non-anthropomorphic grippers has yielded new techniques that can circumvent fundamental challenges associated with grasping via fingered grippers. Common to many non-anthropomorphic soft grippers are mechanisms for morphological deformation or adhesion that simplify the grasping of diverse objects in different poses, without detailed knowledge of the object geometry. These advantages may allow robots to be used in challenging applications, such as logistics or rapid manufacturing, with lower cost and complexity. In this perspective, we examine challenges associated with grasping via anthropomorphic grippers. We describe emerging soft, non-anthropomorphic grasping methods, and how they may reduce grasping complexities. We conclude by proposing several research directions that could expand the capabilities of robotic systems utilizing non-anthropomorphic grippers.
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.712427

Abstract:
Proponents of welcoming robots into the moral circle have presented various approaches to moral patiency under which determining the moral status of robots seems possible. However, even if we recognize robots as having moral standing, how should we situate them in the hierarchy of values? In particular, who should be sacrificed in a moral dilemma–a human or a robot? This paper answers this question with reference to the most popular approaches to moral patiency. However, the conclusions of a survey on moral patiency do not consider another important factor, namely the law. For now, the hierarchy of values is set by law, and we must take that law into consideration when making decisions. I demonstrate that current legal systems prioritize human beings and even force the active protection of humans. Recent studies have suggested that people would hesitate to sacrifice robots in order to save humans, yet doing so could be a crime. This hesitancy is associated with the anthropomorphization of robots, which are becoming more human-like. Robots’ increasing similarity to humans could therefore lead to the endangerment of humans and the criminal responsibility of others. I propose two recommendations in terms of robot design to ensure the supremacy of human life over that of humanoid robots.
Karel Van Den Bosch, Mark Neerincx
Published: 6 July 2021
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.692811

Abstract:
Becoming a well-functioning team requires continuous collaborative learning by all team members. This is called co-learning, conceptualized in this paper as comprising two alternating iterative stages: partners adapting their behavior to the task and to each other (co-adaptation), and partners sustaining successful behavior through communication. This paper focuses on the first stage in human-robot teams, aiming at a method for the identification of recurring behaviors that indicate co-learning. Studying this requires a task context that allows behavioral adaptation to emerge from the interactions between human and robot. We address the requirements for conducting research into co-adaptation by a human-robot team, and designed a simplified computer simulation of an urban search and rescue task accordingly. A human participant and a virtual robot were instructed to discover how to collaboratively free victims from the rubble of an earthquake. The virtual robot was designed to learn in real time which actions best contributed to good team performance. The interactions between human participants and robots were recorded. The observations revealed patterns of interaction used by human and robot to adapt their behavior to the task and to one another. The results therefore show that our task environment enables us to study co-learning, and suggest that more participant adaptation improved robot learning and thus team-level learning. The identified interaction patterns can emerge in similar task contexts, forming a first description and analysis method for co-learning. Moreover, the identification of interaction patterns supports awareness among team members, providing the foundation for human-robot communication about the co-adaptation (i.e., the second stage of co-learning). Future research will focus on these human-robot communication processes for co-learning.
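The robot's role above, learning in real time which of its actions contribute to team performance, can be reduced to its simplest form: an epsilon-greedy bandit over candidate actions with team performance as the reward signal. The action names, reward values, and parameters below are invented stand-ins, not the paper's implementation:

```python
import numpy as np

# Hypothetical robot actions in a search-and-rescue round, with a hidden
# "contribution to team performance" for each (illustrative values).
rng = np.random.default_rng(3)
true_value = {"lift_rubble": 0.8, "scout": 0.5, "wait": 0.1}
actions = list(true_value)
Q = {a: 0.0 for a in actions}       # running value estimates
counts = {a: 0 for a in actions}

for step in range(2000):
    if rng.random() < 0.1:                       # explore
        a = actions[rng.integers(len(actions))]
    else:                                        # exploit current best
        a = max(Q, key=Q.get)
    reward = true_value[a] + rng.normal(0, 0.1)  # noisy team feedback
    counts[a] += 1
    Q[a] += (reward - Q[a]) / counts[a]          # incremental mean update

best = max(Q, key=Q.get)
```

In the paper's setting the reward additionally depends on what the human partner does, which is exactly what makes the process co-adaptation rather than solo learning; this sketch shows only the robot's half of that loop.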
Rodrigo Moreno, Andres Faiña
Published: 5 July 2021
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.699814

Abstract:
This work presents a platform for the evolution of morphology in full-cycle reconfigurable hardware: the EMERGE (Easy Modular Embodied Robot Generator) modular robot platform. The three parts necessary to implement a full-cycle process, i.e., assembling the modules into morphologies, testing the morphologies, disassembling the modules, and repeating, are described as a step toward a fully autonomous system: the mechanical design of the EMERGE module, extensive tests of manually assembled modules, and automatic assembly and disassembly tests. EMERGE modules are designed to be easy and fast to build: one module is constructed from off-the-shelf and 3D-printed parts in half an hour. Thanks to magnetic connectors, modules are quickly attached and detached to assemble and reconfigure robot morphologies. To test the performance of real EMERGE modules, 30 different morphologies were evolved in simulation, transferred to reality, and tested 10 times each. Manual assembly of these morphologies is aided by a visual guiding tool that uses AprilTag markers to check the positions of the real modules in the morphology against their simulated counterparts and provides color feedback. Assembly takes under 5 min for robots with fewer than 10 modules and increases linearly with the number of modules in the morphology. Tests show that real EMERGE morphologies can reproduce the performance of their simulated counterparts, allowing for the reality gap. Results also show that the magnetic connectors allow modules to disconnect when subjected to high external torques that could otherwise damage them. Module tracking, combined with easy assembly and disassembly, also enables EMERGE modules to be reconfigured using an external robotic manipulator. Experiments demonstrate that it is possible to attach and detach modules from a morphology, as well as to release a module from the manipulator using a passive magnetic gripper. This shows that a completely autonomous, full-cycle evolution of morphology in reconfigurable hardware across different robot topologies is possible and on the verge of being realized. We discuss EMERGE's features and the trade-off between reusability and morphological variability among different approaches to physically implementing evolved robots.
Benjamin Mauzé, Guillaume J. Laurent, Cédric Clévy
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.706070

Abstract:
Parallel Continuum Robots (PCRs) have several advantages over classical articulated robots, notably a large workspace, miniaturization capabilities, and safe human-robot interaction. However, their low accuracy is still a serious drawback. Indeed, several conditions have to be met for a PCR to reach high accuracy, namely a repeatable mechanical structure, a correct kinematic model, and a proper estimation of the model's parameters. In this article, we propose a methodology for reaching micrometer accuracy with a PCR. The approach emphasizes the importance of using a repeatable continuum mechanism, identifying the most influential parameters of an accurate kinematic model of the robot, and measuring them precisely. Experimental results show that the proposed approach reaches an accuracy of 3.3 µm in position and 0.5 mrad in orientation over a 10 mm long circular path. These results push the current limits of PCR accuracy and make PCRs good candidates for high-accuracy automatic positioning tasks.
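"Identifying the most influential parameters of the kinematic model" is, in its most generic form, a sensitivity ranking: perturb each model parameter and measure how much the predicted tip pose moves. A sketch of that idea using a toy planar two-link model as a stand-in (the real PCR kinematics are far more involved; the model, link lengths, and angles here are invented):

```python
import numpy as np

def tip_position(params, q):
    """Toy planar 2-link forward kinematics standing in for a PCR model.
    params = (L1, L2) link lengths in metres; q = joint angles in rad."""
    L1, L2 = params
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def sensitivity(params, q, eps=1e-6):
    """Finite-difference sensitivity of the tip pose to each kinematic
    parameter: a simple way to rank which parameters matter most and
    therefore deserve the most careful measurement during calibration."""
    base = tip_position(params, q)
    sens = []
    for i in range(len(params)):
        p = list(params)
        p[i] += eps
        sens.append(np.linalg.norm(tip_position(p, q) - base) / eps)
    return np.array(sens)

s = sensitivity((0.10, 0.05), np.array([0.3, 0.6]))
# each link length shifts the tip by ~1 m per metre of length error
```

Parameters with large sensitivity values dominate the error budget, which is the rationale behind measuring "the most influential parameters" precisely rather than all of them equally.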
Kanty Rabenorosoa, Morvan Ouisse
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.678486

Abstract:
Shape memory alloys (SMAs) are a group of metallic alloys capable of sustaining large inelastic strains that can be recovered through a specific transformation process between two distinct phases. Owing to their unique and outstanding properties, SMAs have drawn considerable attention in various domains and have recently become appropriate candidates for origami robots, which require bi-directional rotational motion actuation within a limited operational space. However, while longitudinal motion-driven actuators are frequently investigated, studies on SMA-based rotational motion actuation remain very limited in the literature. This work reviews research efforts related to SMA-based actuators for bi-directional rotational motion (BRM), providing a survey and classification of current approaches and design tools that can be applied to origami robots in order to achieve shape change. For this purpose, analytical tools for describing actuator behaviour are presented, followed by characterisation and performance prediction. Afterward, the actuators' design methods, sensing, and control strategies are discussed. Finally, open challenges are discussed.
George P. Jenkinson
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.672315

Abstract:
Soft tactile sensors are an attractive solution when robotic systems must interact with delicate objects in unstructured and obscured environments, such as most medical robotics applications. The soft nature of such a system increases both comfort and safety, while the addition of simultaneous soft active actuation provides additional features and can also improve the sensing range. This paper presents the development of a compact soft tactile sensor which is able to measure the profile of objects and, through an integrated pneumatic system, actuate and change the effective stiffness of its tactile contact surface. We report experimental results which demonstrate the sensor’s ability to detect lumps on the surface of objects or embedded within a silicone matrix. These results show the potential of this approach as a versatile method of tactile sensing with potential application in medical diagnosis.
Thomas George Thuruthel, Egidio Falotico, Lucia Beccai, Fumiya Iida
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.726774

Abstract:
Editorial on the Research Topic Machine Learning Techniques for Soft Robots. Soft robotic technologies have introduced new paradigms in the design and development of robots. This shift in outlook presents new challenges and opportunities for the modeling, control, and design of these robots. Traditional techniques based on analytical models have proven insufficient for these new challenges because of soft robots' highly nonlinear, time-varying and high-dimensional characteristics, coupled with an immense diversity in their design. Machine learning-based approaches provide a promising alternative: learning-based approaches have proven to be a valuable tool for the control, data processing, and design optimization of nonlinear systems in traditional robotics and other scientific disciplines. However, their usage has been largely limited and unexplored in soft robotics, in spite of their potential value. This Research Topic was initiated to investigate and advance new learning-based approaches for the modeling, control, and design of soft robots. The eight articles present novel approaches for modeling, sensing and design optimization. When it comes to modeling soft robotic systems, the field is clearly turning toward recurrent neural networks (RNNs) (Tariverdi et al., 2021; Tsompanas et al., 2021) or hybrid approaches (Johnson et al., 2021). To capture the temporal nonlinearities in a soft robotic system, it is vital to use learning architectures that have dynamic properties. This was demonstrated for real-time dynamic modeling of a soft continuum manipulator in Tariverdi et al. (2021) and for characterization of a Microbial Fuel Cell in Tsompanas et al. (2021). Hybrid models combining physics-based analytical models and deep learning also promise to be an alternative approach, especially when data is scarce, as shown in Johnson et al. (2021). 
A significant portion of the submitted works focuses on sensing and state estimation for soft robots, a topic with wider applications in wearables and biomedical fields. De Barrie et al. (2021) presented a learning-based framework for real-time contact force prediction and stress distribution using deep learning and FEA models. Such techniques are powerful tools for reducing computational complexity without compromising accuracy. Hofer et al. (2021) presented a vision-based sensing approach for state estimation in soft robots using convolutional neural networks. Khin et al. (2021) developed grip-state estimation networks for feedback control of sensorized soft robotic hands. Finally, Raffin et al. (2021) presented ensemble networks to detect and handle sensor failures. Design optimization is one of the biggest challenges in soft robotics due to the computational complexity of parametrized analytical models. Raeisinezhad et al. (2021) presented a deep reinforcement learning (DRL) algorithm for design optimization of a pneumatic soft robotic actuator on a simulated model, the key idea being that DRL methods would be more sample-efficient than traditional optimization methods while offering additional capabilities. In conclusion, learning-based techniques hold strong potential for addressing the major challenges in soft robotics, be it design, modeling, sensing or control. Different learning architectures (recurrent neural networks, deep neural networks, hybrid networks) and approaches (supervised/unsupervised learning, reinforcement learning) have to be adopted based on the problem and application. Finally, machine learning can become a vital tool in soft sensing, in particular for obtaining repeatable and reliable data in spite of the nonlinearities of the constituent materials and/or sensor failures. All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication. 
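The editorial's central modeling point, that memoryless input-output maps cannot capture a soft system's temporal behavior while architectures with dynamic state can, shows up already in the simplest possible dynamic model: linear one-step system identification. The sketch below (not any cited paper's method; the "actuator" and its coefficients are invented) fits both a static and a one-step dynamic model to the same lagged response:

```python
import numpy as np

# Toy data: a "soft actuator" whose bending state x lags the pressure
# input u with first-order dynamics (illustrative coefficients).
rng = np.random.default_rng(2)
T = 500
u = rng.uniform(0, 1, T)
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = 0.9 * x[t] + 0.2 * u[t]   # ground-truth dynamics

def r2(y, pred):
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# Static (memoryless) fit x[t] ≈ c*u[t] + d: fails on a dynamic system,
# because the current state depends on the input history, not u[t].
S = np.column_stack([u, np.ones(T)])
cs, *_ = np.linalg.lstsq(S, x, rcond=None)
r2_static = r2(x, S @ cs)

# One-step dynamic fit x[t+1] ≈ a*x[t] + b*u[t]: recovers the dynamics.
P = np.column_stack([x[:-1], u[:-1]])
(a, b), *_ = np.linalg.lstsq(P, x[1:], rcond=None)
```

RNNs and hybrid physics-plus-learning models generalize exactly this "carry a state forward" structure to the nonlinear, high-dimensional case the editorial describes.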
This work was supported by the SHERO project, a Future and Emerging Technologies (FET) program of the European Commission (grant agreement ID 828818), and the European Union's Horizon 2020 FET-Open program under grant agreement no. 863212 (PROBOSCIS project). The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The handling editor declared a shared affiliation with one of the authors, EF.

References:
De Barrie, D., Pandya, M., Pandya, H., Hanheide, M., and Elgeneidy, K. (2021). A Deep Learning Method for Vision Based Force Prediction of a Soft Fin Ray Gripper Using Simulation Data. Front. Robot. AI 8, 104. doi:10.3389/frobt.2021.631371
Hofer, M., Sferrazza, C., and D'Andrea, R. (2021). A Vision-Based Sensing Approach for a Spherical Soft Robotic Arm. Front. Robot. AI 8, 630935. doi:10.3389/frobt.2021.630935
Johnson, C. C., Quackenbush, T., Sorensen, T., Wingate, D., and Killpack, M. D. (2021). Using First Principles for Deep Learning and Model-Based Control of Soft Robots. Front. Robot. AI 8, 654398. doi:10.3389/frobt.2021.654398
Khin, P. M., Low, J. H., Ang, M. H., and Yeow, C. H. (2021). Development and Grasp Stability Estimation of Sensorized Soft Robotic Hand. Front. Robot. AI 8, 619390. doi:10.3389/frobt.2021.619390
Raeisinezhad, M., Pagliocca, N., Koohbor, B., and Trkov, M. (2021). Design Optimization of a Pneumatic Soft Robotic Actuator Using Model-Based Optimization and Deep Reinforcement Learning. Front. Robot. AI 8, 639102. doi:10.3389/frobt.2021.639102
Raffin, A., Deutschmann, B., and Stulp, F. (2021). Fault-tolerant Six-DoF Pose Estimation for Tendon-Driven Continuum Mechanisms. Front. Robot. AI 8, 619238. doi:10.3389/frobt.2021.619238
...
Siavash Sharifi, Caleb Rux, Nathaniel Sparling, Guangchao Wan, Amir Mohammadi Nasab, Arpith Siddaiah, Pradeep Menezes, Teng Zhang, Wanliang Shan
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.691789

Abstract:
Currently, soft robots primarily rely on pneumatics and geometrical asymmetry to achieve locomotion, which limits their working range, versatility, and other untethered functionalities. In this paper, we introduce a novel approach to achieving locomotion for soft robots through dynamically tunable friction, which addresses these challenges and is achieved by subsurface stiffness modulation (SSM) of a stimuli-responsive component within composite structures. To demonstrate this, we design and fabricate an elastomeric pad made of polydimethylsiloxane (PDMS), which is embedded with a spiral channel filled with a low melting point alloy (LMPA). Once the LMPA strip is melted upon Joule heating, both the compliance of the composite structure and the friction between the composite surface and the opposing surface increase. A series of experiments and finite element analysis (FEA) have been performed to characterize the frictional behavior of these composite pads and elucidate the underlying physics dominating the tunable friction. We also demonstrate that when these composite structures are properly integrated into soft crawling robots inspired by inchworms and earthworms, the differences in friction between the two ends of these robots through SSM can potentially be used to generate translational locomotion for untethered crawling robots.
, Patrik Jonell, Dimosthenis Kontogiorgos, Kenneth Funes Mora, Jean-Marc Odobez, Joakim Gustafson
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.555913

Abstract:
Listening to one another is essential to human-human interaction. In fact, we humans spend a substantial part of our day listening to other people, in private as well as in work settings. Attentive listening serves the function of gathering information for oneself, but at the same time, it also signals to the speaker that he/she is being heard. To deduce whether our interlocutor is listening to us, we rely on reading his/her nonverbal cues, very much like how we also use nonverbal cues to signal our attention. Such signaling becomes more complex when we move from dyadic to multi-party interactions. Understanding how humans use nonverbal cues in a multi-party listening context not only increases our understanding of human-human communication but also aids the development of successful human-robot interactions. This paper aims to bring together previous analyses of listener behavior in human-human multi-party interaction and to provide novel insights into gaze patterns between the listeners in particular. We investigate whether the gaze patterns and feedback behavior observed in human-human dialogue are also beneficial for the perception of a robot in multi-party human-robot interaction. To answer this question, we implement an attentive listening system that generates multi-modal listening behavior based on our human-human analysis. We compare our system to a baseline system that does not differentiate between listener types in its behavior generation, and evaluate it in terms of the participants' perception of the robot, their behavior, as well as the perception of third-party observers.
Fabian Winter, Tobias Wilken, Martin Bammerlin, Julia Shawarba, Christian Dorfer,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.695363

Abstract:
Objectives: We recently introduced a navigated, robot-driven laser beam craniotomy for use with stereoelectroencephalography (SEEG) applications. This method was intended to substitute the hand-held electric power drill in an ex vivo study. The purpose of this in vivo non-recovery pilot study was to acquire data for the depth control unit of this laser device, to test the feasibility of cutting bone channels, and to assess dura perforation and possible cortex damage related to cold ablation. Methods: Multiple holes suitable for SEEG bone channels were planned for the superior portion of two pig craniums using surgical planning software and a frameless, navigated technique. The trajectories were planned to avoid cortical blood vessels using magnetic resonance angiography. Each trajectory was converted into a series of circular paths to cut bone channels. The cutting strategy for each hole involved two modes: a remaining bone thickness mode and a cut through (CTR) mode. The remaining bone thickness mode is an automatic coarse approach in which the cutting depth is measured in real time using optical coherence tomography (OCT). In this mode, a pre-set thickness of remaining bone, in mm, is left by automatically comparing the bone thickness from computed tomography with the OCT depth. In the CTR mode, the cut through at lower cutting energies is managed by observing the cutting site with real-time video. Results: Neither anesthesia protocol showed any irregularities. In total, 19 bone channels were cut in the two specimens. All channels were executed according to the planned cutting strategy using the frameless navigation of the robot-driven laser device. The dura showed minor damage after one laser beam and severe damage after two and three laser beams. The cortex was not damaged. As soon as the cut through was obtained, we observed that moderate cerebrospinal fluid leakage impeded the cutting efficiency and interfered with the visualization for depth control. The coaxial camera provided a live video feed in which cut through of the bone could be identified in 84% of cases. Conclusion: Inflowing cerebrospinal fluid disturbed the OCT signals and, therefore, the current CTR method could not be reliably applied. Video imaging is a candidate for observing a successful cut through. OCT and video imaging may be used for depth control to implement an updated SEEG bone channel cutting strategy in the future.
, Elif Tutku Tunalı, Cansu Oranç, Tilbe Göksun, Aylin C. Küntay
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.679893

Abstract:
This study used an online second language (L2) vocabulary lesson to evaluate whether the physical body (i.e., embodiment) of a robot tutor has an impact on how the learner learns from the robot. In addition, we tested how individual differences in attitudes toward robots, first impressions of the robot, anxiety in learning L2, and personality traits may be related to L2 vocabulary learning. One hundred Turkish-speaking young adults were taught eight English words in a one-on-one Zoom session either with a NAO robot tutor (N = 50) or with a voice-only tutor (N = 50). The findings showed that participants learned the vocabulary equally well from the robot and voice tutors, indicating that the physical embodiment of the robot did not change learning gains in a short vocabulary lesson. Further, negative attitudes toward robots had negative effects on learning for participants in the robot tutor condition, but first impressions did not predict vocabulary learning in either of the two conditions. L2 anxiety, on the other hand, negatively predicted learning outcomes in both conditions. We also report that attitudes toward robots and the impressions of the robot tutor remained unchanged before and after the lesson. As one of the first to examine the effectiveness of robots as an online lecturer, this study presents an example of comparable learning outcomes regardless of physical embodiment.
Michael S. Lee, Henny Admoni, Reid Simmons
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.693050

Abstract:
As robots continue to acquire useful skills, their ability to teach their expertise will provide humans the two-fold benefit of learning from robots and collaborating fluently with them. For example, robot tutors could teach handwriting to individual students and delivery robots could convey their navigation conventions to better coordinate with nearby human workers. Because humans naturally communicate their behaviors through selective demonstrations, and comprehend others’ through reasoning that resembles inverse reinforcement learning (IRL), we propose a method of teaching humans based on demonstrations that are informative for IRL. But unlike prior work that optimizes solely for IRL, this paper incorporates various human teaching strategies (e.g. scaffolding, simplicity, pattern discovery, and testing) to better accommodate human learners. We assess our method with user studies and find that our measure of test difficulty corresponds well with human performance and confidence, and also find that favoring simplicity and pattern discovery increases human performance on difficult tests. However, we did not find a strong effect for our method of scaffolding, revealing shortcomings that indicate clear directions for future work.
Helena Webb, Morgan Dumitru, Anouk van Maris, Katie Winkle, , Alan Winfield
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.644336

Abstract:
The development of responsible robotics requires paying attention to responsibility within the research process in addition to responsibility as the outcome of research. This paper describes the preparation and application of a novel method to explore hazardous human-robot interactions. The Virtual Witness Testimony role-play interview is an approach that enables participants to engage with scenarios in which a human being comes to physical harm whilst a robot is present and may have had a malfunction. Participants decide what actions they would take in the scenario and are encouraged to provide their observations and speculations on what happened. Data collection takes place online, a format that provides convenience as well as a safe space for participants to role-play a hazardous encounter with minimal risk of suffering discomfort or distress. We provide a detailed account of how our initial set of Virtual Witness Testimony role-play interviews were conducted and describe the ways in which the approach proved efficient, generated useful findings, and upheld our project commitments to Responsible Research and Innovation. We argue that the Virtual Witness Testimony role-play interview is a flexible and fruitful method that can be adapted to benefit research in human-robot interaction and advance responsibility in robotics.
, Kazuhiro Tamura, Mihoko Otake-Matsuura
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.644964

Abstract:
As the elderly population grows worldwide, living a healthy and full life as an older adult is becoming a topic of great interest. One key factor in, and severe challenge to, maintaining quality of life in older adults is cognitive decline. Assistive robots have been proposed to help older adults with issues such as social isolation and dependent living. Only a few studies have reported positive effects of dialogue robots on cognitive function, but conversation is discussed as a promising intervention because it involves various cognitive tasks. Existing dialogue-robot studies have reported on placing dialogue robots in elderly homes and allowing them to interact with residents. However, it is difficult to reproduce these experiments, since the participants' characteristics influence the experimental conditions, especially at home. Moreover, most dialogue systems are not designed to set experimental conditions without on-site support. This study proposes a novel design method that uses a dialogue-based robot system for cognitive training at home. We define the challenges of realizing cognitive function training through daily communication and the requirements for meeting them. Those requirements are designed to satisfy detailed conditions, such as the duration, frequency, and starting time of dialogue, without on-site support. Our system displays photos and gives original stories to provide contexts for dialogue that help the robot maintain a conversation for each story. The system then schedules dialogue sessions according to the participant's plan. The robot prompts the user to ask a question and then responds to the question while changing its facial expression. This question-answering procedure continues for a specific duration (4 min). To verify the effectiveness and implementation of our design method, we conducted three user studies with 35 elderly participants, performing prototype-, laboratory-, and home-based experiments. Through these experiments, we evaluated the current datasets, user experience, and feasibility for home use. We report on and discuss the older adults' attitudes toward the robot and the number of turns during dialogues. We also classify the types of utterances and identify user needs. Herein, we outline the findings of this study, describe the system's essential characteristics for experiments toward daily cognitive training, and explain further feature requests.
, David Howard
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.684304

Abstract:
Multi-level evolution (MLE) is a novel robotic design paradigm which decomposes the design problem into layered sub-tasks that involve concurrent search for appropriate materials, component geometry and overall morphology. This has a number of advantages, mainly in terms of quality and scalability. In this paper, we present a hierarchical approach to robotic design based on the MLE architecture. The design problem involves finding a robotic design which can be used to perform a specific locomotion task. At the materials layer, we put together a simple collection of materials which are represented by combinations of mechanical properties such as friction and restitution. At the components layer we combine these materials with geometric design to form robot limbs. Finally, at the robot layer we introduce these evolved limbs into robotic body-plans and learn control policies to form complete robots. Quality-diversity algorithms at each level allow for the discovery of a wide variety of reusable elements. The results strongly support the initial claims for the benefits of MLE, allowing for the discovery of designs that would otherwise be difficult to achieve with conventional design paradigms.
, Luca Lach, Matthias Plappert, Timo Korthals, Robert Haschke, Helge Ritter
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.538773

Abstract:
Deep Reinforcement Learning techniques demonstrate advances in the domain of robotics. One of the limiting factors is the large number of interaction samples usually required for training in simulated and real-world environments. In this work, we demonstrate for a set of simulated dexterous in-hand object manipulation tasks that tactile information can substantially increase sample efficiency for training (by up to threefold or more). We also observe an improvement in performance (up to 46%) after adding tactile information. To examine the role of tactile-sensor parameters in these improvements, we included experiments with varied sensor-measurement accuracy (ground truth continuous values, noisy continuous values, Boolean values), and varied spatial resolution of the tactile sensors (927 sensors, 92 sensors, and 16 pooled sensor areas in the hand). To facilitate further studies and comparisons, we make these touch-sensor extensions available as a part of the OpenAI Gym Shadow-Dexterous-Hand robotics environments.
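The experiments above vary tactile resolution from 927 individual sensors down to 16 pooled sensor areas. A minimal sketch of how such pooling might be computed, assuming a fixed mapping from sensors to regions (the mapping, values, and function name here are illustrative, not the Gym environment's API):

```python
import numpy as np

def pool_tactile(readings, region_ids, n_regions):
    """Average raw tactile readings into coarser regions.

    readings:   (n_sensors,) array of raw sensor values
    region_ids: (n_sensors,) integer array mapping each sensor to a region
    n_regions:  number of pooled regions (e.g. 16 for the whole hand)
    """
    pooled = np.zeros(n_regions)
    counts = np.bincount(region_ids, minlength=n_regions)
    sums = np.bincount(region_ids, weights=readings, minlength=n_regions)
    nonzero = counts > 0
    pooled[nonzero] = sums[nonzero] / counts[nonzero]  # mean per region
    return pooled

# Example: 927 simulated sensor values pooled into 16 regions.
rng = np.random.default_rng(0)
raw = rng.random(927)
regions = rng.integers(0, 16, size=927)
obs = pool_tactile(raw, regions, 16)
```

The pooled vector `obs` would replace the full-resolution reading in the policy's observation, trading spatial detail for a much smaller input dimension.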
Andrew Isbister, Nicola Y. Bailey,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.667205

Abstract:
Continuum robots are a class of robotic devices characterized by their flexibility and dexterity, making them ideal for an active endoscope. Instead of articulated joints, they have flexible backbones that can be manipulated remotely, usually through tendons secured onto structures attached to the backbone. This structure makes them lightweight and well suited to miniaturization for endoscopic applications. However, their flexibility poses technical challenges in the modeling and control of these devices, especially when closed-loop control is needed, as is the case in medical applications. There are two main approaches to modeling continuum robots: the first is to theoretically model the behavior of the backbone and its interaction with the tendons, while the second is to collect experimental observations and retrospectively fit a model that approximates the apparent behavior. Both approaches are affected by the complexity of continuum robots, either through limited model accuracy and high computational time (theoretical method) or through missed complex system interactions and lack of expandability (experimental method). In this work, theoretical and experimental descriptions of an endoscopic continuum robot are merged. A simplified yet representative mathematical model of a continuum robot is developed, in which the backbone model is based on Cosserat rod theory and is coupled to the tendon tensions. A robust numerical technique with low computational costs is formulated. A bespoke experimental facility with automated motion of the backbone via precise control of tendon tension leads to a robust and detailed description of the system behavior, provided through a contactless sensor. The resulting facility achieves a real-world mean positioning error of 3.95% of the backbone length for the examined range of tendon tensions, which compares favourably with existing approaches.
Moreover, it incorporates hysteresis behavior that could not be predicted by the theoretical modeling alone, reinforcing the benefits of the hybrid approach. The proposed workflow is theoretically grounded and experimentally validated, allowing precise prediction of the continuum robot's behavior in line with realistic observations. This accurate estimation, together with the fact that the model is geometrically agnostic, enables it to be scaled for various robotic endoscopes.
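For reference, backbone models based on Cosserat rod theory typically build on the standard static equilibrium equations, shown here in their classical form (the paper's specific tendon-coupling terms are not reproduced):

```latex
\frac{d\mathbf{n}}{ds} + \mathbf{f}(s) = \mathbf{0}, \qquad
\frac{d\mathbf{m}}{ds} + \frac{d\mathbf{p}}{ds}\times\mathbf{n} + \mathbf{l}(s) = \mathbf{0},
```

where \(\mathbf{p}(s)\) is the backbone centerline parametrized by arc length \(s\), \(\mathbf{n}\) and \(\mathbf{m}\) are the internal force and moment, and \(\mathbf{f}\) and \(\mathbf{l}\) are the distributed external force and moment per unit length, into which the tendon loads enter.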
, Michael Falk,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.662036

Abstract:
Artificial intelligence has a rich history in literature; fiction has shaped how we view artificial agents and their capacities in the real world. This paper looks at embodied examples of human-machine co-creation from the literature of the Long 18th Century (1650–1850), examining how older depictions of creative machines could inform and inspire modern day research. The works are analyzed from the perspective of design fiction with special focus on the embodiment of the systems and the creativity exhibited by them. We find that the chosen examples highlight the importance of recognizing the environment as a major factor in human-machine co-creative processes and that some of the works seem to precede current examples of artificial systems reaching into our everyday lives. The examples present embodied interaction in a positive, creativity-oriented way, but also highlight ethical risks of human-machine co-creativity. Modern day perceptions of artificial systems and creativity can be limited to some extent by the technologies available; fictitious examples from centuries past allow us to examine such limitations using a Design Fiction approach. We conclude by deriving four guidelines for future research from our fictional examples: 1) explore unlikely embodiments; 2) think of situations, not systems; 3) be aware of the disjunction between action and appearance; and 4) consider the system as a situated moral agent.
Julia Geerts, Jan de Wit,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.657291

Abstract:
Brainstorming is a creative technique used to support productivity and creativity during the idea generation phase of an innovation process. In professional practice, a facilitator structures, regulates, and motivates those behaviors of participants that help maintain productivity and creativity during a brainstorm. Emerging technologies, such as social robots, are being developed to support or even automate the facilitator’s role. However, little is known about whether and how brainstorming with a social robot influences productivity. To take a first look, we conducted a between-subjects experiment (N = 54) that explored 1) whether brainstorming with a Wizard-of-Oz operated robot facilitator, compared to with a human facilitator, influences productivity; and 2) whether any effects on productivity might be explained by the robot’s negative effects on social anxiety and evaluation apprehension. The results showed no evidence for an effect of brainstorming with a teleoperated robot facilitator, compared to brainstorming directly with a human facilitator, on productivity. Although the results did suggest that overall, social anxiety caused evaluation apprehension, and evaluation apprehension negatively affected productivity, there was no effect of brainstorming with a robot facilitator on this relationship. Herewith, the present study contributes to an emerging body of work on the efficacy and mechanisms of the facilitation of creative work by social robots.
Balazs P. Vagvolgyi, Mikhail Khrenov, Jonathan Cope, Anton Deguet, Peter Kazanzides, Sajid Manzoor, Russell H. Taylor,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.612964

Abstract:
Since the first reports of a novel coronavirus (SARS-CoV-2) in December 2019, over 33 million people have been infected worldwide and approximately 1 million people worldwide have died from the disease caused by this virus, COVID-19. In the United States alone, there have been approximately 7 million cases and over 200,000 deaths. This outbreak has placed an enormous strain on healthcare systems and workers. Severe cases require hospital care, and 8.5% of patients require mechanical ventilation in an intensive care unit (ICU). One major challenge is the necessity for clinical care personnel to don and doff cumbersome personal protective equipment (PPE) in order to enter an ICU to make simple adjustments to ventilator settings. Although future ventilators and other ICU equipment may be controllable remotely through computer networks, the enormous installed base of existing ventilators does not have this capability. This paper reports the development of a simple, low-cost telerobotic system that permits adjustment of ventilator settings from outside the ICU. The system consists of a small Cartesian robot capable of operating a ventilator touch screen with camera vision control via a wirelessly connected tablet master device located outside the room. Engineering system tests demonstrated that the open-loop mechanical repeatability of the device was 7.5 mm, and that the average positioning error of the robotic finger under visual servoing control was 5.94 mm. Successful usability tests in a simulated ICU environment were carried out and are reported. In addition to enabling a significant reduction in PPE consumption, the prototype system has been shown in a preliminary evaluation to significantly reduce the total time required for a respiratory therapist to perform typical setting adjustments on a commercial ventilator, including donning and doffing PPE, from 271 to 109 s.
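The robotic finger above is positioned by visual servoing on the camera view. A minimal sketch of the underlying idea, a proportional image-based servo loop (the camera model, gain, and pixel scale below are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def visual_servo_step(finger_xy, target_px, observe, gain=0.5):
    """One proportional visual-servoing update.

    finger_xy: current Cartesian position of the fingertip (mm)
    target_px: desired pixel location of the fingertip in the camera image
    observe:   camera model mapping Cartesian position -> pixel location
    gain:      proportional gain (assumed value; would be tuned on hardware)
    """
    error_px = target_px - observe(finger_xy)   # error measured in pixels
    px_per_mm = 4.0                             # assumed camera scale
    return finger_xy + gain * error_px / px_per_mm

# Toy camera: pure scaling, so the loop contracts the pixel error each step.
observe = lambda xy: 4.0 * xy
pos = np.array([10.0, -5.0])
target = np.array([120.0, 80.0])
for _ in range(30):
    pos = visual_servo_step(pos, target, observe)
```

Because the correction is proportional to the remaining pixel error, the loop tolerates a roughly known camera scale; the 5.94 mm average error reported above reflects the accuracy achievable on the real system.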
Aylar Akbari, Faezeh Haghverd,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.612331

Abstract:
During the COVID-19 pandemic, the higher susceptibility of post-stroke patients to infection calls for extra safety precautions. Despite the imposed restrictions, early neurorehabilitation cannot be postponed due to its paramount importance for improving motor and functional recovery chances. Utilizing accessible state-of-the-art technologies, home-based rehabilitation devices have been proposed as a sustainable solution in the current crisis. In this paper, a comprehensive review of home-based rehabilitation technologies developed over the last 10 years (2011–2020) is presented, categorizing them into upper- and lower-limb devices and considering both commercialized devices and state-of-the-art research systems. Mechatronic, control, and software aspects of the systems are discussed to provide a classified roadmap for home-based systems development. Subsequently, a conceptual framework for the development of smart and intelligent community-based home rehabilitation systems based on novel mechatronic technologies is proposed. In this framework, each rehabilitation device acts as an agent in the network, using internet of things (IoT) technologies, which facilitates learning from the recorded data of the other agents, as well as tele-supervision of the treatment by an expert. The presented design paradigm based on the above-mentioned leading technologies could lead to the development of promising home rehabilitation systems, which encourage stroke survivors to engage in under-supervised or unsupervised therapeutic activities.
Wei Yin, Hanjin Wen, Zhengtong Ning, Jian Ye, Zhiqiang Dong,
Published: 22 June 2021
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.626989

Abstract:
Reliable and robust fruit-detection algorithms in nonstructural environments are essential for the efficient use of harvesting robots. The pose of fruits is crucial for guiding robots to approach target fruits for collision-free picking. To achieve accurate picking, this study investigates an approach to detect fruit and estimate its pose. First, the state-of-the-art mask region convolutional neural network (Mask R-CNN) is deployed to segment binocular images and output the mask image of the target fruit. Next, a grape point cloud extracted from the images is filtered and denoised to obtain an accurate grape point cloud. Finally, the accurate grape point cloud is used with the RANSAC algorithm for grape cylinder-model fitting, and the axis of the cylinder model is used to estimate the pose of the grape. A dataset was acquired in a vineyard to evaluate the performance of the proposed approach in a nonstructural environment. The fruit-detection results on 210 test images show that the average precision, recall, and intersection over union (IoU) are 89.53%, 95.33%, and 82.00%, respectively. The detection and point cloud segmentation for each grape took approximately 1.7 s. The demonstrated performance of the developed method indicates that it can be applied to grape-harvesting robots.
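The pose estimate above comes from the axis of a RANSAC-fitted cylinder. As a simplified stand-in for that fit (an assumption for illustration, not the paper's algorithm), the dominant axis of a roughly cylindrical point cloud can be estimated by principal component analysis:

```python
import numpy as np

def estimate_cluster_axis(points):
    """Estimate the dominant axis of a roughly cylindrical point cloud.

    PCA stand-in for a cylinder-model fit: the first principal component
    of the centered points approximates the cylinder axis.
    points: (n, 3) array of 3D points.
    """
    centered = points - points.mean(axis=0)
    # Right singular vectors of the centered cloud are its principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    return axis / np.linalg.norm(axis)

# Synthetic cluster: points scattered around a known vertical axis.
rng = np.random.default_rng(1)
true_axis = np.array([0.0, 0.0, 1.0])
t = rng.uniform(-5, 5, size=(500, 1))
cloud = t * true_axis + 0.3 * rng.standard_normal((500, 3))
est = estimate_cluster_axis(cloud)
```

Unlike RANSAC, PCA has no outlier rejection, which is why a robust fit is preferable on real, imperfectly segmented grape clouds.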
, Kelly Merckaert, Bram Vanderborght, Marco M. Nicotra
Published: 22 June 2021
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.663809

Abstract:
This article provides a theory for provably safe and computationally efficient distributed constrained control, and describes an application to a swarm of nano-quadrotors with limited on-board hardware and subject to multiple state and input constraints. We provide a formal extension of the explicit reference governor framework to address the case of distributed systems. The efficacy, robustness, and scalability of the proposed theory are demonstrated by an extensive experimental validation campaign and a comparative simulation study on single and multiple nano-quadrotors. The control strategy is implemented in real time on board palm-sized unmanned aerial vehicles, and achieves safe swarm coordination without relying on any offline trajectory computations.
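The explicit-reference-governor idea can be sketched on a scalar toy system (a one-dimensional illustration under strong simplifying assumptions, not the paper's distributed multi-quadrotor formulation): the applied reference moves toward the desired reference at a rate proportional to a dynamic safety margin, so the state constraint is never violated.

```python
def erg_simulate(r, x_max, steps=4000, dt=0.01, kappa=2.0):
    """Explicit reference governor on the scalar system x' = -x + v.

    r:     desired reference
    x_max: state constraint x <= x_max
    The applied reference v moves toward r at a rate proportional to the
    dynamic safety margin (distance of the worst-case trajectory from
    the constraint boundary). Gains are illustrative values.
    """
    x, v = 0.0, 0.0
    for _ in range(steps):
        # For x' = -x + v the trajectory stays between x and v,
        # so the worst case over the future is max(x, v).
        margin = max(x_max - max(x, v), 0.0)
        err = r - v
        direction = 0.0 if err == 0 else (1.0 if err > 0 else -1.0)
        v += dt * kappa * margin * direction   # governor update
        x += dt * (-x + v)                     # plant integration
    return x, v

# If r violates the constraint, the governed state settles at the boundary.
x_sat, v_sat = erg_simulate(r=2.0, x_max=1.0)
```

When `r` is feasible the governor converges to it; when it is not, `v` (and hence `x`) saturates just below `x_max` instead of overshooting.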
Chen-Lung Lu, Zi-Yan Liu, Jui-Te Huang, Ching-I Huang, Bo-Hui Wang, Yi Chen, Nien-Hsin Wu, , Laura Giarré, Pei-Yi Kuo
Published: 22 June 2021
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.654132

Abstract:
Facilitating navigation in pedestrian environments is critical for enabling people who are blind and visually impaired (BVI) to achieve independent mobility. A deep reinforcement learning (DRL)–based assistive guiding robot with ultrawide-bandwidth (UWB) beacons that can navigate through routes with designated waypoints was designed in this study. Typically, a simultaneous localization and mapping (SLAM) framework is used to estimate the robot pose and navigational goal; however, SLAM frameworks are vulnerable in certain dynamic environments. The proposed navigation method is a learning approach based on state-of-the-art DRL and can effectively avoid obstacles. When used with UWB beacons, the proposed strategy is suitable for environments with dynamic pedestrians. We also designed a handle device with an audio interface that enables BVI users to interact with the guiding robot through intuitive feedback. The UWB beacons were installed with an audio interface to obtain environmental information. The on-handle and on-beacon verbal feedback provides points of interests and turn-by-turn information to BVI users. BVI users were recruited in this study to conduct navigation tasks in different scenarios. A route was designed in a simulated ward to represent daily activities. In real-world situations, SLAM-based state estimation might be affected by dynamic obstacles, and the visual-based trail may suffer from occlusions from pedestrians or other obstacles. The proposed system successfully navigated through environments with dynamic pedestrians, in which systems based on existing SLAM algorithms have failed.
Noel Cortés-Pérez,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.542717

Abstract:
A mirror-based active system capable of changing the viewing direction of a pre-existing fixed camera is presented. The aim of this research work is to extend the perceptual tracking capabilities of an underwater robot without altering its structure. The ability to control the viewing direction allows the robot to explore its entire surroundings without any actual displacement, which can be useful for more effective motion planning and for different navigation strategies, such as object tracking and/or obstacle evasion, which are of great importance for natural preservation in environments as complex and fragile as coral reefs. Active vision systems based on mirrors have been used mainly on terrestrial platforms to capture the motion of fast projectiles using high-speed cameras of considerable size and weight, but they have not been used on underwater platforms. In this sense, our approach incorporates a lightweight design adapted to an underwater robot using affordable and easy-access technology (i.e., 3D printing). Our active system consists of two arranged mirrors, one of which remains static in front of the robot's camera, while the orientation of the second mirror is controlled by two servomotors. Object tracking is performed using only the pixels contained in the homography of a defined area in the active mirror. The HSV color space is used to reduce the effects of lighting changes. Since the color and geometry of the tracked object are known beforehand, a window filter is applied over the H-channel for color blob detection; then, noise is filtered and the object's centroid is estimated. If the object is lost, a Kalman filter is applied to predict its position. Finally, with this information, an image-based PD controller computes the servomotor articular values. We have carried out experiments in real environments, testing our active vision system in an object-tracking application where an artificial object is manually displaced on the periphery of the robot and the mirror system is automatically reconfigured to keep the object focused by the camera, achieving satisfactory real-time results for detecting objects of low complexity, even in poor lighting conditions.
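When the tracked object is lost, the abstract applies a Kalman filter to predict its position. A minimal constant-velocity Kalman filter for 2D pixel tracking might look as follows (the state model, noise covariances, and class name are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter for 2D pixel tracking.

    State: [x, y, vx, vy]. Noise covariances are illustrative values.
    """
    def __init__(self, dt=1.0):
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt   # position += velocity * dt
        self.H = np.eye(2, 4)              # only position is observed
        self.Q = 1e-3 * np.eye(4)          # process noise
        self.R = 1.0 * np.eye(2)           # measurement noise
        self.x = np.zeros(4)
        self.P = 10.0 * np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                  # predicted pixel position

    def update(self, z):
        y = z - self.H @ self.x            # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Track a target moving at (2, 1) px/frame, then coast when it is "lost":
kf = ConstantVelocityKF()
for k in range(20):
    kf.predict()
    kf.update(np.array([2.0 * k, 1.0 * k]))
lost_prediction = kf.predict()             # prediction with no measurement
```

While the object stays lost, repeated `predict()` calls extrapolate the last estimated velocity, giving the PD controller a plausible target until the color detector reacquires the blob.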
Ji Chen, Jon Hochstein, Christina Kim, Luke Tucker, Lauren E. Hammel, Diane L. Damiano,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.702137

Abstract:
Gait training via a wearable device in children with cerebral palsy (CP) offers the potential to increase therapy dosage and intensity compared to current approaches. Here, we report the design and characterization of a pediatric knee exoskeleton (P.REX) with a microcontroller-based, multi-layered closed-loop control system that provides individualized control capability. Exoskeleton performance was evaluated through benchtop and human subject testing. Step-response tests show that the average 90% rise time was 26 ± 0.2 ms for 5 Nm, 22 ± 0.2 ms for 10 Nm, and 32 ± 0.4 ms for 15 Nm. The torque bandwidth of P.REX was 12 Hz, and the output impedance was less than 1.8 Nm with control on (Zero mode). Three different control strategies can be deployed to assist knee extension: state-based assistance, impedance-based trajectory tracking, and real-time adaptive control. One participant with typical development (TD) and one participant with crouch gait from CP were recruited to evaluate P.REX in overground walking tests. Data from the participant with TD were used to validate control system performance: kinematic and kinetic data were collected by motion capture and compared with the exoskeleton's on-board sensors, with results demonstrating that the control system functioned as intended. The data from the participant with CP are part of a larger ongoing study. Results for this participant compare walking with P.REX in two control modes: a state-based approach that provided constant knee extension assistance during early stance, mid-stance, and late swing (Est+Mst+Lsw mode), and an Adaptive mode providing knee extension assistance proportional to the estimated knee moment during stance. Both were well tolerated and significantly improved knee extension compared to walking without extension assistance (Zero mode).
There was less reduction in gait speed during use of the adaptive controller, suggesting that it may be more intuitive than state-based constant assistance for this individual. Future work will investigate the effects of exoskeleton assistance during overground gait training in children with neurological disorders and will aim to identify the optimal individualized control strategy for exoskeleton prescription.
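The 90% rise times reported in the abstract above can be extracted from sampled step-response data with a simple threshold-crossing check. The sketch below uses a synthetic first-order torque response with an assumed time constant, not the authors' actual P.REX data or pipeline:

```python
import numpy as np

def rise_time_90(t, y, target):
    """Time at which the response first reaches 90% of the commanded target."""
    threshold = 0.9 * target
    idx = np.argmax(y >= threshold)  # index of first sample at/above threshold
    return t[idx]

# Synthetic first-order torque step response (illustrative stand-in for logged data)
t = np.linspace(0.0, 0.2, 2001)       # 200 ms sampled at 10 kHz
tau = 0.010                           # assumed 10 ms time constant
y = 5.0 * (1.0 - np.exp(-t / tau))    # response to a 5 Nm commanded step
rt = rise_time_90(t, y, 5.0)          # ~23 ms for this synthetic signal
```

With real logs, `t` and `y` would come from the on-board torque sensor; the same threshold-crossing logic applies.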
Serena Marchesi, Francesco Bossi, Davide Ghiglino, Davide De Tommaso,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.653537

Abstract:
The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently being asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency of adopting the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants’ pupil dilation during the completion of the InStance Test (IST). Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.
Murphy Wonsick, Philip Long, Aykut Özgün Önol, Maozhen Wang,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.550644

Abstract:
Nuclear energy will play a critical role in meeting clean energy targets worldwide. However, nuclear environments are dangerous for humans to operate in due to the presence of highly radioactive materials. Robots can help address this issue by allowing remote access to nuclear and other highly hazardous facilities under human supervision to perform inspection and maintenance tasks during normal operations, help with clean-up missions, and aid in decommissioning. This paper presents our research to help realize humanoid robots in supervisory roles in nuclear environments. Our work focuses on the National Aeronautics and Space Administration’s (NASA) humanoid robot, Valkyrie, in the areas of constrained manipulation and motion planning, increasing stability using support contact, dynamic non-prehensile manipulation, locomotion on deformable terrains, and human-in-the-loop control interfaces.
, Zhou Hao, Yang Gao
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.652681

Abstract:
The increased complexity of the tasks that on-orbit robots have to undertake has led to an increased need for manipulation dexterity. Space robots can become more dexterous by adopting grasping and manipulation methodologies and algorithms from terrestrial robots. In this paper, we present a novel methodology for evaluating the stability of a robotic grasp that captures a piece of space debris, a spent rocket stage. We calculate the Intrinsic Stiffness Matrix of a 2-fingered grasp on the surface of an Apogee Kick Motor nozzle and create a stability metric that is a function of the local contact curvature, material properties, applied force, and target mass. We evaluate the efficacy of the stability metric in a simulation and two real robot experiments. The subject of all experiments is a chasing robot that needs to capture a target AKM and pull it back towards the chaser body. In the V-REP simulator, we evaluate four grasping points on three AKM models, over three pulling profiles, using three physics engines. We also use a real robotic testbed with the capability of emulating an approaching robot and a weightless AKM target to evaluate our method over 11 grasps and three pulling profiles. Finally, we perform a sensitivity analysis to demonstrate how a variation on the grasping parameters affects grasp stability. The results of all experiments suggest that the grasp can be stable under slow pulling profiles, with successful pulling for all targets. The presented work offers an alternative way of capturing orbital targets and a novel example of how terrestrial robotic grasping methodologies could be extended to orbital activities.
, Cosmin Copot, Steve Vanlanduit
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.687031

Abstract:
Safety is an important issue in human–robot interaction (HRI) applications. Various research works have focused on different levels of safety in HRI. If a human/obstacle is detected, a repulsive action can be taken to avoid a collision. Common repulsive actions include distance methods, potential field methods, and safety field methods. Machine learning approaches to selecting the repulsive action are less explored. Few research works focus on the uncertainty of data-based approaches or consider the efficiency of the executing task during collision avoidance. In this study, we describe a system that can avoid collision with human hands while the robot is executing an image-based visual servoing (IBVS) task. We use Monte Carlo dropout (MC dropout) to transform a deep neural network (DNN) into a Bayesian DNN, and learn the repulsive position for hand avoidance. The Bayesian DNN allows IBVS to converge faster than with the opposite repulsive pose. Furthermore, it allows the robot to avoid undesired poses that the DNN alone cannot avoid. The experimental results show that the Bayesian DNN has adequate accuracy and can generalize well to unseen data. The predictive interval coverage probabilities (PICP) of the predictions along the x, y, and z directions are 0.84, 0.94, and 0.95, respectively. In the space which is unseen in the training data, the Bayesian DNN is also more robust than a plain DNN. We further implement the system on a UR10 robot and test the robustness of the Bayesian DNN and the IBVS convergence speed. Results show that the Bayesian DNN can avoid poses outside the reach range of the robot and lets the IBVS task converge faster than with the opposite repulsive pose.
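MC dropout, as used in the abstract above to obtain a Bayesian DNN, keeps dropout active at inference time and treats repeated stochastic forward passes as samples from an approximate posterior; the per-output standard deviation then serves as an uncertainty estimate. The toy NumPy network below (random, untrained weights) is only a sketch of the mechanism, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer regressor with random, untrained weights (illustrative)
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, drop_p=0.2):
    """One stochastic forward pass: dropout stays active at inference."""
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p    # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_p)          # inverted-dropout rescaling
    return h @ W2

def mc_dropout_predict(x, n_samples=100):
    """Predictive mean and uncertainty from repeated stochastic passes."""
    samples = np.stack([forward(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.5, -1.0, 0.3]])
mean, std = mc_dropout_predict(x)  # std > 0: passes disagree under dropout
```

In a real system the predictive standard deviation could gate the repulsive action, e.g. rejecting candidate poses whose uncertainty exceeds a threshold.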
, Nobuo Yamato, Masahiro Shiomi, Hiroshi Ishiguro
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.633378

Abstract:
We introduce a minimal design approach to manufacture an infant-like robot for interactive doll therapy that provides emotional interactions for older people with dementia. Our approach stimulates their imaginations and facilitates positive engagement with the robot by expressing only the most basic elements of humanlike features. Based on this approach, we developed HIRO, a baby-sized robot with an abstract body representation and no facial features. The recorded voice of a real human infant emitted by the robot enhances the robot’s human-likeness and facilitates positive interaction between older adults and the robot. Although we did not find any significant difference between HIRO and an infant-like robot with a smiling face, a field study showed that HIRO was accepted by older adults with dementia and facilitated positive interaction by stimulating their imagination. We also discuss the importance of a minimal design approach in elderly care in a post–COVID-19 world.
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.634297

Abstract:
Many analyses of the ethical, legal and societal impacts of robotics are focussed on Europe and the United States. In this article I discuss the impacts of robotics on developing nations in a connected world, and make the case that international equity demands that we extend the scope of our discussions around these impacts. Offshoring has been instrumental in the economic development of a series of nations. As technology advances and wage share increases, less labour is required to achieve the same task, and more job functions move to new areas with lower labour costs. This cascade results in a ladder of economic betterment that is footed in a succession of countries, and has improved standards of living and human flourishing. The recent international crisis precipitated by COVID-19 has underlined the vulnerability of many industries to disruptions in global supply chains. As a response to this, “onshoring” of functions which had been moved to other nations decreases risk, but would increase labour costs if it were not for automation. Robotics, by facilitating onshoring, risks pulling up the ladder, and suppressing the drivers for economic development. The roots of the economic disparities that motivate these international shifts lie in many cases in colonialism and its effects on colonised societies. As we discuss the colonial legacy, and being mindful of the justifications and rationale for distributive justice, we should consider how robotics impacts international development.
Zubair Iqbal, , Domenico Prattichizzo, Gionata Salvietti
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.644532

Abstract:
Collaborative robots promise to add flexibility to production cells thanks to the fact that they can work not only close to humans but also with humans. The possibility of direct physical interaction between humans and robots makes it possible to perform operations that were inconceivable with industrial robots. Collaborative soft grippers have recently been introduced to extend this possibility beyond the robot end-effector, enabling humans to directly act on robotic hands. In this work, we propose to exploit collaborative grippers in a novel paradigm in which these devices can be easily attached to and detached from the robot arm and used independently of it. This is possible only with self-powered hands, which are still quite uncommon in the market. In the presented paradigm, not only can hands be attached to and detached from the robot end-effector as if they were simple tools, but they can also remain active and fully functional after detachment. This retains all the advantages brought by tool changers, which allow for quick and possibly automatic tool exchange at the robot end-effector, while also giving the possibility of using the hand's capabilities and degrees of freedom without the need for an arm or external power supplies. In this paper, the concept of detachable robotic grippers is introduced and demonstrated through two illustrative tasks conducted with a new tool changer designed for collaborative grippers. The novel tool changer embeds electromagnets that are used to add safety during attach/detach operations. The activation of the electromagnets is controlled through a wearable interface capable of providing tactile feedback. The usability of the system is confirmed by evaluations with 12 users.
Jacopo Talamini, , Stefano Nichele
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.673156

Abstract:
The paradigm of voxel-based soft robots has made it possible to shift complexity from the control algorithm to the robot morphology itself. The bodies of voxel-based soft robots are extremely versatile and more adaptable than those of traditional robots, since they consist of many simple components that can be freely assembled. Nonetheless, it is still not clear which factors are responsible for the adaptability of the morphology, which we define as the ability to cope with tasks requiring different skills. In this work, we propose a task-agnostic approach for automatically designing adaptable soft robotic morphologies in simulation, based on the concept of criticality. Criticality is a property of dynamical systems close to a phase transition between the ordered and the chaotic regime. Our hypotheses are that 1) morphologies can be optimized for exhibiting critical dynamics and 2) robots with those morphologies are not worse, on a set of different tasks, than robots with handcrafted morphologies. We introduce a measure of criticality in the context of voxel-based soft robots based on avalanche analysis, a technique often used to assess criticality in biological and artificial neural networks. We let the robot morphologies evolve toward criticality by measuring how close their avalanche distribution is to a power-law distribution. We then validate the impact of this approach on actual adaptability by measuring the resulting robots' performance on three different tasks designed to require different skills. The validation results confirm that criticality is indeed a good indicator of the adaptability of a soft robotic morphology, and therefore a promising approach for guiding the design of more adaptive voxel-based soft robots.
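The closeness-to-power-law measure described above can be sketched as a Kolmogorov–Smirnov distance between the empirical avalanche-size distribution and a power law. The exponent, the xmin convention, and the synthetic avalanche sizes below are illustrative assumptions; the paper's avalanche definition for voxel-based robots is not reproduced here:

```python
import numpy as np

def powerlaw_ks_distance(sizes, alpha):
    """Simplified KS distance between the empirical CDF of avalanche sizes
    and a continuous power law with exponent alpha and xmin = min(sizes)."""
    sizes = np.sort(np.asarray(sizes, dtype=float))
    n = len(sizes)
    ecdf = np.arange(1, n + 1) / n                    # empirical CDF at sample points
    cdf = 1.0 - (sizes / sizes[0]) ** (1.0 - alpha)   # power-law CDF
    return float(np.max(np.abs(ecdf - cdf)))

# Synthetic avalanche sizes drawn from a power law via inverse-transform sampling
rng = np.random.default_rng(42)
alpha = 1.5
u = rng.random(5000)
sizes = (1.0 - u) ** (-1.0 / (alpha - 1.0))  # xmin = 1
d = powerlaw_ks_distance(sizes, alpha)       # small d => close to a power law
```

An evolutionary fitness for criticality could then be `-d`, rewarding morphologies whose avalanche statistics approach a power law.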
Alla Gubenko, Christiane Kirsch, Jan Nicola Smilek, Todd Lubart,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.662030

Abstract:
There is a growing literature concerning robotics and creativity. Although some authors claim that robotics in classrooms may be a promising new tool to address the creativity crisis in school, we often face a lack of theoretical development of the concept of creativity and the mechanisms involved. In this article, we first provide an overview of existing research using educational robotics to foster creativity. We show that in this line of work the exact mechanisms promoted by robotics activities are rarely discussed. We use a confluence model of creativity to account for the positive effect of designing and coding robots on students' creative output. We focus on the cognitive components of the process of constructing and programming robots within the context of existing models of creative cognition. We also address the question of the role of meta-reasoning and emergent strategies in the creative process. Then, in the second part of the article, we discuss how the notion of creativity applies to robots themselves in terms of the creative processes that can be embodied in these artificial agents. Ultimately, we argue that considering how robots and humans deal with novelty and solve open-ended tasks could help us better understand some aspects of the essence of creativity.
, Maurizio Balistreri, Marianna Capasso, Steven Umbrello, Federica Merenda
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.654298

Abstract:
Technological developments involving robotics and artificial intelligence devices are being employed evermore in elderly care and the healthcare sector more generally, raising ethical issues and practical questions warranting closer considerations of what we mean by “care” and, subsequently, how to design such software coherently with the chosen definition. This paper starts by critically examining the existing approaches to the ethical design of care robots provided by Aimee van Wynsberghe, who relies on the work on the ethics of care by Joan Tronto. In doing so, it suggests an alternative to their non-principled approach, an alternative suited to tackling some of the issues raised by Tronto and van Wynsberghe, while allowing for the inclusion of two orientative principles. Our proposal centres on the principles of autonomy and vulnerability, whose joint adoption we deem able to constitute an original revision of a bottom-up approach in care ethics. Conclusively, the ethical framework introduced here integrates more traditional approaches in care ethics in view of enhancing the debate regarding the ethical design of care robots under a new lens.
Nazerke Rakhymbayeva, Aida Amirova,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.669972

Abstract:
Social robots are increasingly being used as a mediator between a therapist and a child in autism therapy studies. In this context, most behavioural interventions are typically short-term in nature. This paper describes a long-term study that was conducted with 11 children diagnosed with either Autism Spectrum Disorder (ASD) or ASD in co-occurrence with Attention Deficit Hyperactivity Disorder (ADHD). It uses a quantitative analysis based on behavioural measures, including engagement, valence, and eye gaze duration. Each child interacted with a robot on several occasions in which each therapy session was customized to a child’s reaction to robot behaviours. This paper presents a set of robot behaviours that were implemented with the goal to offer a variety of activities to be suitable for diverse forms of autism. Therefore, each child experienced an individualized robot-assisted therapy that was tailored according to the therapist’s knowledge and judgement. The statistical analyses showed that the proposed therapy managed to sustain children’s engagement. In addition, sessions containing familiar activities kept children more engaged compared to those sessions containing unfamiliar activities. The results of the interviews with parents and therapists are discussed in terms of therapy recommendations. The paper concludes with some reflections on the current study as well as suggestions for future studies.
Milad Shafiee Ashtiani, Alborz Aghamaleki Sarvestani,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.645748

Abstract:
Animals locomote robustly and with agility, despite significant sensorimotor delays in their nervous system and the harsh loading conditions resulting from repeated, high-frequency impacts. The engineered sensorimotor control in legged robots is implemented with high control frequencies, often in the kilohertz range. Consequently, robot sensors and actuators can be polled within a few milliseconds. However, especially at harsh impacts with unknown touch-down timing, controllers of legged robots can become unstable, while animals are seemingly not affected. We examine this discrepancy and suggest and implement a hybrid system consisting of a parallel compliant leg joint with varying amounts of passive stiffness and a virtual leg length controller. We present systematic experiments both in computer simulation and on robot hardware. Our system shows previously unseen robustness in the presence of sensorimotor delays of up to 60 ms, or control frequencies as low as 20 Hz, for a drop-landing task from a height of 1.3 leg lengths and with a compliance ratio (fraction of physical stiffness in the sum of virtual and physical stiffness) of 0.7. In computer simulations, we report successful drop-landings from 3.8 leg lengths (1.2 m) for a 2 kg quadruped robot with 100 Hz control frequency and a sensorimotor delay of 35 ms.
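The compliance ratio quoted above is simply the passive physical stiffness expressed as a fraction of the total (virtual plus physical) stiffness. The stiffness values in this sketch are hypothetical, chosen only to reproduce the reported ratio of 0.7:

```python
def compliance_ratio(k_physical, k_virtual):
    """Fraction of total leg stiffness contributed by the passive physical spring."""
    return k_physical / (k_physical + k_virtual)

# Hypothetical stiffness split reproducing the reported ratio
ratio = compliance_ratio(k_physical=7.0, k_virtual=3.0)  # → 0.7
```

Raising this ratio shifts load from the delayed virtual controller onto the instantaneous passive spring, which is the mechanism the paper credits for robustness to delay.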
Mário Gabriel Santos De Campos, Caroline P. C. Chanel, Corentin Chauffaut,
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.557692

Abstract:
This study describes a blockchain-based multi-unmanned aerial vehicle (multi-UAV) surveillance framework that enables UAV coordination and financial exchange between system users. The objective of the system is to allow a set of Points-Of-Interest (POI) to be surveyed by a set of autonomous UAVs that cooperate to minimize the time between successive visits while exhibiting unpredictable behavior to prevent external agents from learning their movements. The system can be seen as a marketplace where the UAVs are the service providers and the POIs are the service seekers. This concept is based on a blockchain embedded on the UAVs and on some nodes on the ground, which has two main functionalities. The first one is to plan the route of each UAV through an efficient and computationally cheap game-theoretic decision algorithm implemented into a smart contract. The second one is to allow financial transactions between the system and its users, where the POIs subscribe to surveillance services by buying tokens. Conversely, the system pays the UAVs in tokens for the provided services. The first benchmarking experiments show that the IOTA blockchain is a potential blockchain candidate to be integrated in the UAV embedded system and that the chosen decentralized decision-making coordination strategy is efficient enough to fill the mission requirements while being computationally light.
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.672379

Abstract:
Genetic encodings and their particular properties are known to have a strong influence on the success of evolutionary systems. However, the literature has widely focused on studying the effects that encodings have on performance, i.e., fitness-oriented studies. Notably, this anchoring of the literature to performance is limiting, considering that performance provides bounded information about the behavior of a robot system. In this paper, we investigate how genetic encodings constrain the space of robot phenotypes and robot behavior. In summary, we demonstrate how two generative encodings of different nature lead to very different robots and discuss these differences. Our principal contributions are creating awareness about robot encoding biases, demonstrating how such biases affect evolved morphological, control, and behavioral traits, and finally scrutinizing the trade-offs among different biases.
, Ian Walker, Thomas Speck
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.711942

Abstract:
Editorial on the Research Topic Generation Growbots: Materials, Mechanisms, and Biomimetic Design for Growing Robots Plants are the dominant life form on the planet, accounting for over 80% of its biomass (Thompson, 2018). Plants are adapted to and thrive in virtually all environments, both natural and human-adapted, across the globe. In achieving this widespread presence, plants exhibit a significant range of structures and operational strategies. On the one hand, many key aspects of plant biology remain imperfectly understood, and the possibilities for plant-inspired engineering remain largely unexplored. On the other hand, increasing interest in plant-inspired research can be observed in architecture and technology in general over the last decades (cf. Speck and Speck 2019). More recently, plants have also started to serve as models in robotics (Mazzolai et al., 2010; Lastinger et al., 2019; Sadeghi et al., 2020; Wooten et al., 2018), especially for the design of systems that have to deal with unstructured environments and require advanced capabilities of soft interaction, adaptation, and self-morphing. With this view, the goal of this special issue is to illustrate the potential of identifying principles from plant growth and movement suitable for engineering, and the adaptation of those principles to the new emerging field of “growing” robots, or Growbots. The field of robotics has expanded rapidly over the past 25 years. Important advances in robotic design, planning, locomotion, and manipulation have been inspired and driven by insights gained from biology, notably in the structure and behavior of animals. However, to date very little attention has been paid by roboticists to the multitude of “existence proofs” provided by plants.
In this Research Topic, which is based on the contributions presented at the 2019 Robotics Science and Systems (RSS) workshop “Generation GrowBots” (June 22, 2019 in Freiburg, Germany), we present nine articles focused on the intersection of robotics and plant biology. The articles are authored by a highly interdisciplinary group of domain experts, bringing together natural scientists and engineers, including experts in material science, soft robotics, plant biology, and architecture, to present new scientific discoveries on plants and technological advances relevant to continuum, soft, adaptable, and growing robots. Collectively, the articles are representative of the current state of the art in the emerging area of plant-inspired robotics. Trends, frontiers and potential applications for a variety of high-tech sectors are discussed. Under the Research Topic “Generation GrowBots”, contributing authors discuss the science and technologies of the new field of plant-inspired robotics and growing robotics, exploring the materials, mechanisms and behavioral strategies as the basis of a new paradigm for robot mobility inspired by the moving-by-growing ability of plants. Plants show unique capabilities of endurance and movement by growth. Growth allows plants to strongly adapt their body morphology to different environmental conditions, and to move in search of nutrients and light or for protection from harmful agents. Because of these features, engineers, together with plant biologists and materials scientists, are deeply investigating the biomechanics, materials, energy-efficiency mechanisms, and behavior of a variety of plant species, to take inspiration for the design of multi-functional and adaptable technologies, and for the development of a new class of low-mass, low-volume robots endowed with new and unprecedented abilities of movement.
With their capability to better cope with unstructured and extreme environments, soft, self-morphing, growing machines will have potential applications in a variety of sectors, including the exploration and monitoring of archaeological sites and unknown or challenging terrestrial or extra-terrestrial areas, as well as novel technological systems for the advancement of future urban architectures. The topics of the nine articles in the present issue on “Generation GrowBots” vary in focus, but all address the overall theme of plant-based movement and its potential adaptation to robots. Two articles (Gallentine et al.; Geer et al.) introduce new robotic structures based on curling structures in fruit awns and climbing plants. The two examples cover a huge size range. The biomimetic robotic manipulator presented by Geer et al. is inspired by the ultrastructure of the cell wall of awns, whose helical cellulose fiber arrangement allows for humidity-driven awn movement. The concepts for transfer to motile structures in robots presented by Gallentine et al. are based on the macroscopic structure and movement of liana stems and tendrils and the finding that many climbing plants use curling and/or twining of their stems or tendrils for stiffening (braided stems) or securing attachment (tendrils). They show that these systems represent interesting models for new types of climbing-plant-inspired soft robots. The nature of movement in plants, and the consequent implications for plant-inspired robots, are considered by Frazier et al., and models of plant growth aimed at implementation in robots are presented by Porat et al. These two contributions show that for a successful transfer of motion principles and movements in plants to soft robots and other types of soft machines, a thorough analysis of these movements in plants using a combination of experimental and modeling approaches is a prerequisite.
Without a basic and quantitative understanding of the form-structure-function relation of the plant organs used as concept generators for moving GrowBots, the potential of the plant-inspired approach cannot be fully exploited. Realizations of vine-inspired growing robots are described by Blumenschein et al., with a review of recent work on robots that...
, Gregory Chance, Praminda Caleb-Solly, Sanja Dogramadzi
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.667316

Abstract:
Hazard analysis methods such as HAZOP and STPA have proven to be effective methods for the assurance of system safety for years. However, the dimensionality and human-factors uncertainty of many assistive robotic applications challenge the capability of these methods to provide comprehensive coverage of safety issues from interdisciplinary perspectives in a timely and cost-effective manner. Physically assistive tasks in which a range of dynamic contexts require continuous human–robot physical interaction, such as robot-assisted dressing or sit-to-stand assistance, pose a new paradigm for safe design and safety analysis methodology. For these types of tasks, considerations have to be made for a range of dynamic contexts where robot assistance requires close and continuous physical contact with users. Current regulations mainly cover industrial collaborative robotics regarding physical human–robot interaction (pHRI) but largely neglect direct and continuous physical human contact. In this paper, we explore the limitations of commonly used safety analysis techniques when applied to robot-assisted dressing scenarios. We provide a detailed analysis of the system requirements from the user perspective and consider user-bounded hazards that can compromise the safety of this complex pHRI.
, Michael Panzirsch, Harsimran Singh, Andre Coelho, Ribin Balachandran, Aaron Pereira, Bernhard M. Weber, Nicolai Bechtel, Cornelia Riecke, Bernhard Brunner, et al.
Frontiers in Robotics and AI, Volume 8; doi:10.3389/frobt.2021.611251

Abstract:
Certain telerobotic applications, including telerobotics in space, pose particularly demanding challenges to both technology and humans. Traditional bilateral telemanipulation approaches often cannot be used in such applications due to technical and physical limitations such as long and varying delays, packet loss, and limited bandwidth, as well as high reliability, precision, and task duration requirements. In order to close this gap, we research model-augmented haptic telemanipulation (MATM) that uses two kinds of models: a remote model that enables shared autonomous functionality of the teleoperated robot, and a local model that aims to generate assistive augmented haptic feedback for the human operator. Several technological methods that form the backbone of the MATM approach have already been successfully demonstrated in accomplished telerobotic space missions. On this basis, we have applied our approach in more recent research to applications in the fields of orbital robotics, telesurgery, caregiving, and telenavigation. In the course of this work, we have advanced specific aspects of the approach that were of particular importance for each respective application, especially shared autonomy, and haptic augmentation. This overview paper discusses the MATM approach in detail, presents the latest research results of the various technologies encompassed within this approach, provides a retrospective of DLR's telerobotic space missions, demonstrates the broad application potential of MATM based on the aforementioned use cases, and outlines lessons learned and open challenges.