The Challenges and Opportunities of Artificial Intelligence for Trustworthy Robots and Autonomous Systems

Trust is essential in the design of autonomous and semi-autonomous Robots and Autonomous Systems (RAS): no trust, no use. RAS should provide high-quality services, with four key properties that make them trustworthy: they must be (i) robust with regard to system-health issues, (ii) safe in their surrounding environments, (iii) secure against threats from cyberspace, and (iv) trusted in human-machine interaction. This article thoroughly analyses the challenges of implementing trustworthy RAS with respect to these four properties, and examines the power of AI in improving the trustworthiness of RAS. While we focus on the benefits that AI brings to humans, we should also recognize the potential risks it could cause. This article introduces for the first time a set of key aspects of human-centred AI for RAS, which can serve as a cornerstone for implementing trustworthy RAS by design in the future.


I. INTRODUCTION
Artificial Intelligence (AI) and Machine Learning (ML) are increasingly used in robots and autonomous systems (RAS) that attempt to mimic the adaptive and smart problem-solving capabilities of humans. Such systems promise a smarter and safer world where, for example, self-driving vehicles can reduce the number of road accidents, medical robots perform intricate surgeries, and "digital" pilots participate in crew flight operations [1].
In addition, IoT technology could inspire wider applications of RAS. However, many RAS are currently released into the world without full prior analysis of potential inappropriate operations, and thus may do things that were not foreseen by their human designers or owners. If not managed and understood properly, there is a risk that system autonomy could devalue human work or give rise to hostile attitudes towards advanced technology. Thus, developing trust in RAS right from the start is paramount for the progression towards fully autonomous solutions.
During the initial deployment of RAS, humans tended to accept untrusted products and services, but have gradually come to realise that autonomous systems must be trustworthy. Many lessons have shown that trust directly influences operators' use of automation. For example, an autonomous vehicle killed a pedestrian in Arizona, US in 2018 [2]; successful cyber-attacks have been executed to demonstrate how autonomous vehicles could potentially be hijacked [3]; and the failure of an intelligent flight-support system on Boeing planes was responsible for two crashes killing 189 and 157 people, respectively [4]. Such unforeseen events can significantly damage the acceptability of autonomous systems, and they have raised a range of increasingly urgent and complex moral questions and posed many ethical, societal and legal challenges [5].

II. KEY FACTORS THAT AFFECT RAS' TRUSTWORTHINESS
Substantial strategic efforts on the trustworthiness of RAS have been made internationally. For example, the US National Institute of Standards and Technology (NIST) provides a trustworthiness framework of cyber physical systems, which covers cyber security, privacy, safety, reliability, and resilience [6]. The following aspects of trustworthiness of RAS need to be investigated thoroughly to promote the implementation of fully trustworthy RAS.
Functionality/performance needs to be well supported by the system's autonomy for sensing, data collection and processing, decision making, communication, human-machine interaction, and action control and monitoring. The logic, performance and quality of these functions are the essential requirements of autonomous systems.
Security is a critical challenge, as today more and more aspects of work and life are becoming virtual. Cyber-attacks could directly threaten the safety of an autonomous system, as demonstrated by the attack experiments on a Jeep SUV in 2015 [3]. Hence, RAS should be able to detect, defend against and prevent any anomalies from cyberspace. Privacy of RAS is a branch of security that focuses on data protection, especially the protection of personal information, and regulatory compliance (e.g. GDPR). Security by Design and Privacy by Design are requirements of Industry 4.0.
Safety is a persistent requirement for all kinds of autonomous systems, though different application domains may have different safety requirements. For some systems (e.g. aircraft, vehicles, infrastructure), safety is a critical requirement. Safety is directly related to the reliability of RAS, which is an important factor when a human selects a RAS. Safety should be co-designed with security, internal health, and external interaction with humans and the environment.
Health is challenged not only by threats from cyberspace and external environments but also by potential process abnormalities and component faults, which affect the reliability and safety of RAS. Faults in RAS can be classified as actuator faults, sensor faults, and plant faults (also called component or parameter faults). Hence, it is paramount to detect and identify the diverse potential abnormalities and faults as early as possible and to implement fault-tolerant operations that minimize performance degradation and avoid dangerous situations [7].
Human-Machine Interaction refers to the communications and interactions between human and machine via a user interface. In the NIST concept model of Cyber Physical Systems (CPS) (Fig. 1), the input of humans can be fed into the decision loop. No matter what level of autonomy a system is at, humans should be able to interrupt it, and human interaction should be built upon tangible and attentive user-interface principles. Through intuitive and efficient visualization of sensed quantities, estimated statistics, and automatically identified trends over time, both users and the system can be better informed, increasing their self-efficacy and decision making. The last two of Norman's seven principles for HCI design [8], "Design for Error" and "When all else fails, standardize", should be applied to ensure the system keeps to the bottom line of safety and reliability.

III. THE CHALLENGES IN IMPLEMENTING TRUSTWORTHY RAS
A central function of an autonomous system is to use information representing the state of the physical and cyber worlds, make decisions, and take actions, thus carrying out its required tasks with optimal performance and quality. The diversity, uncertainty and complexity of tasks, as well as of their cyber and physical environments, pose a major challenge to the optimisation of RAS performance. The most challenging goal is to align the performance and quality of RAS, for specific tasks in the context of a specific application domain, with the four properties that make a system trustworthy.

A. Challenges of RAS Security
IoT is where the Internet meets the physical world. This has clear implications for security, as the attack threat moves from manipulating information to controlling actuation (i.e. moving from the digital to the physical world) [9][10]. Consequently, the attack surface expands drastically from known threats and known devices in the upper layers of the IoT system stack to additional security threats against RAS platforms, communication protocols, and RAS workflows. Katzenbeisser et al. [11] broadly divided the security of autonomous systems into the security of the platforms that constitute them, including hardware and software security, and the security of communication between these platforms. However, they did not consider threats to the development platforms of RAS, such as security issues in RAS supply chains [9].
Securing communication links is challenging. RAS use diverse communication channels, such as Wi-Fi, GPS, radio and Bluetooth. Various attacks could intrude into a RAS through these channels, for example general Trojan-horse attacks on quantum-key-distribution systems [12], peer-to-peer attacks on the same access point, MAC spoofing, wireless hijacking, denial of service (DoS), malicious eavesdropping [13], and Key Negotiation of Bluetooth (KNOB) attacks [14].
Securing RAS software integrity is a critical challenge. Various attacks can break the integrity of RAS software, producing consequences such as code modification, malfunctions, loss of control, loss of customers' personal information, broken communication, or excessive network traffic. Malfunctions and loss of control are especially critical, as they could have severe consequences and directly compromise the safety of the RAS and of the customers using it [15].
Securing hardware is also a critical challenge. In an autonomous vehicle, the many Electronic Control Units (ECUs) can each become an attack point. As an ECU has limited computing resources, it is difficult to run a comprehensive and effective security solution on it with real-time performance. Side-channel attacks (SCAs) are typical attacks on embedded systems: they collect leakage data during processing and extract sensitive information through statistical computation. Moreover, an attacker could intrude into the sensing system of a RAS, feeding adversarial readings into the data collection system and causing wrong decisions.
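As an illustration of the timing side channel mentioned above, the following Python sketch contrasts a naive byte-by-byte secret comparison, whose early exit leaks information about the position of the first wrong byte through execution time, with the standard library's constant-time alternative. The function names are ours; this is a sketch of the attack surface, not any specific RAS implementation.

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    # Returns as soon as a byte differs: the early exit leaks, via
    # timing, how many leading bytes of an attacker's guess are correct.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # first mismatch occurs, removing the timing side channel.
    return hmac.compare_digest(a, b)
```

On a resource-limited ECU, such constant-time primitives are among the few countermeasures cheap enough to deploy everywhere.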

B. Challenges to RAS Safety
As shown in the NIST concept model of CPS (Fig. 1), almost all kinds of RAS are CPS, even if a fully autonomous system may have little human intervention and may not be connected to the Internet. The safety of RAS covers two aspects: the safety threat to the RAS from its surrounding environment, and the safety threat to the surrounding environment from the RAS. Both aspects are closely linked with the sensors and actuators of the RAS. Information from the sensing system is critical, as the physical state of the surrounding environment is fed into the decision system. To ensure safe operation, RAS should be able to overcome the following challenges:
(1) A huge variety of working environments, which require RAS to correctly sense the surroundings, deal with any anomalies and make real-time decisions aligned with the goal of the RAS. For example, autonomous vehicles could face various anomalous situations, such as pedestrians suddenly crossing the road, an accident, or road direction changes.
(2) Synchronization of RAS tasks with those of a human/robot collaborative team, avoiding any situation that could lead the robot to harm members (human or robot) of the team.
(3) The diversity and uncertainty of potential failures, and the need to detect such failures efficiently, provide early warning and respond fast. RAS should allow human intervention at any point.

C. Challenges to RAS Health
A fault occurs when there is a difference between the realizable function and the required function. Kawabata et al. [16] categorized three types of faults: fatigue (deterioration), noise (sudden fault) and initial failure. Predictive maintenance may be more important in the RAS domain than in other domains, as an autonomous system operating without human intervention could be more severely damaged by a fault in the system. Diagnostics concerns the current state of a subsystem, whereas prognostics concerns its future state [17]. Hence, for a fully autonomous system, prognostics may matter more, and obtaining highly accurate prognostics is challenging, as it depends on the usage of the system, the experience of operators and the working environment. A robot or an autonomous system can be very complex, and online diagnostics and prognostics must meet real-time performance requirements, which is a critical challenge in RAS. Generally, a self-diagnosis system for an autonomous system consists of three processes: internal condition sensing, diagnosis, and coping with the faulty condition to increase fault tolerance, plus fast responses to sudden faults [16]. Faults in sensors may lead to incorrect decisions; faults in actuators may cause wrong behaviour; and faults in electronic components could cause malfunction and disorder of the system.
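The three-process self-diagnosis structure (sense, diagnose, cope) can be sketched as a residual check in Python. The function names, expected value and tolerance are illustrative assumptions of ours, not taken from [16]:

```python
def sense(readings):
    """Internal condition sensing: summarize the latest sensor values."""
    return sum(readings) / len(readings)

def diagnose(value, expected, tolerance):
    """Diagnosis: flag a fault when the residual exceeds the tolerance."""
    residual = abs(value - expected)
    return residual > tolerance

def cope(faulty):
    """Coping: switch to a degraded but safe mode when a fault is found."""
    return "degraded_safe_mode" if faulty else "normal_operation"

def self_diagnosis_step(readings, expected=1.0, tolerance=0.2):
    # One pass through the sense -> diagnose -> cope pipeline.
    value = sense(readings)
    faulty = diagnose(value, expected, tolerance)
    return cope(faulty)
```

A real system would run this loop continuously and add the fast-response path for sudden faults that [16] emphasizes.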

D. Challenges of Human-RAS Interaction
Although we expect to implement fully autonomous systems, we still need to design them to allow humans to interact with them in an emergency. Veloso [18] observed that it can prove difficult to interrupt a system without appropriate pre-designed interruption points. Human-robot interaction that allows a human to interrupt a robot is complex, as many situational features and constraints must be considered, including task priorities, operations, interruption frequency, and timing. In many applications, humans and robots work in a collaborative team. In critical domains such as defence, healthcare and industry, RAS could replace humans in dangerous, difficult or complex cases. However, as our physical environments are dynamic, non-deterministic and partially unknown, implementing trusted human-robot interaction in such a complex physical world is challenging.
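A minimal sketch of pre-designed interruption points, assuming a hypothetical `InterruptibleTask` wrapper that checks a human-raised flag between task steps; real systems would also need to interrupt safely within a step:

```python
from typing import Callable, List

class InterruptibleTask:
    """Runs a sequence of steps, checking for a human interrupt
    at pre-designed safe points between steps."""

    def __init__(self, steps: List[Callable[[], str]]):
        self.steps = steps
        self.interrupted = False

    def request_interrupt(self) -> None:
        # Called from the human-facing interface at any time.
        self.interrupted = True

    def run(self) -> List[str]:
        log = []
        for step in self.steps:
            if self.interrupted:  # pre-designed interruption point
                log.append("halted_safely")
                break
            log.append(step())
        return log
```

The design choice here is that interruption points are explicit in the task structure, echoing the observation in [18] that interruptibility must be designed in, not bolted on.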
Communication with humans requires socially acceptable responses and common-sense knowledge to handle a broad variety of situations with complex semantics to interpret and understand. The diversity, complexity and uncertainty of human states pose a major challenge, with a variety of human emotions expressed differently by different subjects. Most importantly, the critical challenge lies in the inability to exhaustively test such a complex human-machine interaction system. Learning and adaptation of an autonomous system to unforeseen circumstances in a dynamic, changing world is key.

IV. OPPORTUNITIES OF AI TECHNOLOGY
The power of AI in the context of RAS has been demonstrated in various applications for different purposes. For example, Zhao et al. [19] proposed a probabilistic model to verify the safety and reliability of unmanned underwater vehicles in extreme environments; Zhou and Yang [20] investigated different types of normalization in training Deep Convolutional Neural Networks for 2D biomedical semantic segmentation; and Lee et al. [21] proposed an industrial AI ecosystem. It was also reported that AI is being used for trajectory and payload optimization, important preliminary steps for NASA's Mars 2020 rover mission [22].

A. AI for the Security of RAS
Automation could be the only way to level the playing field, reduce the volume of threats, and enable faster prevention, and AI is the key driver for automating RAS cyber security. Intelligent cyber-security solutions must be resilient in the face of determined and sophisticated attackers, who may target any kind of RAS, which are usually connected to the Internet. A mechanism is needed to integrate security components seamlessly into the architecture of RAS, enabling security that is adaptive, self-learning and autonomous, and thus implementing the "Security by Design" demanded by Industry 4.0 [23].
For access control, AI techniques have been applied to many authentication problems, such as biometric identification (e.g. palm, iris, fingerprint and face), signature verification and keystroke pattern recognition. For example, Fang et al. [24] developed new AI-enabled security provisioning approaches to achieve fast authentication and progressive authorization. Attack or intrusion detection is important for the cyber security of RAS. There has been much research on machine learning for intrusion detection [25], anomaly detection [26], crawler detection [27], malware analysis [28], and human behaviour monitoring [29]. Improving detection accuracy, reducing the false alarm rate and detecting unknown attacks are ongoing goals for machine learning-based intrusion detection systems. Due to the constant development of attack techniques, adaptive Intrusion Detection Systems (IDS) are required. An IDS could passively operate at network level to prevent impact on RAS. For IoRT (Internet of Robotic Things) enabled systems, intrusion detection capacity in edge devices is needed; online intrusion detection for RAS is therefore a critical challenge. Verma and Ranga [30] investigated ML classification algorithms for securing IoT against DoS attacks. The best solution for a fast response is to empower incident response via automation [31], which could be implemented using a socio-technical model based on effective and efficient threat intelligence, alert enrichment and a priority order of actions on RAS.
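As a toy stand-in for the ML-based anomaly detectors cited above, the following sketch learns a baseline from normal traffic features and flags strong deviations. The class name, the single scalar feature and the 3-sigma threshold are illustrative assumptions; real IDS pipelines use many features and far richer models:

```python
import statistics

class AnomalyIDS:
    """Flags traffic whose feature value deviates strongly from a
    baseline learned on normal traffic only."""

    def __init__(self, threshold_sigma: float = 3.0):
        self.threshold_sigma = threshold_sigma
        self.mean = 0.0
        self.stdev = 1.0

    def fit(self, normal_samples):
        # Learn the baseline from attack-free observations.
        self.mean = statistics.fmean(normal_samples)
        self.stdev = statistics.stdev(normal_samples) or 1.0

    def is_intrusion(self, sample: float) -> bool:
        # Flag samples whose z-score exceeds the threshold.
        z = abs(sample - self.mean) / self.stdev
        return z > self.threshold_sigma
```

The threshold directly trades detection rate against false alarm rate, the tension the text identifies as an ongoing goal.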
However, as discussed in [32], ML models may overfit to adversarial training examples, leading to wrong results at test time. Therefore, one of the challenges of deploying AI-based techniques in security domains is to solve this overfitting problem. In real-world cyber-security applications, it is difficult to check whether the collected data is wrong. In other words, ML techniques do represent a potential solution for the automation of cyber security, but they require correct (or trustworthy) features or data. In addition, an ML model, being itself a program in the system, could be hacked to produce unexpected consequences; the robustness and security of ML models need to be investigated in the system design. ML models can be evaluated with respect to different performance indicators: effectiveness (e.g. accuracy, F-measure, ROC curves) and efficiency (e.g. real-time performance) are both highly required, even though computational resources are restricted for the on-board countermeasures that secure RAS.
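The effectiveness indicators named here can be computed directly from confusion-matrix counts. A minimal helper (the example counts in the usage are hypothetical, and a full evaluation would sweep thresholds to build a ROC curve):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Effectiveness indicators for a detector, from confusion counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # detection rate
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    false_alarm_rate = fp / (fp + tn) if fp + tn else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1,
            "false_alarm_rate": false_alarm_rate}
```

For instance, `classification_metrics(tp=80, fp=10, fn=20, tn=90)` yields 0.85 accuracy with a 0.1 false alarm rate, making the accuracy/false-alarm trade-off explicit.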

B. AI for the Safety of RAS
System monitoring plays an important role in the safety of RAS [1]. Many robotic autonomous systems, such as UAVs, are safety-critical systems equipped with a Safety Instrumented System (SIS) whose specific control functions fail safe or maintain safe operation of a process when unacceptable or dangerous conditions occur. SIS for different types of autonomous systems can be implemented in different ways. One of the key tasks of an SIS is the detection of anomalies through sensing of the surrounding environment. Advanced sensor technology greatly improves RAS perception; the most frequently used sensors include laser sensors (LIDAR) [33], visual sensors [34], radar, GPS, infrared sensors [35] and ultrasonic sensors [36].
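A minimal sketch of the SIS fail-safe behaviour for one sensor channel: trip a latched fail-safe state when a reading leaves the allowed operating band. The `SafetyMonitor` name and fixed band are illustrative assumptions; real SIS logic involves voting, redundancy and certified hardware:

```python
class SafetyMonitor:
    """Latch a fail-safe state when a sensor reading leaves the
    allowed operating band, until an explicit reset."""

    def __init__(self, low: float, high: float):
        self.low, self.high = low, high
        self.fail_safe = False

    def update(self, reading: float) -> bool:
        if not (self.low <= reading <= self.high):
            self.fail_safe = True  # dangerous condition detected
        return self.fail_safe

    def reset(self) -> None:
        # Only after the unsafe condition is cleared and verified.
        self.fail_safe = False
```

Latching rather than auto-clearing reflects the fail-safe principle: the system stays in the safe state until the condition is positively resolved.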
Amongst all autonomous systems, aviation requires some of the highest safety standards. An aircraft has many monitoring subsystems, such as instrument monitoring, system monitoring and environment monitoring. Navigation is an important capability of RAS, which infers the running status and adjusts the flight settings appropriately based on information from all monitoring systems.
There has been much research on AI techniques, especially machine-learning techniques, for environment monitoring and RAS navigation. For example, neural networks have been developed for robot path planning [37]; a linguistic decision tree was developed for classic robot routing [38]; a support vector machine based on space-time feature vectors was developed to recognize dynamic obstacles [39]; and a Deep Convolutional Neural Network (DCNN) was developed for robot navigation [40]. To improve the recognition rate of speed signs for autonomous vehicles in dynamic environments, a Spatial Pyramid Pooling based DCNN was developed, operating on salient image regions extracted with a background-absorbing Markov chain method for salient target detection [41]. A knowledge-based fuzzy control system was developed for target-search behaviour and path planning of mobile robots [42]. However, online training of machine-learning models to adapt to a dynamic environment remains an open problem.
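The path-planning task these learned methods address can be grounded with a classical baseline. The breadth-first grid planner below is a sketch for intuition, not a reimplementation of any cited method; the grid encoding is our assumption:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first grid planner: a classical baseline for learned
    planners. grid[r][c] == 1 marks an obstacle; returns a shortest
    list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nxt = (nr, nc)
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # goal unreachable
```

Learned planners aim to outperform such baselines when the environment is dynamic or only partially observed, precisely the regime the text identifies as unsolved.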

C. AI for the Health of RAS
To improve the reliability of a system, fault diagnosis is usually employed to monitor, locate and identify faults. Analytical redundancy techniques have been the mainstream of fault diagnosis research since the 1980s [43]. Fault diagnosis usually includes three tasks: fault detection, fault isolation, and fault identification. Fault detection checks whether a malfunction has occurred in the system and determines the time of its occurrence; fault isolation determines the location of the faulty component; and fault identification determines the type, shape and size of the fault. From a technical point of view, fault diagnosis methods can be categorized into four types: model-based, signal-based, knowledge-based, and hybrid. The idea of model-based diagnosis is to create a model that maps the relationship between inputs and outputs, which can be represented by a machine-learning model.
For example, Hashimoto et al. [44] developed an approach to detecting and diagnosing hard/noise failures based on a variable-structure interacting multiple-model estimator. Signal-based methods utilize measured signals rather than explicit input-output models for fault diagnosis, and can be further divided into three types: time-domain, frequency-domain and time-frequency-domain fault diagnosis [7]. Faults in the process are reflected in the measured signals; features are extracted, and a diagnostic decision is then made based on symptom analysis and prior knowledge of the symptoms of healthy systems. These extracted features and the prior knowledge can be fed into an AI model to implement automatic diagnosis and fault learning.
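A frequency-domain signal-based diagnosis step can be sketched as follows: extract the dominant vibration frequency via a direct DFT and compare it with a healthy baseline. The function names and the drift-based symptom rule are illustrative assumptions of ours:

```python
import cmath
import math

def dominant_frequency(signal, sample_rate):
    """Frequency-domain feature extraction: return the frequency
    (in Hz) with the largest DFT magnitude, ignoring the DC term."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * sample_rate / n

def fault_suspected(signal, sample_rate, healthy_hz, tolerance_hz=2.0):
    """Symptom analysis sketch: flag a fault when the dominant
    vibration frequency drifts away from the healthy baseline."""
    return abs(dominant_frequency(signal, sample_rate) - healthy_hz) > tolerance_hz
```

Production code would use an FFT rather than this O(n²) DFT, but the feature-then-decision structure matches the signal-based pipeline described above.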
Recently, ML techniques have been applied to data-driven fault detection (as a decision maker) and diagnosis (as a classifier). The data can be directly measured signals, features extracted from signals, or raw sensor data. For example, artificial neural networks have been developed for predicting faults from the vibration of robot joints [45], and for estimating the fault torque of adaptive actuators in robot manipulators [46]. A hybrid approach combining knowledge-based models and machine-learning models might improve the precision of fault diagnosis for robot systems. As with AI for data-driven IDS or anomaly detection, the three challenges of diagnosis accuracy, real-time performance and data availability also apply to data-driven fault diagnosis. Fault allocation is a further challenge for a complex system comprised of multiple components; AI optimisation techniques could be applicable to this problem.
A swarm robotic system, as a large-scale distributed system, works without human intervention, employing autonomous self-diagnosis, self-healing and self-reproduction where necessary. Dai et al. [47] presented a self-healing and self-reproduction mechanism based on virtual neurons with consequence-oriented prescription.

D. AI for Trusted HMI
Human-machine interaction (HMI) is a challenge for human-centred Artificial Intelligence (HAI) [48]. This field lies at the crossroads of several domains of AI and needs to be tackled holistically, including modelling humans and human cognition; acquiring, representing and manipulating abstract knowledge at the human level; reasoning on this knowledge to make decisions; and eventually instantiating those decisions into physical actions that are both legible to and coordinated with humans.
In modelling human cognition, Natural Language Processing (NLP) is an important technique for improving the cognition of human-machine interaction. For decades, machine-learning approaches to NLP problems were based on shallow models (e.g. Support Vector Machines (SVM) and logistic regression) trained on very high-dimensional, sparse features. In the last few years, neural networks based on dense vector representations have been producing superior results on various NLP tasks [49].
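To make the shallow, sparse-feature pipeline concrete, here is a toy bag-of-words command classifier; the keyword-scoring rule is a stand-in for the trained SVM/logistic-regression weights described above, and all names and vocabularies are illustrative:

```python
def bag_of_words(text, vocabulary):
    """Sparse bag-of-words features of the kind used by shallow models."""
    tokens = text.lower().split()
    return [tokens.count(word) for word in vocabulary]

def classify(text, vocabulary, class_keywords):
    """Score each class by the counts of its keywords in the text;
    a toy stand-in for a trained shallow classifier."""
    features = bag_of_words(text, vocabulary)
    scores = {}
    for label, keywords in class_keywords.items():
        scores[label] = sum(count for word, count in zip(vocabulary, features)
                            if word in keywords)
    return max(scores, key=scores.get)
```

Dense-representation neural models replace these hand-chosen sparse features with learned embeddings, which is the shift [49] surveys.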
Robot Vision enables more natural interaction with humans. By adding visual understanding capabilities to a robot, it can perceive human action and can naturally interact with humans through these non-verbal behaviors such as body gestures, facial expressions and body poses. This requires robots to understand non-verbal behaviors. For example, a smart robot could be used to assist physicians in performing surgery, using near IR and 3D cameras [50].
Knowledge extraction and sharing between humans and machines is a dynamic process of human-machine interaction: the input of a decision problem and the output of its solution are converted into knowledge that can be extracted by a machine. Collobert et al. [51] demonstrated that a simple deep-learning framework outperforms most state-of-the-art approaches on several NLP tasks, such as named-entity recognition, semantic role labelling and part-of-speech tagging.
Knowledge representation models the abstractions and has the advantage of supporting transformation to the user-interface environment [52]. Adaptive knowledge representation is important for the automation of HCI engineering processes [53]. Devlin et al. [54] proposed a language model, Bidirectional Encoder Representations from Transformers, to pre-train deep bidirectional representations from unlabelled text.

V. CONCLUSIONS
In this article we have identified the key factors that significantly affect the trustworthiness of RAS: cyber security, safety, health, and the interaction of RAS with humans. The performance/functionality of RAS represents the worthiness of RAS, subject to these four properties. We analysed the challenges these properties pose for trustworthiness and reviewed the power of AI techniques in implementing RAS trustworthiness. Many ethical, societal and legal challenges remain. AI has played a significant role in the development of trustworthy RAS, but it brings both potential benefits and risks: machine learning-based AI systems trained with incomplete or distorted data can lead to biased "thinking", which may in turn magnify prejudice and inequality, spread rumours and fake news, and even cause physical harm. Hence the new concept of human-centred AI (HAI) has been raised. It emphasizes that the next frontier of AI is not just technological but also humanistic and ethical, with three objectives: (1) to technically reflect the depth characterized by human intelligence; (2) to improve human capabilities rather than replace them; and (3) to focus on AI's impact on humans.