Effective Human-Robot Collaboration Through Wearable Sensors

With the development of collaborative robots in manufacturing, physical interactions between humans and robots play a vital role in performing tasks collaboratively. Most studies have focused on robot motion planning and control during the execution of a task. However, for effective task distribution and allocation, the human's physical and psychological status is essential. In this research, a hardware setup and support software for a set of wearable sensors, together with a data acquisition framework, are developed. This can be used to develop more efficient human-robot collaboration strategies. The developed framework is intended to recognise the human mental state and physical activities, so that a robot can effectively and naturally perform a given task with the human. Besides, the data collected through the developed hardware enables online classification of human intentions and activities; therefore, robots can actively adapt to ensure the safety of the human while delivering the required task.


I. INTRODUCTION
As robots emerge from behind the fences of confined workspaces on industrial shop floors and begin to share work areas with people as collaborative partners, one of the most critical aspects of human-robot collaboration (HRC) is communication. The communication mechanisms might include universal mechanisms such as speech, hand gestures and facial expressions, which can apply to any technology. Nevertheless, a robot physically co-present in the vicinity of a human operator must have richer communication cues about the human's attention, intention, and physical and psychological status to achieve efficient HRC tasks. As with many other technologies, it is believed that HRC will change the way people work and live, yet robots must effectively adapt to humans' needs and behaviours.
Human-robot communication plays an essential role in HRC scenarios [1]. Human-robot communication can be defined as the physiological (e.g. mental workload) and physical (e.g. haptic input) measurements of the human operator during collaboration with the robot. Several researchers have introduced HRC communication approaches, motivated by the fact that such communication can significantly enhance the performance of HRC [2]. The capability of the robot to react based on human interaction is an essential hallmark of successful HRC tasks, considering the several relevant scenarios in manufacturing applications such as co-manipulation and co-assembly. Furthermore, an effective HRC with wearables is believed to be safer [2]. For example, in HRC a robot can intervene and assist with heavy payloads if the human operator is struggling, which could otherwise potentially lead to human injury.
A critical application of HRC is when humans and robots perform joint tasks in a shared workspace [3]. In order to communicate the intentions of the human to the robot, sensors such as the Electromyogram (EMG) [7] and the Inertial Measurement Unit (IMU) [14] are utilised. The EMG signals are typically acquired from the human upper limbs, since these are most involved in the given tasks [7]. The acquired data can be utilised to communicate movement intentions. They can also provide insights into human muscle fatigue [8]. In a co-manipulation HRC task, a robot could assist a human operator during a heavy pull or push of an object, or adapt its behaviour to create more ergonomic working conditions for its human co-worker [8], [10]. This is intended to prevent not only injuries, but also long-term health issues related to physical fatigue [10].
The main components of the existing HRC framework are illustrated in Figure 1. This diagram demonstrates the essential blocks of HRC, namely the human operator, the robot and the surrounding environment (shared workspace). Generally, robots are equipped with sensors and controllers that can identify the robot's states. These sensors include optical encoders to measure joint positions, and through controllers (a forward kinematics module), the robot's Cartesian position can be estimated. Also, robots are equipped with sensors that measure physical values related to the kinematics and dynamics of the robot at the joints and/or at the end-effector. Moreover, there are additional sensors to measure the motors' temperature, current and voltage. These physical values and their interpretation are essential to control the robot within the workspace efficiently. Nevertheless, they are not adequate, especially in an unstructured environment and in HRC scenarios. Hence, external observers, such as motion trackers and vision systems, are required to identify objects and their positions and orientations within the workspace. The Robot Operating System (ROS) provides sufficient support (functionality) for robots within the surrounding environment through effective communication of object positions, orientations and many other physical entities. This allows robots to adjust their behaviour based on sensory data from the robot's sensors and external observers (environment). As an example, ROS supports robot trajectory planning, collision avoidance and many other essential robotics functionalities. However, for the third elementary element in HRC scenarios, the human (Figure 1), the current supporting functionality of ROS is fairly limited.
A survey published in 2019 [13] introduces the current challenges of HRC systems from the perspective of robot manufacturers and system integrators. This survey summarises the challenges as the design of suitable HRC workstations and the application of essential safety measures. These challenges, however, could be addressed by developing a suitable interface between human and robot in HRC applications. Hence, the main focus of this paper is to facilitate communication between human and robot in HRC applications using a hardware/software framework. For this purpose, implicit human physical and psychological states, such as muscle fatigue, frustration and anxiety, can be identified, and the robot can react accordingly. Such abilities, alone or in conjunction with other capabilities, enable intuitive, effortless, efficient and safe human-robot interactions. In this paper, a framework that extends the existing ROS framework with such modules is presented. This is intended to support various HRC scenarios, and outlines the first step towards achieving the following contributions: 1) Behavioural modelling: By facilitating wearable sensor integration, human behavioural reactions and interactions with the robot can be modelled and used in HRC scenarios.

2) HRC evaluation: One of the open questions in HRC is the quantitative evaluation of productivity (production rate). It is believed that providing physical and psychological insights into human behaviour can directly contribute towards more efficient evaluation mechanisms. 3) Effective and safe HRC workspace: The integration of wearable and bio-mechanical sensors will allow roboticists to select the most relevant sensors and physical measurements for the given HRC tasks. Thus, it helps to design a suitable workplace for HRC applications. The remaining structure of this paper is as follows: Section II illustrates the available bio-mechanical sensors with a brief literature review. The proposed hardware and software framework is explained in Section III. Section IV outlines an illustrative robot teleoperation example using muscle activity signals. Finally, Section V presents the conclusions and future work.

II. OVERVIEW OF WEARABLE SENSORS IN HRC
In HRC, wearable-sensing information systems can be deployed for human-robot communication, such as during co-manipulation and hand-over tasks [29]. These sensors can measure various biological activities that reflect human status. Researchers have utilised various forms of sensing mechanisms and metrics, such as head pose, eye gaze, facial temperature, hand position and orientation, speech, force/torque, body posture, and other biological signals. Generally speaking, the signals that are employed for monitoring human activity can be classified as biological and non-biological signals [17]-[21]. For instance, in [14] wearable IMUs were used to identify human position and movement trajectory while working with a mobile robot. Also, Xie et al. [30] employed flexible haptic sensors to estimate interaction force, besides other wearable sensors, for HRC. Physiological sensors, such as the Electrooculogram (EOG) [22], Electrocardiogram (ECG) [23], Electroencephalogram (EEG) [24], Magnetoencephalogram (MEG) [26], and EMG [25], capture signals generated by the human body, from which important information can be inferred. Lately, these signals have been broadly used in HRC systems to predict the intention of the human operator [26]-[28].
An example of how EEG signals can be used to control a mobile robot is introduced in [15]. Nevertheless, the authors highlighted that the presented work on using brain signals to control robots is still not mature and further research is required [15]. This research also emphasises the potential of data collection and machine learning approaches that can interpret brain signals. In addition, the importance of combining brainwave data with other measurements for more accurate control of robots using EEG signals has been highlighted in [16]. To sum up, wearable sensors have a high potential to revolutionise HRC. However, their use remains challenging due to the lack of data, the limited understanding of biological signals and the absence of suitable machine learning tools. Hence, this paper provides a framework that helps overcome some of these challenges and allows researchers to focus on developing suitable machine learning algorithms for HRC applications. In the rest of this section, a brief introduction to the wearable sensors used in the proposed framework is given.
1) Cardiovascular Signals: Cardiovascular signals are measured and monitored with an ECG. The ECG can detect the heart's responses to stimuli. Responses can be an increased/lowered heart-rate, heart-rate variability, blood pressure, and blood-volume pressure [3]. The recorded data can be interpreted to detect the occurrence of stress, measure mental effort, and identify various emotional states. Generally, an increase in heart-rate over time or a decrease in heart-rate variability is linked to a higher mental workload [3]-[5].
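As a concrete illustration of how such recordings can be interpreted, the sketch below derives the mean heart-rate and the RMSSD (root mean square of successive differences, a standard time-domain heart-rate variability measure) from a list of inter-beat intervals. The function name and units are illustrative assumptions, not part of the presented framework.

```python
import math

def heart_metrics(ibi_ms):
    """Derive mean heart-rate (BPM) and RMSSD variability
    from a series of inter-beat intervals in milliseconds."""
    if len(ibi_ms) < 2:
        raise ValueError("need at least two inter-beat intervals")
    mean_ibi = sum(ibi_ms) / len(ibi_ms)
    bpm = 60000.0 / mean_ibi  # beats per minute from the mean interval
    # RMSSD: root mean square of successive interval differences;
    # a lower RMSSD is commonly linked to higher mental workload
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bpm, rmssd
```

For example, intervals of roughly 800 ms correspond to a heart-rate near 75 BPM; tracking RMSSD over a task then gives the workload trend described above.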
2) Brainwaves: Brain activity can be measured with an EEG sensor. This technology requires electrodes to be placed across the scalp to measure activity in the cerebral cortex [4]. At least two electrodes need to be placed in order to measure the electrical activity that occurs once neurons are stimulated [6]. There are five main frequency bands; the higher the frequency, the more active the associated state: from being drowsy, towards relaxed, then active thinking and focus, to alertness [7], [8]. Lowered alpha waves (8-13Hz) and increased gamma waves (30Hz-100Hz) are associated with higher workloads and stress [9], [10].
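The five bands can be captured in a small lookup table for labelling spectral content. The boundaries below follow commonly cited conventions (e.g. alpha at roughly 8-13 Hz) and vary slightly across the literature; this is an illustrative sketch, not the framework's actual classifier.

```python
# Conventional EEG frequency bands in Hz (boundaries are approximate
# and differ slightly between sources)
EEG_BANDS = [
    ("delta", 0.5, 4.0),    # deep sleep
    ("theta", 4.0, 8.0),    # drowsiness
    ("alpha", 8.0, 13.0),   # relaxed wakefulness
    ("beta", 13.0, 30.0),   # active thinking, focus
    ("gamma", 30.0, 100.0), # alertness, high cognitive load
]

def band_of(freq_hz):
    """Label a dominant frequency with its conventional EEG band name."""
    for name, lo, hi in EEG_BANDS:
        if lo <= freq_hz < hi:
            return name
    return "out of range"
```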
3) Nose Temperature: The human internal temperature strongly correlates with physical and psychological states [10]. The reaction to stimuli increases or decreases blood flow, which leads to variation in skin temperature [11]. The variability in skin temperature is often measured with thermal imaging [12]. Workload-induced thermal changes are mainly detected in facial regions [11]. Among the facial regions, the nose tip is regarded as providing the most consistent indications of stress: under stressful conditions, the nasal temperature decreases [10]. Although physiological sensor data is often grouped into categories, which might give the impression that consistent interpretations can be established, such data often lacks a single monolithic interpretation due to a wide variety of subject-specific characteristics [4].

A. Interaction Forces
As robots are required to operate in close proximity to a human operator, there is a crucial need to measure the interaction forces between the human and the robot [31], [33], [34]. The conventional technique is to measure such forces using multi-axis force sensors [32]. These sensors are essential in many robotics fields, such as manipulation and assembly tasks, ranging from micromanipulation tasks [39] to large industrial tasks [40]. These applications focus on the interaction between a robot and the environment. The main drawback of this method is that it can measure the force only in one location (where the sensor is attached). Another approach is based on tactile sensors that can be utilised to detect touch [31]. Tactile sensors can be classified into hard skin [41], soft skin [35] and intrinsic measurements. The hard skin consists of a set of sensors embedded in a shell that covers the robot arm. Similarly, for the soft skin approach, the sensors are embedded in soft material. The intrinsic approach, however, relies on multi-axis sensors in addition to the joints' motor currents to estimate interaction forces at any point on the robot body [37].
Within the topic of interaction force, the state-of-the-art covers not only collision detection but also extends to active planning for recovery when a fault occurs [38]. Moreover, soft skin approaches have been used as wearable sensors in [41], which can be employed in HRC applications.

III. OVERVIEW OF SYSTEM ARCHITECTURE
In this section, the proposed holistic architecture of the wearable sensor network is explained. Figure 2 depicts the main components of the proposed architecture: the main workstation (ROS master), the robot node, an external observer (i.e. a vision-based human posture sensor), and the wearable sensors. These wearable sensors are connected to the ROS network via WiFi. The wearable sensors include a heart-rate sensor, muscle activity sensors, an EEG sensor to measure brainwaves, a nose temperature sensor and a head movement sensor.
A. Hardware Setup

1) Blue-box: In this section, a brief description of the hardware and software architecture is introduced. Figure 3 shows the blue-box with all sensors connected to it. This box can be carried on the operator's belt, and it is connected to the workstation via WiFi. The blue-box is connected to four Myoware muscle activity sensors, a nose digital temperature sensor and a Phidget IMU for head movement tracking. It is worth highlighting that this research is work in progress; therefore, further design refinements to make the hardware setup more user-friendly are likely in the future. The presented setup, however, can give crucial information regarding the correlation amongst different sensors and tasks, which will assist in future improvements. The blue-box is a Raspberry-Pi with two interface circuits and a USB Bluetooth dongle. The Myoware muscle activity sensor, shown in Figure 4, measures the surface EMG signal. The supply voltage is between +2.9V and +5.7V, and it has two output modes, namely raw data and filtered data. In the proposed framework, the use of four Myoware sensors (two for each arm) is proposed. The interface circuit between the Raspberry-Pi and the sensors is shown in Figure 5. This circuit is composed of a 16-bit Analogue-to-Digital Converter (ADC), four Myoware sensors and four resistors.

The nose digital temperature sensor is also interfaced with the Raspberry-Pi using a pull-up resistor. The sensor output is connected to a Raspberry-Pi digital input. The specification of the digital thermometer is illustrated in Table I. It is worth mentioning here that there are several ways to measure facial temperature, such as a thermal camera. However, the idea here is to show one possible hardware implementation together with a software infrastructure that works with any hardware implementation.
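For illustration, the conversion from a raw 16-bit ADC reading to a voltage can be sketched as below. This is a minimal sketch, not the actual driver: the signed full-scale reference of 4.096 V and the function name are assumptions (typical of common 16-bit I2C converters), and the real interface circuit may use a different reference.

```python
def adc_to_volts(raw, v_ref=4.096, bits=16):
    """Convert a signed ADC count to volts.

    Assumes a signed (bipolar) converter whose positive full scale
    2**(bits-1) corresponds to v_ref volts; both values are
    illustrative, not taken from the presented circuit."""
    full_scale = 2 ** (bits - 1)
    return raw * v_ref / full_scale
```

A processing node would apply this per channel before filtering, so the published EMG values are in volts rather than raw counts.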
2) Heart-rate Zero-Pi: The Zero-Pi is connected to the workstation via WiFi and powered by a lithium battery. The circuit diagram of the interface circuit between the pulse sensor and the Zero-Pi is shown in Figure 6. The pulse sensor clips onto an earlobe and can measure the heart pulse rate based on the volumetric flow of blood within the veins. The operating voltage is +5V or +3.3V, and the current consumption is about 4mA. The working principle of the pulse sensor is straightforward: it is composed of an optical transmitter and receiver, as shown in Figure 7. The sensor has two sides; on the first side, an LED is placed along with an ambient light sensor. On the other side, there is an electronic circuit for the amplification and noise cancellation of the light received on the first side. The LED on the front side of the sensor is placed over a vein in the human body. The emitted light from the LED penetrates the vein directly. The blood vessels reflect the LED light, which is then received by the light sensor. The light reflection is correlated with the density of blood vessels; accordingly, blood flow and heart-rate can be estimated.
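A minimal way to turn the received light signal into a beats-per-minute estimate is to count rising threshold crossings over a known window, assuming the signal has already been amplified and noise-cancelled as described above. The function name, sampling rate and threshold below are illustrative assumptions, not the framework's actual algorithm.

```python
def bpm_from_signal(samples, fs, threshold):
    """Estimate beats-per-minute by counting rising crossings of
    a threshold in a pre-filtered pulse waveform.

    samples   -- list of sensor readings
    fs        -- sampling frequency in Hz
    threshold -- level separating pulse peaks from the baseline
    """
    beats = 0
    above = samples[0] > threshold
    for s in samples[1:]:
        if s > threshold and not above:
            beats += 1  # one rising edge = one detected heartbeat
        above = s > threshold
    duration_s = len(samples) / fs
    return beats * 60.0 / duration_s
```

In practice, a real pulse waveform would also need debouncing (a refractory period) so motion artefacts are not counted as beats.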

B. Software Setup
To develop ROS packages that support the developed hardware, the software must connect to the hardware through developed drivers to facilitate reading the measured data. Then, the received data needs to be processed and finally published on the ROS network (Figure 2). Some hardware developers have provided ROS packages that can be used directly, such as for the IMU sensor and force/torque sensors. However, other sensors do not have ROS packages available, such as the muscle activity sensors, nose temperature sensor and heart-rate sensor.
In general, ROS supports most common messages from different types of sensors that are related to the robot or objects within the workspace. However, generic recognised messages to communicate human physiological metrics are still missing. In this section, a set of messages to communicate the physiological status in the HRC context is introduced. All developed messages consist of headers and data fields. Message headers enable further processing and synchronisation with other messages from the robot or a controller, as each message contains ROS time in the header field.
1) Heart-rate messages: Heart-rate messages provide common measurements of the heartbeats, namely: raw data (Table II), Inter-Beat Interval (ibi) (Table III), and Beats-Per-Minute (BPM) (Table IV). As shown in the tables mentioned above, all of these messages are composed of header and data (raw data, ibi and BPM) fields. 2) Brainwave messages: This message is explicitly developed for the Muse headband. The message is composed of the channels TP9, TP10, AF7 and AF8, which are EEG scalp nodes. Also, the Right Auxiliary (Right AUX) signal is an extra signal that can be used when the hardware is extended with an additional external (brainwave) sensor. The message structure is shown in Table V. 3) Muscle-activity messages: The hardware setup supports four Myoware muscle activity sensors, two for each arm. Hence, the message is composed of four EMG signals in addition to the header, as shown in Table VI.

4) Nose temperature messages: The nose temperature sensor is connected to a Raspberry-Pi digital input, and the developed ROS message directly supports a float value with a header, as illustrated in Table VII.
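Since the custom messages are described here only through their tables, their structure can be mirrored as plain dataclasses for illustration. The field names below are assumptions inferred from the descriptions above, not the package's actual .msg definitions; in the real package each would be a generated ROS message class with a std_msgs/Header.

```python
from dataclasses import dataclass

@dataclass
class Header:
    """Mirrors the role of a ROS header: a sequence number and a
    timestamp (ROS time, seconds) used to synchronise messages."""
    seq: int
    stamp: float

@dataclass
class HeartRateBPM:
    """Beats-Per-Minute message (cf. Table IV): header plus one value."""
    header: Header
    bpm: float

@dataclass
class MuscleActivity:
    """Muscle-activity message (cf. Table VI): header plus four
    surface-EMG channels, two per arm (channel names are illustrative)."""
    header: Header
    right_arm: float
    right_forearm: float
    left_arm: float
    left_forearm: float
```

The shared header is what allows, for example, a muscle-activity sample to be aligned in time with a force/torque reading from the robot.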

IV. ILLUSTRATIVE EXAMPLE
To illustrate how the proposed hardware/software works, this section outlines a human-robot teleoperation example based on muscle activity. The experimental setup is composed of muscle activity sensors connected to the blue-box (wired), a UR10 6-axis robot and a stationary workstation. Figure 8 shows the experimental setup and the flow chart of the muscle gesture control. This example requires four ROS nodes: the muscle activity sensors node, the muscle activity processing node, the group commander (MoveIt) node and the robot node. The logic behind this example is as follows: if the muscle activity of the right arm is higher than that of the left arm, the robot moves 5 cm in the Y direction, and −5 cm in the Y direction if the opposite is true. The muscle activity node publishes EMG messages for the arm and forearm. Then, the processing node filters outliers using a moving-average window; after that, it checks the values of the EMG signal and sends a string message to the group commander. The group commander node generates robot commands to modify the current robot Cartesian position based on the activated muscles. The main goal of the illustrative example is to demonstrate how the overall system can be easily integrated with a robotic setup. The developed messages and their sampling frequencies are shown in Figure 9. Another important functionality that can be achieved is the synchronisation of different messages, such as the force/torque message with the muscle activity messages, as shown in Figure 9. Finally, the illustrative example is shown in a video on the GitHub repository page.
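The decision logic of the processing node can be sketched as follows: a moving-average window smooths each EMG channel, and the sign of the commanded Y step depends on which arm is more active. The class name, window length and the use of one aggregate channel per arm are illustrative assumptions, not the released implementation.

```python
from collections import deque

class GestureCommander:
    """Smooth two EMG channels with a moving average, then decide
    the Y-direction step for the robot (+step if the right arm is
    more active, -step otherwise)."""

    def __init__(self, window=5, step_cm=5.0):
        self.right = deque(maxlen=window)  # recent right-arm EMG samples
        self.left = deque(maxlen=window)   # recent left-arm EMG samples
        self.step_cm = step_cm

    def update(self, right_emg, left_emg):
        self.right.append(right_emg)
        self.left.append(left_emg)
        r = sum(self.right) / len(self.right)
        l = sum(self.left) / len(self.left)
        return self.step_cm if r > l else -self.step_cm
```

In the full pipeline, the returned step would be wrapped in a string message for the group commander node, which converts it into a Cartesian goal for the robot.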

V. DISCUSSIONS AND CONCLUSIONS
In HRC applications, there are still open questions that have to be addressed to achieve safe, efficient collaboration between humans and robots. Wearable sensors are deemed to be part of the solution to overcome these limitations. For instance, wearable sensors combined with analytical tools can be utilised to reliably predict human intentions in HRC, because wearable sensors are capable of capturing changes in brainwaves, heart-rate and muscle activity in advance. Moreover, these sensors provide insights into the physiological status of the human body, which represents an intuitive way of communication between the human and the robot. Most of the reviewed work in this field focuses on one aspect and one wearable sensor in the HRC application. However, a holistic framework that combines more wearables and more metrics is the next step to advance the HRC field. Hence, this paper presented a sensor framework for HRC applications which can support data collection and integration during various HRC scenarios. Also, this paper presented a brief description of various sensors that can be utilised in HRC applications. This is intended to further improve HRC by increasing effectiveness and safety. The software was developed as a ROS package, which has been made available and can be found on the Intelligent Automation GitHub 2 .
The main advantages of the proposed framework are that it is scalable in terms of integrating more sensors and developing more suitable custom ROS messages and topics. Also, this framework facilitates the use of analytical tools to process and analyse the captured data during the execution of HRC tasks. Nevertheless, this is only an initial step toward more efficient HRC systems, and there are still many limitations, such as the development/selection of the analytical tools to understand the sensory data, and how to map these sensory data into actions within the HRC while maintaining safety and productivity.
The proposed framework (as a proof of concept) can assist researchers in overcoming some of the HRC challenges outlined in this paper. The presented framework can also provide a guideline for designing a suitable workplace for HRC by determining a practical hardware/software implementation. In order to validate the proposed framework, further experiments are intended to demonstrate its effectiveness in overcoming the challenges within HRC.
ACKNOWLEDGMENT

This work was funded by the EPSRC as part of the Digital Toolkit for optimisation of operators and technology in manufacturing partnerships project (DigiTOP; EP/R032718/1).