Optometrist’s Algorithm for Personalizing Robot-Human Handovers
Abstract
With increasing interest in human-robot collaboration, there is a need to develop robot behavior while keeping the human user’s preferences in mind. Highly skilled human users doing delicate tasks require their robot partners to behave according to their work habits and task constraints. To achieve this, we present the use of the Optometrist’s Algorithm (OA) to interactively and intuitively personalize robot-human handovers. Using this algorithm, we tune controller parameters for speed, location, and effort. We study the differences in the fluency of the handovers before and after tuning, as well as the subjective perception of this process, in a study of non-expert users of mixed backgrounds – thereby evaluating the OA. The users evaluate the interaction on trust, safety, and workload scales, amongst other measures. They assess our tuning process to be engaging and easy to use. Personalization leads to an increase in the fluency of the interaction. Our participants utilize the wide range of parameters, each ending up with their own unique personalized handover.
I INTRODUCTION
There has been an increased interest in incorporating robots to accomplish specific tasks where human skill and experience are valuable. In such tasks, robots can work as assistants. Manipulators can provide the skilled user with an extra arm to hold more objects, move far-away objects closer, and perform many other motions per the task requirements. While collaborating with a skilled human performing delicate tasks, it is preferable that incorporating robotic assistants does not require the human user to change the habits and techniques they have developed over the years. Instead, the robot should adapt its behavior to match its human partner.
Using Human-In-The-Loop control systems where the human can directly or indirectly influence the control signal is a promising approach to this problem. Such methods allow for a customized human-robot interaction. These controllers can be classical [4] or predictive systems [8]. However, expert knowledge and an understanding of the system’s parameters are often required to tune a robot controller to one’s liking [1]. Moreover, tuning of parameters is usually achieved by directly selecting parameter values through some interface [3, 2]. Even with expert knowledge, directly choosing the preferred parameter values can be difficult and time-consuming [1].
[Figure 1: handover1-1.png]
This work focuses on personalizing a controller for a collaborative task – the robot-human handover. We introduce, implement, and analyze a personalization algorithm – the Optometrist’s Algorithm – which gives non-expert users intuitive access to parameter tuning. A robot-human handover is the transfer of an object from the robot to a human collaborating with it [5]. We choose handovers because they are a widely studied, fundamental robot skill with numerous applications in various situations.
II RELATED WORKS
In the effort to make robot-human handovers more natural and fluent, different control systems, e.g., [6, 4, 7] and learning frameworks, e.g., [10, 9, 11], have been established.
A study by Cakmak et al. [12] shows the importance of involving humans in the personalization process. The authors report that their participants preferred handover configurations learned from human examples over planned configurations, despite the planned configurations being more efficient in terms of objective metrics.
A human-feedback controller for handovers was introduced by Kshirsagar et al. [3]. In this work, the human user gave the robot feedback based on task constraints. The participants directly tuned the parameters of the robot controller by choosing, through a GUI (sliders), the parameter values to be used in the particular task. The participants of this study described the need for a more accessible way to tune robot parameters.
Kupcsik et al. [11] used human preferences and evaluative feedback to train a handover. They used the feedback to estimate a latent reward function and learned the handover using contextual policy search. This study tuned several non-intuitive control parameters, such as compliance and grip force. The feedback consisted of an absolute factor, which graded the interaction, and a preferential factor, which rated the interaction against earlier ones. The authors report that absolute feedback is preferred when decisions are obvious (failure cases, a very bad controller, an excellent controller), whereas preference-based feedback is easier for human users to provide.
Building upon these insights, we present our method to tune robot-human handovers according to user preferences, with few interactions (the exact number depends on the problem size and search space) and with no need to understand or directly choose values for any of the parameters involved. We present an easy-to-use method – the Optometrist’s Algorithm (OA) – which allows a non-expert user to indirectly tune several parameters of a handover controller. The OA performs the tuning procedure in a manner inspired by the way one chooses eye-glasses. We provide evidence that this type of comparison-based feedback allows for an intuitive tuning process, even when the parameters are hard to understand or when multiple parameters need to be tuned sequentially (the algorithm also allows parallel tuning of multiple parameters, which we did not use in this study).
While preparing the manuscript, we found an implementation of a similar algorithm for tuning experimental parameters in a fusion experiment [13]. Our work introduces a distilled version of the same concept into robotics and HRI for personalizing a robot controller. Specifically, we study the use of this algorithm in tuning a handover and its effects on objective and subjective task metrics such as success and fluency. Furthermore, we assess the personalization of our controller and whether any parameters affect the user more than others. We also investigate whether our participants can identify slight variations in their tuned handovers after working with the robot for a while, to assess the personalization.
III METHOD
In this study, we developed a customized robot-human handover controller for an object of fixed shape. The participants personalized this controller using the Optometrist’s Algorithm (OA) by repeatedly choosing between two options (comparative feedback).
III-A Handover Controller
The handover motion was developed using the Sawyer robot’s motion controller. The reach actions were based on ROS MoveIt!. The robot end-effector was commanded to reach the necessary coordinates in Cartesian space. A hand-written script controlled the robot’s actions and the phases of the handover.
The robot picked up the object from a table (pick-up location) and moved to the starting location (both poses predefined and fixed throughout all experiments). From there, it began the ’reach’ phase of the handover and moved towards the user (the location of the handover – proximity, side, and height – as well as the speed of the reach was tunable through the OA). Once the robot stopped, it waited for the user to initiate the object transfer. The user could take the object from the robot’s grasp by overcoming a force threshold (tunable through the OA). Once the object was released, the robot moved back above the pick-up location. Meanwhile, the user had to return the object to the pick-up location and prepare for the subsequent handover. We chose to determine the handover location from a fixed pose defined by parameters, rather than from other sources (such as computer vision), as this standardizes the handover and its tuning for each sample and each participant. In addition, the need to tune a pose in the robot’s reference frame provides us with a non-intuitive parameter.
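The scripted handover cycle described above can be sketched as follows. This is an illustrative stand-in, not the actual Sawyer/MoveIt! code: the `move_to`, `grasp`, `release`, and `measured_pull` primitives, the pose names, and the `StubRobot` class are all our assumptions.

```python
PICKUP_POSE = "pickup"  # fixed pick-up location (placeholder pose name)
START_POSE = "start"    # fixed starting location (placeholder pose name)

def run_handover(robot, reach_pose, speed, force_threshold):
    """One handover cycle, following the phases described above.

    `robot` may be any object providing the hypothetical primitives
    move_to / grasp / release / measured_pull.
    """
    phases = []
    robot.move_to(PICKUP_POSE)
    robot.grasp()
    phases.append("pick")
    robot.move_to(START_POSE)
    phases.append("start")
    robot.move_to(reach_pose, speed=speed)  # tunable location and speed
    phases.append("reach")
    # Wait until the user pulls harder than the tunable force threshold.
    while robot.measured_pull() < force_threshold:
        pass
    robot.release()
    phases.append("transfer")
    robot.move_to(PICKUP_POSE)  # retract and prepare the next cycle
    phases.append("retract")
    return phases

class StubRobot:
    """Minimal stand-in used to exercise the script without hardware."""
    def __init__(self):
        self._pull = 0.0
    def move_to(self, pose, speed=None):
        pass
    def grasp(self):
        pass
    def release(self):
        pass
    def measured_pull(self):
        self._pull += 5.0  # simulated: the user pulls harder on each poll
        return self._pull
```

For example, `run_handover(StubRobot(), "user_front", speed=0.4, force_threshold=15)` walks through the pick, start, reach, transfer, and retract phases in order.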
The parameters that we tuned are given in Table I. For each parameter, a lower limit, an upper limit, and a step size were determined empirically before the experiment, resulting in five sets of roughly ten parameter values each to be tuned by the OA.
| Parameter | Min | Step | Max |
|---|---|---|---|
| | 0.1 | 0.1 | 0.8 |
| | 0.8 | 0.025 | 1.0 |
| | -0.2 | 0.075 | 0.2 |
| | 0.15 | 0.025 | 0.35 |
| | 13 | 2 | 23 |
III-B Optometrist’s Algorithm
The Optometrist’s Algorithm (OA, see Algorithm III-B) was developed and used to determine a personalized set of parameters using a technique similar to an optometrist determining the correct power of one’s eye-glasses. The user was shown two options – variations of a single parameter – and was asked to choose the preferred option. Once the tuning of a parameter converged, the OA moved on to the next one. One of the options was the handover preferred in the previous step, and the other was a new option (with the obvious exception of the very first step). After the user chose an option, the algorithm stepped, varying the parameter by one step size. The previous choice decided the direction of the step: if the new option was chosen, the next step was taken in the same direction; if the same option was chosen again, the direction of the next step was flipped. This allowed us to parse and present the entire parameter range to the user, possibly accessing parts of previously disregarded parameter space if needed. The process finished when the difference between the two options to be presented to the participant became smaller than the step size or when the same option was chosen four times in a row (this second stopping criterion is omitted in Algorithm III-B for readability). We predefined the order in which we tuned the parameters as Speed, Position, and Force. Within a few iterations of this exercise, the algorithm converged to the set of parameters that the user preferred the most.
[Algorithm III-B: The Optometrist’s Algorithm (pseudocode listing)]
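The tuning loop for a single parameter can be sketched as below. This is a minimal reading of the description above, not the study's actual implementation: the initial step direction, the clamping behavior at the range limits, and the `choose(a, b)` callback (standing in for running two handovers and registering the user's verbal choice) are our assumptions.

```python
def optometrist_tune(choose, start, step, lo, hi, max_same=4):
    """Tune one parameter by pairwise comparisons (Optometrist's Algorithm).

    choose(a, b) presents handovers with parameter values a and b and
    returns whichever value the user preferred.
    """
    preferred = start
    direction = 1        # initial search direction (assumption)
    same_in_a_row = 0
    while True:
        # Next candidate: one step away, clamped to the parameter range.
        candidate = min(hi, max(lo, preferred + direction * step))
        if abs(candidate - preferred) < 0.5 * step:
            break        # clamped at a range limit: options too close, stop
        if choose(preferred, candidate) == candidate:
            preferred = candidate    # new option won: keep stepping this way
            same_in_a_row = 0
        else:
            direction = -direction   # old option won again: flip direction
            same_in_a_row += 1
            if same_in_a_row >= max_same:
                break                # second stopping criterion
    return preferred
```

With a simulated user who always picks the value closer to a hidden preference, the loop walks toward that preference and stops once the same option keeps winning or a range limit is hit.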
The OA presented here is optimized for small-scale problems with fewer parameters and a limited range. However, it is possible to tune several thousand parameters at a time, even with more extensive ranges, by introducing probabilistic steps and tuning sets of dependent parameters [13]. In any case, setting up the parameters and their ranges takes some expert knowledge and effort.
IV EXPERIMENT
IV-A Task and Protocol
The participants performed robot-human handovers during two experiment phases, each preceded by a short practice session. The tuning and evaluation phases were performed sequentially in a single sitting of about 20 minutes. During the experiment, the participants were seated in a fixed chair in front of the robot, with a table serving as the shared workspace of the human-robot team (Fig. 1). To obtain qualitative data, the participants were asked to complete questionnaires before and after the experiment (see Sec. IV-C). The first practice session of five handovers before the tuning phase let the participants get used to the robot’s actions and the task. The parameters used for these handovers were a fixed set of near-average values, the same for each participant.
In the tuning phase, the participants interacting with the OA were told, in layperson’s terms, what aspect of the handover they were tuning (speed, position, and the force of the grip). The participants verbally announced their choice to the experimenter, who then used a simple interface to register it. In the second practice session of five handovers (between the tuning and the evaluation phases), the participants were asked to perform their personalized handovers. The handovers performed in these practice sessions were used to study the differences between the handovers before and after tuning. In the evaluation phase, the participants were again shown two options. One of the options was their personalized handover, and the other was a different one – similar to the preferred handover, but with slight noise added to one of the parameters. Participants were asked to identify their tuned handover. Note that as only one parameter was varied at a time and within a limited amount (1-2 times the step size), the handovers were still relatively similar to each other, and it was more challenging for the participants to find their own handover (compared to varying more parameters over a broader range).
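The perturbed option used in the evaluation phase can be generated as in this sketch. The ±1-2-step noise rule follows the description above; the parameter names, the uniform sampling, and the function itself are illustrative assumptions.

```python
import random

def perturbed_variant(tuned, steps, rng=None):
    """Copy the tuned parameter set and shift exactly one randomly
    chosen parameter by 1-2 step sizes in a random direction."""
    rng = rng or random.Random()
    name = rng.choice(sorted(tuned))  # which parameter to disturb
    variant = dict(tuned)
    variant[name] += rng.choice([-2, -1, 1, 2]) * steps[name]
    return name, variant
```

For example, with hypothetical tuned values `{"speed": 0.4, "height": 0.25, "force": 17}` and the corresponding step sizes, the function returns a copy in which exactly one parameter differs by one or two steps.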
[Figure 2: oa-flowchart.png]
[Figure 3: Questionnaire_questions-1.png]
IV-B Participants
The experiment was performed with participants (Ages: , years, Females), all of whom volunteered to be in the experiment for no compensation. All the participants were healthy university students or researchers. The participants were from a wide range of professional and educational backgrounds. The experiment was approved by the ethical committee of the Ben-Gurion University’s Department of Industrial Engineering and Management.
IV-C Subjective Metrics
The participants were asked to complete the Technology Adoption Propensity (TAP) [14] and Negative Attitude towards Robots Scale (NARS) [15] questionnaires before the experiment. After the experiment, the Technology Acceptance Model (TAM) [16] was used to study the acceptance of the personalization task based on factors such as ’Ease of Use,’ ’Perceived Usefulness,’ ’Attitude’ of the users, and their ’Intention of Using’ this method in the future. The questionnaire used was based on questionnaires established in earlier studies [17, 18]. The ’Effort’ component of the ’Ease of Use’ variable was evaluated using the NASA-TLX questionnaire [19] for perceived workload and effort. For ’Trust’, the ’Competency’ and ’Reliability’ sub-scales of the Multi-Dimensional Measure of Trust (MDMT) [20] were used. All evaluations were based on a 7-point Likert scale (unless stated otherwise).
We also studied the subjective perception of factors such as the safety and fluency of the handovers. Safety was evaluated using the Godspeed Questionnaire for Perceived Safety [21], and the fluency questionnaire was based on the works of Hoffman [22] and Paliga et al. [23]. The subjective questionnaire is detailed in Fig. 3.
IV-D Objective Metrics
IV-D1 Handover Success
The success rate of handovers was calculated over all the handovers in the tuning, practice, and evaluation phases. The success of a handover in an interactive adaptation scenario is of utmost importance, as more failures may lead to sub-par tuning. As this study included participants from varying backgrounds, with the majority having never used a robot before, the success of a handover was crucial in having the non-expert user trust the robot [24, 26]. For this reason, the controller was designed to be robust; hence, the handover success rate was high during the experiment. It is important to note that while a high handover success rate does not imply successful use of the OA, it ensures that a failure-ridden robot interaction does not influence the users’ perception of the effectiveness of the OA. Therefore, we report the general success rate throughout the experiment rather than comparing before and after the tuning phase (they are similarly high by design). Failed handovers were defined as any handover instances that did not result in a successful object transfer (drops, non-release of the object, and software or communication failures of the robot). The experimenter manually recorded the failures during the experiment.
IV-D2 Fluency
Fluency is a commonly measured and studied characteristic of human-robot interaction [22]. In this work, we use the fluency metrics defined by Hoffman [22]. We use four metrics and correlate them with the subjective perception of fluency. These metrics are defined as follows:
• Robot idle time (R-IDLE): ratio of the total task time the robot spends without performing an activity.
• Human idle time (H-IDLE): ratio of the total task time the human spends without performing an activity.
• Concurrent activity (C-ACT): ratio of the task time during which both agents perform an activity.
• Functional delay (F-DEL): ratio of the task time spent between the end of one agent’s activity and the beginning of the other agent’s subsequent activity.
While we list, define, and evaluate C-ACT, its relevance for our handover task and controller is limited. This is because we have programmed constant delays into the interaction, and the overlap of concurrent activity in our interaction is negligible.
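Given per-agent lists of (start, end) activity intervals and the total task time, the ratio metrics above can be computed as in this sketch. The interval bookkeeping is our assumption, not code from the study, and F-DEL is omitted because it requires pairing consecutive activities of the two agents.

```python
def _union_length(intervals):
    """Total length covered by possibly overlapping (start, end) intervals."""
    total, last_end = 0.0, float("-inf")
    for s, e in sorted(intervals):
        s = max(s, last_end)
        if e > s:
            total += e - s
            last_end = e
    return total

def _intersection_length(a, b):
    """Overlap between two interval lists (each list assumed
    non-overlapping within itself)."""
    return sum(max(0.0, min(e1, e2) - max(s1, s2))
               for s1, e1 in a for s2, e2 in b)

def fluency_metrics(robot_acts, human_acts, task_time):
    """R-IDLE, H-IDLE, and C-ACT as ratios of the total task time."""
    r_busy = _union_length(robot_acts)
    h_busy = _union_length(human_acts)
    return {
        "R-IDLE": (task_time - r_busy) / task_time,
        "H-IDLE": (task_time - h_busy) / task_time,
        "C-ACT": _intersection_length(robot_acts, human_acts) / task_time,
    }
```

For instance, a robot active during seconds 0-4 and a human active during seconds 3-10 of a 10-second task give R-IDLE 0.6, H-IDLE 0.3, and C-ACT 0.1.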
IV-D3 Needed Tuning Steps
We counted the steps required to tune each parameter using the OA. This count is needed to study the user effort in tuning the controller: each extra trial adds approximately seconds to the tuning phase and requires the user to perform two more handovers. It also indicates which parameters were easy to tune and whether any parameter confused the participants.
IV-D4 Identifying Tuned Handovers
In the evaluation phase, we asked the users to identify their tuned handovers among similar handovers. This yielded evidence about the success of the personalization process and indicated which parameters the participants valued most, as reflected in their ability to identify minor discrepancies in them.
V RESULTS AND DISCUSSION
The experiment task required an average of minutes and required the users to perform handovers, including practice, tuning, and evaluation phases. All users performed five handovers in each practice session and ten handovers in the evaluation session (20 out of about 60 handovers). participants showed a negative attitude towards robots, while showed a moderately positive attitude on the NARS Scale (mean score ). On the TAP scale, all the participants showed a high propensity for adopting new technology (mean score ).
V-A Success Rate
The experiment had a very high handover success rate ranging from to (), including the handovers during tuning.
Of the failures, three occurred because the robot’s motion controller failed to generate a collision-free trajectory. Four failures were due to false triggering of the transfer. Once, a participant dropped the object through no fault of the robot. For failures during the tuning phase, the pair of options was repeated to continue tuning.
V-B Personalization
The distribution of parameters preferred by the users is shown in Fig. 4. It is clear from the data that different participants prefer different parameter values and can tune for them. The participants preferred to tune the handovers to a proximal handover location rather than the mean parameter value (). But most people decided not to let the robot come too close. While choosing the side of the handover, participants preferred to keep the handover location at the center. Of the people who preferred either the left or right side ( out of ), all chose the side of their dominant hand. Most participants chose to keep the handover height lower than the mean (starting) value of the parameter (). The users preferred handovers faster than the mean speed (). While this parameter can also be task-dependent, even non-experienced users chose higher robot speeds. Participants preferred low values of the threshold trigger force () compared to the mean of the parameter range. The users tried to minimize their effort while performing the handovers, as reported in [22]. Many users admitted that they would have liked to choose a lower force than the minimum of the parameter range; however, in this study, we were limited by the design of the controller (robustness).
[Figure 4: charts-1.png]
The entire tuning process took between minutes and and steps (), amounting to twice the number of handovers performed. Tuning the handover height required the most steps ( and sec) while tuning the side of the handover location (left or right) required the least steps ( steps and sec). Despite every user needing to perform approximately handovers for the personalization, the users perceived this activity as a low workload on the NASA-TLX Questionnaire (score on a 7-point Likert scale).
V-C Evaluation Phase
Most participants could identify their handover between variations in speed ( out of ), followed by variations in the threshold force required to trigger the transfer ( out of ). For variations in handover location, the participants performed the worst ( out of ). In five attempts, the average participant scored . The participants who performed best were those who had tuned the values to the extreme ends of the parameter range.
V-D Subjective Measures
The users evaluated the personalization process very positively through the subjective questionnaire (Fig. 5). The process was primarily perceived as useful () with out of users. out of users agreed that the method was easy to understand. The users also showed a positive attitude towards this method, especially in its ’Engagement’ component, with out of participants strongly agreeing to have been completely focused on the activity. Users (, out of ) generally agreed () that they might use this method in the future. Of the four users who disagreed, one reported that the tuning process did not produce a desirable handover. Two users mentioned that the process was ”too repetitive” and ”not enjoyable”, while one user felt that ”the robot did not respond to their feedback”.
[Figure 5: questionnaire_scores-1.jpg]
On the Multi-Dimensional Measure of Trust (MDMT) scale, the users scored the robot highly on both the Reliability () and the Competency () sub-scales. The users also perceived high robot safety, scoring on a five-point scale of the Godspeed questionnaire. The user perception of trust and safety did not change even for those participants who experienced one of the few handover failures. A few users reported that they were at times surprised by some of the options that were ”too fast” or when the robot ”came too close”. However, all the users rated the task highly on the trust and safety scales. A low correlation was observed between success rate and safety () and between success rate and trust ().
V-E Fluency
Our metrics suggest a high perception of subjective fluency () and of the improvement in fluency () after tuning. The significance of all the metrics was evaluated using a two-tailed test. A significant increase in H-IDLE () and a highly significant decrease in F-DEL were observed after personalization. Both these observations correspond to an increased objective fluency after personalization [22]. Comparing the fluency metrics to subjectively perceived fluency, there was no correlation with either the change in H-IDLE () or in F-DEL (). This implies that the handover fluency improved objectively, but not all participants could perceive it. An inconsistent subjective perception of fluency in a proactive coordination task such as ours (the robot did not wait for a signal from the human before starting the handover) has already been reported by Huang et al. [25].
There was no significant change in C-ACT (), which was expected, as the delays programmed in the controller were constant, to make the tuning interaction more consistent. After personalization, R-IDLE showed a highly significant decrease (). A major factor contributing to R-IDLE was the robot waiting for the user to initiate the object transfer. Initially, as the handover location was not personalized and the force to trigger the transfer was not preferred by most users, this phase took longer. This metric can also be interpreted as task efficiency in terms of robot usage during the task. This task efficiency improves significantly after personalization.
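The before/after comparisons above can be checked with a paired two-tailed t-test. A pure-Python sketch of the test statistic follows (the study's exact procedure is not specified; the p-value would then come from the t distribution with n-1 degrees of freedom, e.g. via scipy.stats):

```python
import math

def paired_t(before, after):
    """t statistic of a paired two-tailed t-test on matched samples,
    e.g. one fluency-metric value per participant before and after tuning."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    # unbiased sample variance of the paired differences
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)
```

For three hypothetical participants measured before and after tuning, `paired_t([1.0, 2.0, 3.0], [2.0, 2.5, 4.5])` gives t = 2√3 ≈ 3.464, which would then be compared against the two-tailed critical value for 2 degrees of freedom.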
VI CONCLUSIONS
In this work, we presented the use of the Optometrist’s Algorithm to personalize robot-human handovers within a few interactions. We showed that our algorithm provides non-expert users with an easy, engaging, effortless, and thus effective method to intuitively personalize their handovers. We successfully tuned intuitive parameters like speed and non-intuitive parameters like the handover pose. Personalization appears successful, as participants can not only tune their controllers to their liking but also differentiate between their control settings and perturbed versions of them. On a more fundamental level, all our participants end up with individual controllers and utilize a wide range of parameters. This is again in contrast to the solution that is optimal according to objective metrics, where the robot, e.g., moves as fast as possible towards the participants.
We saw that most participants preferred a quick and effortless handover that was proximal but not too close to them. The handover speed was the most important parameter affecting user preference. Our human-in-the-loop personalization process also improved the objective fluency of the handovers. This was not fully reflected in the subjective metrics – which we attribute to the proactive nature of our task. Also, user preferences may depend on the specific task or be object-specific; both are possible variations of our study that remain for future work.
A further variation would be to apply the algorithm to tune several parameters at once, which is especially useful for a larger number of parameters. While the potential of this class of algorithms to tune a large number of parameters has already been established in other fields [13], its application and effectiveness for tuning a robot’s behavior with a non-expert user in the loop, when dependent on many more and larger parameter spaces than we used, remain to be shown.
ACKNOWLEDGMENT
This research was supported by Ben-Gurion University of the Negev through the Agricultural, Biological, and Cognitive Robotics Initiative, the Marcus Endowment Fund, the W. Gunther Plaut Chair in Manufacturing Engineering, ISF Grant 1651/19, and the Lynn and William Frankel Center for Computer Science, the Israeli Ministry of Aliyah and Integration as well as BITS-Pilani K. K. Birla Goa Campus, India.
References
- [1] Simon, A. M., Ingraham, K. A., Fey, N. P., Finucane, S. B., Lipschutz, R. D., Young, A. J., & Hargrove, L. J. (2014). Configuring a powered knee and ankle prosthesis for transfemoral amputees within five specific ambulation modes. PloS one, 9(6), [e99387]. https://doi.org/10.1371/journal.pone.0099387
- [2] A. Alili, V. Nalam, M. Li, M. Liu, J. Si and H. H. Huang, ”User Controlled Interface for Tuning Robotic Knee Prosthesis,” 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 2021, pp. 6190-6195, doi: 10.1109/IROS51168.2021.9636264
- [3] A. Kshirsagar, R. K. Ravi, H. Kress-Gazit and G. Hoffman, ”Timing-Specified Controllers with Feedback for Human-Robot Handovers,” 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Napoli, Italy, 2022, pp. 1313-1320, doi: 10.1109/RO-MAN53752.2022.9900856.
- [4] Micelli, Vincenzo; Strabala, Kyle; Srinivasa, Siddhartha (2018). Perception and Control Challenges for Effective Human-Robot Handoffs. Carnegie Mellon University. Journal contribution. https://doi.org/10.1184/R1/6557429.v1
- [5] Costanzo M, De Maria G, Natale C. Handover Control for Human-Robot and Robot-Robot Collaboration. Front Robot AI. 2021 May 7;8:672995. doi: 10.3389/frobt.2021.672995. PMID: 34026858; PMCID: PMC8138472.
- [6] A. Kshirsagar, H. Kress-Gazit, and G. Hoffman, ”Specifying and Synthesizing Human-Robot Handovers,” 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 2019, pp. 5930-5936, doi: 10.1109/IROS40897.2019.8967709.
- [7] J. R. Medina, F. Duvallet, M. Karnam, and A. Billard, “A human-inspired controller for fluid human-robot handovers,” in IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), 2016, pp. 324–331.
- [8] Yang, Wei & Sundaralingam, Balakumar & Paxton, Chris & Akinola, Iretiayo & Chao, Yu-Wei & Cakmak, Maya & Fox, Dieter. (2022). Model Predictive Control for Fluid Human-to-Robot Handovers.
- [9] A. Kshirsagar, G. Hoffman and A. Biess, ”Evaluating Guided Policy Search for Human-Robot Handovers,” in IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 3933-3940, April 2021, doi: 10.1109/LRA.2021.3067299.
- [10] Wu, Min & Taetz, Bertram & He, Yanhao & Bleser, Gabriele & Liu, Steven. (2021). An adaptive learning and control framework based on Dynamic Movement Primitives with application to human–robot handovers. Robotics and Autonomous Systems. 148. 103935. 10.1016/j.robot.2021.103935.
- [11] A. Kupcsik, D. Hsu, and W. S. Lee, ”Learning dynamic robot-to-human object handover from human feedback,” Robotics Research, vol. 1, p. 161, 2017.
- [12] Cakmak, Maya & Srinivasa, Siddhartha & Lee, Min Kyung & Forlizzi, Jodi & Kiesler, Sara. (2011). Human preferences for robot-human hand-over configurations. 1986-1993. 10.1109/IROS.2011.6048340.
- [13] Baltz, E.A., Trask, E., Binderbauer, M. et al. Achievement of Sustained Net Plasma Heating in a Fusion Experiment with the Optometrist Algorithm. Sci Rep 7, 6425 (2017). https://doi.org/10.1038/s41598-017-06645-7
- [14] Ratchford, Mark & Barnhart, Michelle. (2011). Development and Validation of the Technology Adoption Propensity (TAP) Index. Journal of Business Research, 65, 1209. doi: 10.1016/j.jbusres.2011.07.001.
- [15] Syrdal, Dag Sverre & Dautenhahn, Kerstin & Koay, Kheng & Walters, Michael. (2009). The Negative Attitudes Towards Robots Scale and Reactions to Robot Behaviour in a Live Human-Robot Interaction Study
- [16] Davis, Fred D. ”Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology.” MIS Quarterly 13, no. 3 (1989): 319–40. https://doi.org/10.2307/249008.
- [17] Krakovski M, Kumar S, Givati S, Bardea M, Zafrani O, Nimrod G, Bar-Haim S, Edan Y. ”Gymmy”: Designing and Testing a Robot for Physical and Cognitive Training of Older Adults. Applied Sciences. 2021; 11(14):6431. https://doi.org/10.3390/app11146431
- [18] Avioz-Sarig, O. Robotic System for Physical Training of Older Adults. Master’s Thesis, Ben-Gurion University of the Negev, Beersheba, Israel, 2019.
- [19] Domen Novak, Benjamin Beyeler, Ximena Omlin, Robert Riener, Workload Estimation in Physical Human-Robot Interaction Using Physiological Measurements, Interacting with Computers, Volume 27, Issue 6, November 2015, Pages 616–629, https://doi.org/10.1093/iwc/iwu021
- [20] Ullman, D., & Malle, B. F. (2019). Measuring gains and losses in human-robot trust: Evidence for differentiable components of trust. In Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction, 618-619.
- [21] Bartneck, C., Croft, E., & Kulic, D. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1), 71-81.
- [22] G. Hoffman, ”Evaluating Fluency in Human–Robot Collaboration,” in IEEE Transactions on Human-Machine Systems, vol. 49, no. 3, pp. 209-218, June 2019, doi: 10.1109/THMS.2019.2904558.
- [23] Mateusz Paliga, Anita Pollak, Development and validation of the fluency in human-robot interaction scale. A two-wave study on three perspectives of fluency, International Journal of Human-Computer Studies, Volume 155, 2021, 102698, ISSN 1071-5819, https://doi.org/10.1016/j.ijhcs.2021.102698.
- [24] Krakovski, Maya & Aharony, Naama & Edan, Yael. (2022). Robotic Exercise Trainer: How Failures and T-HRI Levels Affect User Acceptance and Trust. 10.48550/arXiv.2209.01622.
- [25] C. Huang, M. Cakmak, and B. Mutlu, ”Adaptive coordination strategies for human-robot handovers,” Proc. Robot.: Sci. Syst. Conf., 2015.
- [26] Reißner, Nadine & El Faramawy, Samir & Kraus, Johannes & Baumann, Martin. (2021). The role of successful human-robot interaction on trust – Findings of an experiment with an autonomous cooperative robot.