Living with Robots: Emerging Issues on the Psychological and Social Implications of Robotics
Ebook · 444 pages · 4 hours


About this ebook

Living with Robots: Emerging Issues on the Psychological and Social Implications of Robotics focuses on the issues that come to bear when humans interact and collaborate with robots. The book dives deeply into the critical factors that shape how individuals interact with robots at home, at work, and at play. Its topics range from anthropomorphic robot design and degree of autonomy to trust, individual differences, and machine learning. While other books focus on engineering capabilities or the highly conceptual, philosophical issues of human-robot interaction, this resource tackles the human elements at play in these interactions, which are essential if humans and robots are to coexist and collaborate effectively.

Authored by key researchers in the psychology of robotics, the book limits its focus to robots that are intended to interact with people, including technologies such as drones, self-driving cars, and humanoid robots. Forward-looking, the book examines robots not as the novelty they once were, but as practical participants in our everyday lives.

  • Explores how individual differences in cognitive abilities and personality influence human-robot interaction
  • Examines the human response to robot autonomy
  • Includes tools and methods for the measurement of social emotion with robots
  • Delves into a broad range of domains - military, caregiving, toys, surgery, and more
  • Anticipates the issues we will encounter with robots in the next ten years
  • Foreword by Maggie Jackson, author of Distracted
Language: English
Release date: Nov 30, 2019
ISBN: 9780128156353

    Book preview


    Living with Robots

    Emerging Issues on the Psychological and Social Implications of Robotics

    Editors

    Richard Pak

    Department of Psychology, Clemson University, Clemson, SC, United States

    Ewart J. de Visser

    Warfighter Effectiveness Research Center (WERC), U.S. Air Force Academy, Colorado Springs, CO, United States

    Ericka Rovira

    Department of Behavioral Sciences and Leadership, U.S. Military Academy, West Point, NY, United States

    Table of Contents

    Cover image

    Title page

    Copyright

    Contributors

    Foreword

    Chapter 1. Transparent interaction and human–robot collaboration for military operations

    Introduction

    Humans and robots in the military

    Autonomous robots and human–robot teamwork

    Human–robot teamwork

    Transparency

    Communication between humans and robots

    Human–robot communication in the future

    Conclusion

    Chapter 2. On the social perception of robots: measurement, moderation, and implications

    Previous research measuring social reactions to robots

    Goals in measuring social reactions to robots

    The Robot Social Attributes Scale

    Predictors and consequences of trait judgments of robots

    Future research

    Summary

    Chapter 3. Robotics to support aging in place

    Introduction

    Aging in place

    Emergence of robotic technology to support aging in place

    Robots and ADLs

    Robots and IADLs

    Robots and EADLs

    Older adults' acceptance and adoption of robots to support aging in place

    Ethical considerations for robots to assist with aging in place

    Closing

    Chapter 4. Kill switch: The evolution of road rage in an increasingly AI car culture

    Automated driving technologies and car design in the near future

    Kill switch

    Road rage

    New road rage: home away from home

    Killer app

    Chapter 5. Development and current state of robotic surgery

    Introduction

    The hand of the surgeon

    Centuries of surgical innovation

    Emergence of surgical robots

    From the lab to the operating room—ZEUS versus da Vinci

    Growth, ongoing development, and current issues in robotic surgery

    On the horizon

    Chapter 6. Regulating safety-critical autonomous systems: past, present, and future perspectives

    Introduction

    The systems engineering V-model

    The FAA approach to regulation of new technologies

    The FDA approach to regulation of new medical devices

    The NHTSA approach to regulation of new technologies

    Lessons learned across aviation, automotive, and medical device regulation

    Point of first contact

    Automation surprises

    Regulating technologies vis-à-vis equivalence

    Conclusion

    Chapter 7. The role of consumer robots in our everyday lives

    What are companion robots?

    Human–robot interaction

    Applications of companion robots

    Elderly adults

    Perceptions of robots and conclusion

    Chapter 8. Principles of evacuation robots

    Introduction

    Five principles of evacuation robotics

    Toward evacuation robotics in practice

    Testing and evaluation of evacuation robots

    Conclusions

    Chapter 9. Humans interacting with intelligent machines: at the crossroads of symbiotic teamwork

    Introduction

    The basic level

    Principles of being—extraordinary human–robotic interaction

    History and approaches to human–robotic interaction

    Design and development of EHRI

    Beyond the horizon—potential applications of extraordinary human–robotic interaction

    Index

    Copyright

    Academic Press is an imprint of Elsevier

    125 London Wall, London EC2Y 5AS, United Kingdom

    525 B Street, Suite 1650, San Diego, CA 92101, United States

    50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

    The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

    Copyright © 2020 Elsevier Inc. All rights reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

    This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

    Notices

    Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

    Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

    To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

    Library of Congress Cataloging-in-Publication Data

    A catalog record for this book is available from the Library of Congress

    British Library Cataloguing-in-Publication Data

    A catalogue record for this book is available from the British Library

    ISBN: 978-0-12-815367-3

    For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

    Publisher: Nikki Levy

    Acquisition Editor: Anita Koch

    Editorial Project Manager: Lindsay Lawrence

    Production Project Manager: Paul Prasad Chandramohan

    Cover Designer: Matthew Limbert

    Typeset by TNQ Technologies

    Contributors

    Jenay M. Beer,     Institute of Gerontology, University of Georgia, Athens, GA, United States

    David Britton,     Department of Mechanical Engineering and Materials Science and the Law School, Duke University, Durham, NC, USA

    Julie Carpenter,     Ethics + Emerging Sciences Group, California Polytechnic State University, San Luis Obispo, CA, United States

    Jessie Y.C. Chen,     U.S. Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, MD, United States

    M.L. Cummings,     Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA

    Shan G. Lakhmani,     U.S. Army Research Laboratory, Human Research and Engineering Directorate, Orlando, FL, United States

    Heather C. Lum,     Embry-Riddle Aeronautical University, Daytona Beach, FL, United States

    Michael D. McNeese,     College of Information Sciences and Technology, Pennsylvania State University, University Park, PA, United States

    Nathaniel J. McNeese,     Clemson University, Clemson, SC, United States

    George Mois,     School of Social Work, University of Georgia, Athens, GA, United States

    Rana Pullatt,     Department of Surgery, Medical University of South Carolina, Charleston, SC, USA

    Steven J. Stroessner,     Barnard College, Columbia University, New York, NY, United States

    Alan R. Wagner,     Department of Aerospace Engineering and Rock Ethics Institute, Pennsylvania State University, University Park, PA, United States

    Benjamin L. White,     General Surgery, Medical University of South Carolina, Charleston, SC, USA

    Julia L. Wright,     U.S. Army Research Laboratory, Human Research and Engineering Directorate, Orlando, FL, United States

    Foreword

    Maggie Jackson

    A decade ago, I held hands with a robot. Developed at MIT Media Lab as a prototype domestic servant, Domo was legless and fused to a table but could speak, track faces, and gently grasp objects such as cups and plates (Jackson, 2018). On the day that I visited the Lab, I tried to touch Domo to see how it would react, and it promptly reached out its steely fingers and grasped my hand. I was enchanted.

    Now our robots are no longer rare creatures caged in laboratories. In 2018, global sales of service robots rose nearly 60 percent from the previous year, to 16.6 million robots worth $12.9 billion (International Federation of Robotics, 2019). Bolstered by advances in AI and programmed to respond to us with emotion, they are increasingly becoming our teammates, tutors, and companions. And yet for all their rising complexity, it still takes very little on their part to win us over. If a robot cheats while playing a game with a human, it need only put a finger to its lips—offering a conspiratorial shhh!—to persuade the human not to report its transgression (Scassellati, 2018). Savvy technologists coo like children at the petting zoo when playing with social robots at electronics shows (Calo et al., 2011, p. 22). Soldiers mourn when their bomb-detection robot, which resembles little more than a souped-up toy truck, is destroyed (Hall, 2017).

    It almost does not matter what robots look like: we are willing to hug and touch them, talk to them, and befriend them. We quickly forget that a robot is, in the words of researcher Gill Pratt, like a hollow doll, with smarts that cannot match ours and no capacity to return our love or care (Metz, 2018, p. B3). Humanity has been longing since Biblical times to create autonomous creatures in its own image. Now the Grand Experiment has begun. Who will profit, who will benefit, and who may get hurt?

    Such questions carry a real urgency, as some of the most vulnerable members of human society are at the front lines of efforts to make robots a part of everyday life. As William Gibson noted, "the future is already here. It's just not very evenly distributed" (Gibson, 2018). Robots now comfort sick children in hospitals; tutor children with autism in social skills; and serve as companions, assistants, and therapy pets to older people (Jeong et al., 2015; Scassellati et al., 2018; Pedersen, Reid & Aspevig, 2018). The rapid aging of the world's population, in fact, is a main driver of the rise of the service robot industry (Pedersen et al., 2018). (I can imagine a time not far off when self-driving cars and drones are marketed as Grandma's friendly helpers.) Yet delegating some of the most intricate and challenging forms of human care to autonomous devices may wind up threatening the dignity and freedom of the very people that society is trying to help.

    Consider the case of Paro, the robot baby seal used in eldercare facilities since 2003, often with those who have cognitive impairments such as dementia (Turner, Personal Communication, Nov. 14, 2018). Although more robust research remains to be done, studies show that the furry creatures can lower stress, offer tactile stimulation, and stimulate patients' involvement with their environment (Mordoch et al., 2013). In one study in an Australian facility for the aged, residents with dementia reacted most strongly to Paro; "their eyes sparkle," one recreational therapist reported (Birks et al., 2016, p. 3). But is it a fair yardstick of a robot's value to society if its success is measured by the reactions of those least able to choose how and when to use it? It may be both a victory and a defeat for humanity if a robot wins over those most easily deceived by the fiction of its care. Paro is marketed as a nonpharmacological intervention for depression, anxiety, and symptoms of dementia, yet it is classified by the FDA as a medical device, a point of confusion that further underscores how its effects and our intentions are as yet far from clear (Turner, Personal Communication, Nov. 14, 2018).

    Before we can understand who might benefit from robots, we must clarify what we want from these mechanical creatures, now and in the future. Only recently have older people begun to be consulted in the design of robots intended for their use, a lapse that echoes the insufficient attention historically paid to technology users. And early findings reveal numerous disconnects between what many older people want and what robots are designed to offer. Senior citizens are well aware that robotic companions are mostly built for those who are mentally frail, physically weak, and lonely, stereotypes of aging that many older people belie and reject. In one recent US focus group study, most participants said they were willing to open their homes to a robot, but wanted one that might help augment their social lives, not position itself as their intimate friend (Lazar et al., 2016).

    Consumers and some roboticists further wonder whether making robots with humanlike charm may make it easier for people to evade responsibility for one another. A majority of Americans say that they would not use a robot caregiver for themselves or a family member, and nearly 65% expect such devices to increase feelings of isolation in the elderly (Smith & Anderson, 2017, p. 4). When an older person who is sad or in pain smiles at a robot or eagerly anticipates its visit, it might be easier for a relative or friend to evade the difficult act of consoling them. "I won't be visiting mum … on Thursday, could you please take [the robot] up to her?" a resident's daughter told a therapist at a facility using robotic companions (Birks et al., 2016, p. 4).

    The task of aiding the vulnerable in any society is deeply complex, but we must take care that in deputizing robots as our partners in this work, we do not wind up diminishing ourselves as humans. Designing devices that promote human flourishing, rather than simply remedying our assumed deficiencies, should be our aim. That might mean, for instance, creating robots that quiet when humans are interacting with one another, thereby ceding their charms to the emotional nourishing that we need most.

    "To understand something fully we need not only proximity but also distance," the philosopher Walter Ong once wrote (Ong, 1982, 2002, p. 81). He was referring to the impact of writing on culture, yet his words can inspire us as we prepare to interact with robots each day. It is alluring to draw close to these creatures, yet we must guard our distance in order to gain perspective on them.

    We can do so firstly by remembering that technology's effects on life are a mix of augmentation and subtraction, of tensions, trade-offs, and unintended consequences. That is why it is crucial to keep looking beyond moments of easy enchantment to the wider issues raised by our relations with these machines: the unspoken values embedded in their design; their long-term effects on our notions of good care; the digital divides that may surface over time (forty-two percent of Americans think robot caregivers will only be used by those who cannot afford human help) (Smith & Anderson, 2017, p. 4). Going forward, we can heed a lesson long taught by technology: turning a device on marks only the beginning of its reach.

    Second, we can wisely integrate robots into society only by clearly recognizing the lines that still divide our species from our devices. In this realm, transparency is key, as some leading roboticists now argue (Scheutz, 2011). The routine practice in the field of calling a robot with a low battery "in pain" or referring to an inventor as a robot's "caregiver" (Lim, 2017) furthers the fallacy that such devices are human, a deception that can only muddy our efforts to discover the true limits and powers of both technology and humanity itself. "On the Internet, nobody knows you're a dog," we once joked, celebrating the masquerade-ball flavor of the virtual. Yet as we have learned online, knowing who or what we are dealing with is crucial for fostering human autonomy in relationships and in thought.

    In the future, we may be enchanted each and every day by a robot, as I was once long ago. But let us endeavor not to get carried away. This book, with its deep and varied perspectives on living with some of humanity's most astonishing inventions, can help us answer one of the most crucial dilemmas confronting us today: when to let a robot take us by the hand, and when to let it go.

    References

    Birks M, et al. Robotic seals as therapeutic tools in an aged care facility: A qualitative study. Journal of Aging Research. 2016;2016: Article ID 8569602, 7 pages. doi: 10.1155/2016/8569602.

    Calo C, et al. Ethical implications of using the Paro robot with a focus on dementia patient care. In: Proceedings of the 12th AAAI conference on human-robot interaction in elder care. 2011:22.

    Gibson W.  The Future is Already Here – it’s just not Very Evenly Distributed: Fluctuating Proximities and Clusters . 2018.

    Hall L. How we feel about robots that feel.  MIT Technology Review . 2017;120(6):75–78.

    International Federation of Robotics.  World Robotics 2019 Service Robots Report . 2019.

    Jackson M.  Distracted: Reclaiming our focus in a world of lost attention . 2nd ed. Amherst, NY: Prometheus Books; 2018 185–188, 212–213.

    Jeong S, et al. A social robot to mitigate stress, anxiety, and pain in hospital pediatric care. In: Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction extended abstracts. March 2–5, 2015:103–104. doi: 10.1145/2701973.2702028.

    Lazar A, et al. Rethinking the design of robotic pets for older adults. In:  Proceedings of the 2016 ACM conference on designing interactive systems . June 4–8, 2016:1034–1046. doi: 10.1145/2901790.2901811.

    Lim A. Can you teach a Robot to Love? UC Berkeley’s Greater Good Magazine. 2017. https://ggsc.berkeley.edu/?_ga=2.36671584.1716233812.1574449319-298686721.1571768229.

    Metz C. Robots are improving quickly, but they can still be dumb.  The New York Times . October 1, 2018:B3.

    Mordoch E, et al. Use of social commitment robots in the care of elderly people with dementia: A literature review.  Maturitas . 2013;74(1):14–20. doi: 10.1016/j.maturitas.2012.10.015.

    Ong W. Writing restructures consciousness. In:  Orality and literacy . New York: Routledge; 1982, 2002:81.

    Pedersen I, Reid, Aspevig. Developing social robots for aging populations: A literature review of recent academic sources. Sociology Compass. 2018;12:e12585, 1–10. doi: 10.1111/soc4.12585.

    Scassellati, B. (November 13, 2018) Personal communication.

    Scassellati B, et al. Improving social skills in children with ASD using a long-term, in-home social robot.  Science Robotics . 2018;3(21):eaat7544. doi: 10.1126/scirobotics.aat7544.

    Scheutz M. The inherent dangers of unidirectional emotional bonds between humans and social robots. In: Lin P, Bekey G, Abney K, eds. Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press; 2011:205–222.

    Smith A, Anderson M.  Automation in everyday life . New York: Pew Research Center; October 4, 2017.

    Turner, T. General Manager, PARO Robots US. Personal Communication, Nov. 14, 2018. Note: There are currently 300 Paro robots in use in the United States and about 5000 in the world.

    Further reading

    Gladstone, B. (Host). (October 22, 2018). The science in science fiction [radio program]. In Carline Watson (Executive Producer), Talk of the nation. Washington, D.C.: NPR.

    Chapter 1

    Transparent interaction and human–robot collaboration for military operations

    Shan G. Lakhmani¹, Julia L. Wright¹, and Jessie Y.C. Chen²     ¹U.S. Army Research Laboratory, Human Research and Engineering Directorate, Orlando, FL, United States     ²U.S. Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, MD, United States

    Abstract

    The future of robotics, as envisioned by the US military, is made up of humans teamed with autonomous, intelligent robots. A shift in human–robot interaction (HRI), from the teleoperation of robotic systems to a more teamwork-oriented interaction, concurrently changes the informational requirements of the actors in the interaction. Consequently, military research into robotics is exploring what information fulfills those changed requirements, how robots can convey that information to human teammates, and what information robots need to acquire from those human teammates. In this chapter, we discuss how the conception of a military robot is anticipated to change, how that change influences the interaction between humans and robots, and some of the different lines of research being done to support future HRI.

    Keywords

    Communication; Human–agent teaming; Human–robot interaction; Military; Transparency

    Introduction

    Humans and robots in the military

    Why robots?

    Teleoperation

    Supervisory control

    Mixed-initiative systems

    Autonomous systems

    Autonomous robots and human–robot teamwork

    The importance of human–autonomy interaction

    Human–autonomy interaction and allocation of responsibilities

    Working with a robot

    Human–robot teamwork

    Transparency

    Definition/history

    The loop—what it is and why we want to be in it

    Communication between humans and robots

    Communication modality

    Communication patterns

    Human–robot communication in the future

    Conclusion

    References

    Introduction

    The military's idea of what a robot is will change over the next decade. We are at a point of transition, where the future of military robotics lies in the pursuit of autonomous teammates rather than teleoperated tools. This shift in vision, however, changes the way soldiers and robots will interact. Rather than a human soldier completing many tasks—some in person, some via robot—to attain a goal, the soldier and robot can share the taskload and fulfill the goal together. This more collaborative approach, however, introduces the robot as an independent entity that can act autonomously. Autonomy introduces a level of uncertainty that would not occur with a teleoperated robot. So, as we transition from teleoperated robots to autonomous, intelligent robots, we have to establish the informational needs of this future human–robot relationship.

    The relationship between humans and robots is distinctive: robots can be independent actors, but only in the ways they have been designed to act independently. While robots are not exactly human, human teamwork is an appropriate metaphor for human–robot interaction (HRI) (Morrow & Fiore, 2012). To accomplish a goal, members of a human team engage in two tracks of behavior. The first track is taskwork, the specific, work-related activities needed to accomplish the team's goals (Salas, Shuffler, Thayer, Bedwell, & Lazzara, 2015). The second track is teamwork, which includes coordination, sharing knowledge, and all the other actions needed for interdependent operation (Burke, Salas, Wilson-Donnelly, & Priest, 2004). Robots are already being designed to do taskwork. However, with the expected trend toward greater autonomous capabilities, robots must provide an analog to the teamwork behaviors that human team members perform. Team behaviors, such as communication and coordination, can be simulated by robots to support important facets of HRI, such as mutual predictability and shared knowledge (Demir et al., 2015; Sycara & Sukthankar, 2006).

    Transparent HRI can support mutual predictability and shared knowledge. Transparency has been described as an emergent property of the HRI process whereby the human operator has a clear and accurate understanding of how the robot gathers information, processes that information, and makes decisions (Ososky, Sanders, Jentsch, Hancock, & Chen, 2014; Phillips, Ososky, Grove, & Jentsch, 2011). Robot designers can facilitate transparent interactions by implementing elements in the interface that support understanding of the robot's decision-making process (Boyce, Chen, Selkowitz, & Lakhmani, 2015; Chen et al., 2014; Stowers et al., 2016).
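    To make the idea of a transparent interface concrete, the sketch below shows one way a robot's interface might package the three elements named above: what the robot sensed, how it processed that information, and what it decided. This is a minimal, hypothetical Python sketch; the class name, fields, and example values are illustrative assumptions, not an interface specified in this chapter.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransparencyReport:
    """Illustrative structure for what a transparent robot could expose to its operator.

    The fields loosely mirror the three elements named in the text: how the robot
    gathers information, how it processes that information, and how it reaches a
    decision. All names here are hypothetical, not from the chapter.
    """
    perceptions: List[str] = field(default_factory=list)  # what the robot sensed
    reasoning: List[str] = field(default_factory=list)     # how it interpreted those inputs
    decision: str = ""                                      # what it chose to do
    confidence: float = 0.0                                 # how certain it is (0..1)

    def render(self) -> str:
        """Format the report as operator-facing text for an interface panel."""
        lines = [
            "Perceived: " + "; ".join(self.perceptions),
            "Because: " + "; ".join(self.reasoning),
            f"Decision: {self.decision} (confidence {self.confidence:.0%})",
        ]
        return "\n".join(lines)

# Example: a reconnaissance robot explaining a route change to its human teammate.
report = TransparencyReport(
    perceptions=["obstacle detected on planned route", "alternate route clear"],
    reasoning=["planned route blocked", "alternate route adds 4 min but avoids obstacle"],
    decision="reroute via alternate path",
    confidence=0.82,
)
print(report.render())
```

    An interface panel rendering such a report would give the human teammate a window into the robot's decision-making process without requiring them to infer it from behavior alone.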

    In this chapter, we explore the trend in military robotics from teleoperation to autonomy and how that change influences both the human–robot relationship and the informational needs of a human–robot team. Furthermore, we discuss the flow of information between humans and robots, the patterns that flow can follow, and the communication styles it can take. By grappling with these questions now, we prepare ourselves for a future where autonomous robots are more ubiquitous.

    Humans and robots in the military

    Why robots?

    Robots in the military serve many of the same purposes as they do in the private sector: they go where soldiers cannot, do things that soldiers cannot, and increase the soldiers' scope of influence. First and foremost, military robots keep both soldiers and civilians safe. Military robots replace soldiers in a variety of situations: clearing buildings, search and rescue in disaster areas and battlefields, detonation and disposal of explosives, reconnaissance and surveillance, etc. (Chen & Barnes, 2014; Murphy & Burke, 2010). They also augment soldier capabilities, such as gathering data to support soldiers' situation awareness, transporting soldier equipment, distributing supplies to soldiers in the most forward resupply positions, facilitating commanders' decision-making (collecting, organizing, and prioritizing data), and otherwise keeping soldiers safe by providing greater stand-off distance from the enemy for maneuvers and convoys (US Army, 2017). The development and advancement of autonomous robots is a key factor in the Department of Defense's Third Offset Strategy, which seeks to achieve and maintain a technological advantage over the United States' top adversaries (Eaglen, 2016). Currently, most robots fielded by the military are teleoperated.

    Teleoperation

    Teleoperation occurs when a human (i.e., a teleoperator) uses a mechanical or robotic apparatus to manipulate items or sense objects at a location other than the one where the human is located (Sheridan, 1995). It is important for even semiautonomous and fully autonomous robots to have a teleoperation mode, for those instances where their programming is insufficient to meet the environmental or task challenges at hand and a human operator is needed for mission success. However, teleoperation presents unique challenges in supporting the human operators' situation awareness (Chen, Haas, & Barnes, 2007). Operators experience issues related to cognitive tunneling, decreased field of vision, degraded sense of spatial orientation, attention switching, and motion sickness. Many of these issues can be addressed by supporting the operator's sense of presence, which can be increased through a variety of methods, such as multiple or operator-controlled views and multimodal feedback (Chen et al., 2007).

    Supervisory control

    Soon, it is expected that military applications of robots and unmanned systems will increase, and as such, humans will find themselves supervising increasingly large numbers of robotic assets. When an operator manages multiple robots by interacting with them individually, multiple performance decrements occur. As the number of robots being supervised increases, the operators' workload increases, their situation awareness decreases, their response times increase, the number of tasks that can be successfully completed within a designated time interval decreases, and the number of system failures and accidents increases (Adams, 2009; Chen & Barnes, 2012; Chen, Durlach, Sloan, & Bowens, 2008; Squire & Parasuraman, 2010; Wang, Jamieson, & Hollands, 2009; Wang, Lewis, Velagapudi, Scerri, & Sycara, 2009). As a result, the current state of the art is a many-to-one supervision model, i.e., multiple humans are required to oversee a single robotic asset. As the complexity of the operating environment and/or the robots' task increases, so does the number of human supervisors required for operation (Murphy & Burke, 2010). However, this growth in human team size also creates issues for those supervisors—e.g., physical exposure to dangerous environments, distractions, and backseat driving—so a move from a many-to-one model to a one-to-many model is desired. Development of systems, such as mixed-initiative systems, that can assist human operators in overseeing teams of robots is the first step toward achieving this goal.
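    The performance costs of asking one operator to supervise more robots can be illustrated with a toy queueing sketch, shown below: if each robot occasionally needs operator input and the operator can service only one request at a time, the time a request waits for attention grows quickly as robots are added. The rates, service time, and function name are illustrative assumptions for the sketch, not figures from the studies cited above.

```python
import random

def simulated_mean_wait(num_robots: int,
                        request_rate: float = 0.05,  # requests per robot per time step (assumed)
                        service_time: int = 5,       # steps the operator needs per request (assumed)
                        steps: int = 10_000,
                        seed: int = 0) -> float:
    """Toy single-operator queue: robots raise requests at random times; the operator
    handles them one at a time. Returns the mean wait before service begins."""
    rng = random.Random(seed)
    queue = []        # arrival times of pending requests
    busy_until = 0    # time step at which the operator becomes free
    waits = []
    for t in range(steps):
        # Each supervised robot may raise a request this step.
        for _ in range(num_robots):
            if rng.random() < request_rate:
                queue.append(t)
        # If the operator is free, serve the oldest pending request.
        if t >= busy_until and queue:
            arrived = queue.pop(0)
            waits.append(t - arrived)
            busy_until = t + service_time
    return sum(waits) / len(waits) if waits else 0.0

for n in (1, 2, 4, 8):
    print(f"{n} robots -> mean wait {simulated_mean_wait(n):.1f} steps")
```

    Running the sketch with 1, 2, 4, and 8 robots shows waits climbing from near zero toward unbounded growth once requests arrive faster than the single operator can clear them, which is one intuition behind pursuing mixed-initiative support rather than simply adding more human supervisors.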

    Mixed-initiative systems

    Mixed-initiative systems incorporate elements of both adaptive automation—where the level of automation is changeable by the system (Parasuraman, Sheridan, & Wickens, 2000)—and adjustable automation—where the level of automation is changeable by an external operator or system (Bradshaw et al., 2003). Systems with these capabilities allow for mixed-initiative interactions between humans and robots, allowing them to work in concert, each with authority
