Objective
Identify a critical research gap for the human factors community that has implications for successful human–automation teaming.
Background
There are a variety of approaches for applying automation in systems. Flexible application of automation such that its level and/or type changes during system operations has been shown to enhance human–automation system performance.
Method
This mini-review describes flexible automation, in which the level of automated support varies across tasks during system operation rather than remaining fixed. Two types are distinguished by the locus of authority for changing the automation level. In adaptable automation, the human operator assigns how automation is applied; this approach has been found to aid the operator's situation awareness and provide a greater sense of control. In adaptive automation, the system assigns the automation level, automatically adjusting it in response to changes in one or more states of the human, task, environment, and so on; this approach may impose lower workload and attentional demands.
Results
In contrast to vast investments in adaptive automation approaches, limited research has been devoted to adaptable automation. Experiments directly comparing adaptable and adaptive automation are particularly scant. These few studies show that adaptable automation was not only preferred over adaptive automation, but it also resulted in improved task performance and, notably, less perceived workload.
Conclusion
Systematic research examining adaptable automation is overdue, including hybrid approaches with adaptive automation. Specific recommendations for further research are provided.
Application
Adaptable automation together with effective human-factored interface designs to establish working agreements are key to enabling human–automation teaming in future complex systems.
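The locus-of-authority distinction drawn in the abstract above can be pictured in code: in adaptable automation the operator's request determines the level of support, whereas in adaptive automation the system infers a level from a measured operator or task state. A minimal Python sketch, in which the level names, the workload metric, and the thresholds are all illustrative assumptions rather than part of the reviewed framework:

```python
from enum import IntEnum

class LOA(IntEnum):
    """Illustrative levels of automation (hypothetical 4-point scale)."""
    MANUAL = 1
    DECISION_SUPPORT = 2
    SUPERVISED = 3
    FULL = 4

def adaptable_loa(operator_request: LOA) -> LOA:
    """Adaptable automation: the human operator holds the authority,
    so the system simply honors the requested level."""
    return operator_request

def adaptive_loa(workload: float, low: float = 0.3, high: float = 0.7) -> LOA:
    """Adaptive automation: the system holds the authority and adjusts
    the level from an estimated operator state (here, a normalized
    workload score in [0, 1]; thresholds are illustrative)."""
    if workload >= high:
        return LOA.FULL          # offload an overloaded operator
    if workload >= low:
        return LOA.SUPERVISED
    return LOA.DECISION_SUPPORT  # keep an underloaded operator engaged
```

A hybrid approach of the kind the review calls for could combine the two functions, e.g., letting `adaptive_loa` propose a level that the operator may override via `adaptable_loa`.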
The current cognitive engineering literature includes a broad range of models of human–automation interaction (HAI) in complex systems. Some of these models characterize types and levels of automation (LOAs) and relate different LOAs to implications for human performance, workload, and situation awareness as bases for systems design. However, some have suggested that the LOAs approach has overlooked key issues that need to be considered during the design process. Others are simply unsatisfied with the current state of the art in modeling HAI. In this paper, I argue that abandoning an existing framework with some utility for design makes little sense unless the cognitive engineering community can provide the broader design community with other sound alternatives. On this basis, I summarize issues with existing definitions of LOAs, including (a) presumptions of human behavior with automation and (b) imprecision in defining behavioral constructs for assessment of automation. I propose steps for advances in LOA frameworks. I provide evidence of the need for precision in defining behavior in use of automation as well as a need for descriptive models of human performance with LOAs. I also provide a survey of other classes of HAI models, offering insights into ways to achieve descriptive formulations of taxonomies of LOAs to support conceptual and detailed systems design. The ultimate objective of this line of research is reliable models for predicting human and system performance to serve as a basis for design.
The recent proliferation of sensing and computing technologies has promoted the rapid development of automated driving. A series of automated driving systems have been released by manufacturers in recent years. These systems, at different automated driving levels, rely on various perception sensors and vehicle computing systems to detect the environment and process real-time data. The configuration of environmental sensing and vehicle computing at the system design stage for facilitating automated driving is attracting more and more attention. In this article, we investigate the state-of-the-art configuration design schemes of perception systems and computing systems in automated driving at different levels of automation, covering up to 30 automated driving systems or vehicles. A comprehensive statistical analysis follows, with emphasis on underlying design patterns and the cross-relationships among environmental sensing, vehicle computing, and automated driving levels. Furthermore, from a general perspective, we analyze and summarize the requirements for the perception system and the computing system in automated vehicles, considering typical features of each automated driving level. We expect that this work can serve as a reference for practical engineering on configuration design for automated driving systems and promote academic research on related theories and methodologies for system design.
Human–robot collaboration (HRC) is characterized by a spatiotemporal overlap between the workspaces of the human and the robot and has become a viable option in manufacturing and other industries. However, for companies considering employing HRC it remains unclear how best to configure such a setup, because empirical evidence on human factors requirements remains inconclusive. As robots execute movements at high levels of automation, they adapt their speed and movement path to situational demands. This study therefore experimentally investigated the effects of movement speed and path predictability of an industrial collaborating robot on the human operator. Participants completed tasks together with a robot in an industrial workplace simulated in virtual reality. A lower level of predictability was associated with a loss in task performance, while faster movements resulted in higher-rated values for task load and anxiety, indicating demands on the operator exceeding the optimum. Implications for productivity and safety and possible advancements in HRC workplaces are discussed.
Two experiments are reported that investigate to what extent performance consequences of automated aids are dependent on the distribution of functions between human and automation and on the experience an operator has with an aid. In the first experiment, performance consequences of three automated aids for the support of a supervisory control task were compared. Aids differed in degree of automation (DOA). Compared with a manual control condition, primary and secondary task performance improved and subjective workload decreased with automation support, with effects dependent on DOA. Performance costs include return-to-manual performance issues that emerged for the most highly automated aid and effects of complacency and automation bias, respectively, which emerged independent of DOA. The second experiment specifically addresses how automation bias develops over time and how this development is affected by prior experience with the system. Results show that automation failures entail stronger effects than positive experience (a reliably working aid). Furthermore, results suggest that commission errors in interaction with automated aids can depend on three sorts of automation bias effects: (a) withdrawal of attention in terms of incomplete cross-checking of information, (b) active discounting of contradictory system information, and (c) inattentive processing of contradictory information analogous to a "looking-but-not-seeing" effect.
It is important to encourage older adults to remain active when interacting with assistive robots. This study proposes a schematic model for integrating levels of automation (LOAs) and transparency (LoTs) in assistive robots to match the preferences and expectations of older adults. Metrics to evaluate LOA and LoT design combinations are defined. We develop two distinctive test cases to examine interaction design considerations for robots working for this population in everyday tasks: a person-following task with a mobile robot and a table-setting task with a robot manipulator. Evaluations in user studies with older adults reveal that LOA and LoT combinations influence interaction elements. Low LOA combined with high LoT encouraged activity engagement while providing adequate information regarding the robot's behavior. The variety of objective and subjective metrics is essential to provide a holistic framework for evaluating the interaction.
The implementation of automation in many domains has led to well-documented accidents and incidents, resulting from reduced situation awareness that occurs when operators are out-of-the-loop (OOTL), automation confusion, and automation interaction difficulties. Wickens coined the term lumberjack effect to summarize the finding that while automation works well most of the time in typical or normal situations, the performance problems that occur in novel or unexpected situations also increase the likelihood of catastrophic errors. Skraaning and Jamieson have criticized the lumberjack effect on the basis of a study in which they failed to find it. I show that this claim is unsupported due to a number of methodological limitations and conceptual errors in their study. They also provide a model of automation failure that fails to clearly delineate the many barriers to accidents that are available, instead emphasizing the ways in which automation can fail technically and different types of human error. An alternate automation failure model is presented that provides a broader socio-technical perspective emphasizing the design features, processes, capabilities, organizational policies, and training that support people in improving system safety when automation fails.
In this article I describe the origins of the stages and levels of automation concept and present the taxonomy, model, and theories underlying this concept. I then show how both simplifications and elaborations of the resulting tradeoff model of degree of automation can address some of Kaber’s concerns about its utility in design.
In the near future, vehicles will gradually gain more autonomous functionalities. Drivers' activity will be less about driving than about monitoring intelligent systems to which driving action will be delegated. Road safety, therefore, remains dependent on the human factor, and we should identify the limits beyond which the driver's functional state (DFS) may no longer be able to ensure safety. Depending on the level of automation, estimating the DFS may have different targets, e.g., assessing the driver's situation awareness and ability to respond to emerging hazards at lower levels of automation, or assessing the driver's ability to monitor the vehicle performing operational tasks at higher levels of automation. An unfit DFS (e.g., drowsiness) may impair the driver's ability to take over control. This paper reviews the psychophysiological indices most appropriate for naturalistic driving when assessing the DFS through exogenous sensors, providing the most efficient trade-off between reliability and intrusiveness. The DFS can also be estimated from kinematic data of the vehicle, which provide information that indirectly relates to driver behavior. All of these data should be processed synchronously to provide a diagnosis of the DFS and bring it to the attention of the decision maker in real time. The delivery of this information can be permanent or intermittent (or even withheld) and may also depend on the automation level. Such an interface can include recommendations for decision support or simply give neutral instructions. Mapping relevant psychophysiological and behavioral indicators of the DFS will enable practitioners and researchers to provide reliable estimates fitted to the level of automation.
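The pipeline described above (synchronized psychophysiological and kinematic inputs, a fused DFS diagnosis, and level-dependent delivery to the decision maker) can be sketched as follows. Every field name, weight, threshold, and the delivery policy here is a hypothetical assumption for illustration, not taken from the reviewed literature:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DFSSample:
    """One synchronized sample; fields and scales are illustrative."""
    eye_closure: float      # exogenous-sensor index, 0..1 (higher = drowsier)
    heart_rate_var: float   # normalized HRV, 0..1 (lower = more strain)
    lane_deviation: float   # vehicle kinematics: meters from lane center

def diagnose_dfs(sample: DFSSample, drowsy_thresh: float = 0.6) -> str:
    """Fuse psychophysiological indices with kinematic data into a
    coarse fit/unfit DFS diagnosis (weights and thresholds assumed)."""
    drowsiness = 0.7 * sample.eye_closure + 0.3 * (1.0 - sample.heart_rate_var)
    if drowsiness > drowsy_thresh or sample.lane_deviation > 0.5:
        return "unfit"
    return "fit"

def deliver(diagnosis: str, automation_level: int) -> Optional[str]:
    """Delivery policy depending on automation level: permanent at low
    levels, intermittent (only when unfit) at higher levels."""
    if automation_level <= 2:
        return f"DFS: {diagnosis}"
    return f"DFS: {diagnosis}" if diagnosis == "unfit" else None
```

The point of the sketch is the structure, not the numbers: real systems would replace the linear fusion with validated models and calibrate thresholds per driver.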
Objective
The aim was to evaluate the relevance of the critique offered by Jamieson and Skraaning (2019) regarding the applicability of the lumberjack effect of human–automation interaction to complex real-world settings.
Background
The lumberjack effect, based upon a meta-analysis, identifies the consequences of a higher degree of automation—to improve performance and reduce workload—when automation functions as intended, but to degrade performance more, as mediated by a loss of situation awareness (SA) when automation fails. Jamieson and Skraaning provide data from a process control scenario that they assert contradicts the effect.
Approach
We analyzed key aspects of their simulation, measures, and results, which we argue limit the strength of their conclusion that the lumberjack effect is not applicable to complex real-world systems.
Results
Our analysis revealed several limitations: an inappropriate choice of automation, the lack of a routine performance measure, subjective operator measures that actually supported the lumberjack effect, an inappropriate assessment of SA, and possibly limited statistical power.
Conclusion
We regard these limitations as reasons to temper the authors' strong conclusion that the lumberjack effect has no applicability to complex environments. Their findings should instead serve as an impetus for further research on human–automation interaction in these domains.
Applications
The collective findings of both Jamieson and Skraaning and our study are applicable to system designers and users in deciding upon the appropriate level of automation to deploy.