Much has been written about levels of automation (LOA), but comparatively little has been written about levels of digitization (LODi) and levels of digitalization (LODa). Digitization is the digital representation of analog information and is typical of migrations from analog to digital control systems; digitalization involves enhancing the functionality of digital information; and automation shifts control from humans to machines. Each of these technology implementations has its own scale, and each forms a viable type of functionality that should be considered not as a continuum toward automation but rather as a separate category of solutions that meets the needs of advanced reactors. In this paper we develop separate LODi, LODa, and LOA scales and demonstrate, using the example of computer-based procedures, how conflating these technologies can lead to confusion in the design process. In the race to develop advanced reactors, the surest metric of success and safety is proper consideration of the right technology requirements for different control systems.
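The separation argued for here can be pictured as three independent ordinal scales rather than points on a single automation continuum. A minimal sketch follows; the level names are hypothetical illustrations, not the scales defined in the paper:

```python
from enum import IntEnum

# Three separate ordinal scales: a design is rated on each independently.
class LODi(IntEnum):   # levels of digitization (illustrative names)
    ANALOG = 0
    DIGITIZED_SIGNALS = 1
    DIGITAL_CONTROL = 2

class LODa(IntEnum):   # levels of digitalization (illustrative names)
    STATIC_DISPLAY = 0
    CONTEXT_LINKED = 1
    COMPUTED_GUIDANCE = 2

class LOA(IntEnum):    # levels of automation (illustrative names)
    MANUAL = 0
    SHARED_CONTROL = 1
    SUPERVISORY = 2

# A hypothetical computer-based procedure system: highly digitized and
# digitalized, yet still manually executed by the operator.
cbp = {"LODi": LODi.DIGITAL_CONTROL,
       "LODa": LODa.COMPUTED_GUIDANCE,
       "LOA": LOA.MANUAL}

# High digitization/digitalization need not imply high automation.
assert cbp["LOA"] == LOA.MANUAL
```

The point of keeping three types, rather than one combined level, is exactly the paper's argument: a design can sit high on one scale and low on another, so a single "level" conflates distinct requirements.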
The growth of sophistication in machine capabilities must go hand in hand with growth of sophistication in human–machine interaction capabilities. To continue advancing as we build today’s intelligent machines, designers need formative tools for creating sociotechnical systems. In this article, we briefly assess the appropriateness of “levels of automation” as a tool for designing human–machine systems. We then present coactive design and interdependence analysis as a viable alternative tool for moving forward into more advanced and sophisticated human–machine systems.
Skraaning and Jamieson (2023) remind us that automation failures rarely result from failures of individual components of the automated system itself but rather from unanticipated interactions within the larger system of which the automation is a part. Their preliminary taxonomy of automation failure mechanisms offers an important contribution to the literature. This commentary further expands the range of systemic automation failure mechanisms that researchers, designers, and evaluators need to be aware of and guard against, whether through analysis, design, or evaluation. The examples come from the Fukushima nuclear accident and automated vehicle accidents, as well as from new failure mechanisms that arise with emerging machine learning (ML) technologies.
This paper extends previous research on two approaches to human-centred automation: (1) intermediate levels of automation (LOAs) for maintaining operator involvement in complex systems control and facilitating situation awareness (SA); and (2) adaptive automation (AA) for managing operator workload through dynamic control allocations between the human and machine over time. Some empirical research has been conducted to examine LOA and AA independently, with the objective of detailing a theory of human-centred automation. Unfortunately, no previous work has studied the interaction of these two approaches, nor has any research attempted to systematically determine which LOAs should be used in adaptive systems and how certain types of dynamic function allocations should be scheduled over time. The present research briefly reviews the theory of human-centred automation and the LOA and AA approaches. Building on this background, an initial study is presented that addresses the conjunction of these two approaches to human-centred automation. An experiment was conducted in which a dual-task scenario was used to assess the performance, SA, and workload effects of low, intermediate, and high LOAs, which were dynamically allocated (as part of an AA strategy) during manual system control for various cycle times comprising 20, 40, and 60% of task time. The LOA and automation allocation cycle time (AACT) combinations were compared to completely manual control and fully automated control of a dynamic control task performed in conjunction with an embedded secondary monitoring task. Results revealed LOA to be the driving factor in determining primary task performance and SA. Low-level automation produced superior performance, and intermediate LOAs facilitated higher SA, but this was not associated with improved performance or reduced workload. The AACT was the driving factor in perceptions of primary task workload and secondary task performance.
When a greater percentage of primary task time was automated, operator perceptual resources were freed up and monitoring performance on the secondary task improved. Longer automation cycle times than have previously been studied may have benefits for overall human–machine system performance. The combined effect of LOA and AA on all measures did not appear to be 'additive' in nature; that is, the LOA producing the best performance (low-level automation) did not do so at the AACT that produced superior performance (maximum cycle time). In general, the results support intermediate LOAs and AA as approaches to human-centred automation, but each appears to provide different benefits to human–machine system performance. This work provides additional information for a developing theory of human-centred automation.
Objective
This article is a response to Wickens et al.’s (2019) critique of Jamieson and Skraaning (2019).
Background
Wickens et al. (2019) offer a five-point critique of Jamieson and Skraaning (2019) that they claim tempers the strength of our conclusions.
Approach
We first correct a misrepresentation in the critique and then respond to each of the criticisms.
Results
We preserve the strength of our skeptical conclusions about the applicability of the lumberjack model to complex work settings.
Applications
We continue to caution system designers about the lack of evidence supporting the lumberjack model in the context of complex work systems.
This paper responds to Kaber’s reflections on the empirical grounding and design utility of the levels-of-automation (LOA) framework. We discuss the suitability of the existing human performance data for supporting design decisions in complex work environments. We question why human factors design guidance seems wedded to a model of questionable predictive value. We challenge the belief that LOA frameworks offer useful input to the design and operation of highly automated systems. Finally, we seek to expand the design space for human–automation interaction beyond the familiar human factors constructs. Taken together, our positions paint LOA frameworks as abstractions suffering a crisis of confidence that Kaber’s remedies cannot restore.
• The level of automation is one of the central issues in designing control systems.
• Occupant attitudes towards different levels of automation were studied.
• The interviews revealed a large amount of mistrust towards automation.
• System characteristics that may improve trust are listed.
• Full automation is not suitable for the control of the indoor environment.
The level of automation is one of the central issues in designing control systems. Occupant attitudes towards different levels of automation in domestic control systems were studied using a qualitative interview method. The following systems were considered: (1) control of indoor thermal environment, (2) peak load management, and (3) own energy production. For each system, four solutions representing different levels of automation were created. The interviewees gave comments on the solutions and chose the alternatives they preferred. The results show that decisions on the level of automation should be made carefully, taking account of the special qualities of each system without neglecting the individual differences between users. Full automation is not suitable for systems that considerably affect indoor environmental comfort. The interviews revealed a large amount of mistrust towards automation. An important question is how to improve the level of trust between the occupants and automation, i.e. how to make the occupants trust the automation in cases where better results would be gained through the utilisation of automation. The following system characteristics may potentially improve the level of trust: (1) carefully chosen level of automation, (2) predictability, transparency and feedback, (3) simplicity and usability and (4) suitability for everyday life.
The concept of different levels of automation (LOAs) has been pervasive in the automation literature since its introduction by Sheridan and Verplanck. LOA taxonomies have been very useful in guiding understanding of how automation affects human cognition and performance, with several practical and theoretical benefits. Over the past several decades a wide body of research has been conducted on the impact of various LOAs on human performance, workload, and situation awareness (SA). LOA has a significant effect on operator SA and level of engagement that helps to ameliorate out-of-the-loop performance problems. Together with other aspects of system design, including adaptive automation, granularity of control, and automation interface design, LOA is a fundamental design characteristic that determines the ability of operators to provide effective oversight and interaction with system autonomy. LOA research provides a solid foundation for guiding the creation of effective human–automation interaction, which is critical for the wide range of autonomous and semiautonomous systems currently being developed across many industries.
Around 90% of road accidents stem from human error. Disruptive technology, especially automated vehicles (AVs), can respond to these problems by, for instance, eradicating human error in driving, increasing energy efficiency through the platoon effect, and potentially giving more space to human activities by decreasing parking space. With the introduction of the autonomous vehicle, public attitudes towards its adoption need to be understood in order to develop appropriate strategies and policies that leverage the potential benefits. There is a lack of a systematic and comprehensive literature review on adoption attitudes toward AVs that considers various interlinked factors such as road traffic environment changes, the AV transition, and policy impacts. This study aims to synthesize past research regarding public acceptance attitudes toward AVs. More specifically, the study investigates driverless technology and uncertainty, road traffic environment changes, policy impact, and findings from AV adoption modelling approaches to understand public attitudes towards AVs. The study points out critical problems and future directions for the analysis of AV impacts, such as uncertainty in AV adoption experiments, policy implementation and action plans, uncertainty around AV-related infrastructure, and demand modelling.