[The designers] had no intention of ignoring the human factor ... But the technological questions became so overwhelming that they commanded the most attention. (John Fuller)
There are a number of possible roles for human operators in an automated system. The operator may serve as a monitor for the automation, but effective monitoring is not always possible. For example, if the process being controlled requires a reaction speed that a human cannot match, human supervision is inappropriate. Humans also depend on the information provided by the human-computer interface, and that information about the process is indirect (filtered through the computer), which can make the system harder to monitor. If a failure is silent or masked in some way, a human monitor will not detect the problem. Lastly, a job that involves little active behavior leads to lower alertness and vigilance on the part of the operator: automation that performs well day after day breeds complacency, and humans are not physically or mentally suited to long stretches of monotonous vigilance.
Humans may also be used as a backup to automation, but this can lead to lower proficiency in the human operators. Skills that are not exercised tend to be forgotten. Additionally, humans lose confidence in skills they do not use, so operators may become hesitant to intervene even when they should take over from failing automation. There may also be an effect on the designers of the system: if the designers know that a human will back up the system, they may not take as many steps to make the system as robust as it should be.
Another option is to make the automation a partner to the human operator. The danger is that the operator will be left with a collection of miscellaneous tasks that didn't fit well with the technology choices of the automation. Typically, the human is left doing whatever is the hard part of the job, while the automation accomplishes the easy tasks. The problem is that the hard tasks may become even harder for the human worker than in an unautomated job. Some of the context of the problem may be taken away by the automation, leaving the human with fewer tools to make good decisions.
The simplest solution to human-machine interaction (HMI) design is to automate as much as possible, but this is not the best solution. There are conflicting design qualities that are desirable in a human-machine interface, and these must be weighed carefully. Norman claims that appropriate design should assume the existence of error, continually provide feedback, continually interact with operators in an effective manner, and allow for the worst possible situation. A figure showing an HMI design process is shown below.
Systems must be tailored to match human requirements rather than vice versa. Changing a system design is easy compared to changing the "design" of a human operator. Safe systems must be designed to withstand normal, expected human behavior. Operators are intelligent and inquisitive; they form mental models about how the system functions and will find opportunities to test those mental models. Interfaces must also be designed to combat lack of alertness. Interfaces must exhibit error tolerance. Operators should be able to monitor the results of their actions and recover from their own errors. Humans are generally good at detecting and correcting their own errors, but feedback paths and appropriate controls must be provided through the human-machine interface.
Once tasks have been allocated, steps can be taken to reduce the likelihood of human error. Safety enhancement should be easy, natural, and difficult to omit or do wrong. For example, stopping an unsafe action or leaving an unsafe state should take one keystroke. Dangerous actions should be difficult or impossible. For example, potentially dangerous actions should require two or more unique commands. The interface should include information necessary for making decisions.
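The guidelines above can be sketched in code. The following minimal Python control panel is a hypothetical illustration, not an implementation from the text: leaving an unsafe state takes a single action, while a dangerous command requires two distinct commands (a request and a separate confirmation). The class name and command strings are assumptions for the example.

```python
# Illustrative sketch: one-step abort, two-step dangerous commands.
# ControlPanel and the "vent" command are hypothetical names.

class ControlPanel:
    def __init__(self):
        self.state = "running"
        self._pending = None  # dangerous command awaiting confirmation

    def abort(self):
        # Leaving an unsafe state takes a single action, no confirmation.
        self.state = "safe-shutdown"
        self._pending = None
        return self.state

    def request(self, command):
        # A dangerous action needs two distinct steps: a request,
        # then an explicit confirmation naming the same command.
        self._pending = command
        return f"confirm {command}?"

    def confirm(self, command):
        if self._pending != command:
            return "no matching request"  # mismatched confirmation rejected
        self._pending = None
        self.state = command
        return self.state
```

The asymmetry is deliberate: the safe action (`abort`) is always one step, while the hazardous one cannot succeed by accident through a single slip.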
Each task that the operator performs should be analyzed to determine what information is needed to accomplish the task. This information should be provided by the human-computer interface. The automation should provide feedback about the operator's actions to help the operator detect human error. The system should also provide feedback about the state of the system to update operator mental models and help the operator detect system faults. The system should also provide for the failure of computer displays by providing alternative sources of information. It is vital that instrumentation provided to deal with a malfunction not be disabled by that same malfunction.
In addition to providing the operator with information about the controlled process, the automation should provide status about itself, the actions it has taken, and the current system state. Degradation in performance of the automation should be made obvious to the operator.
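As a rough sketch of this idea, the fragment below shows automation that reports on itself as well as the process: its current mode, a log of recent actions, and a prominent flag when its performance is degraded. The class and field names are illustrative assumptions.

```python
# Illustrative sketch: automation that reports its own status.
# Automation, its fields, and the report format are hypothetical.

class Automation:
    def __init__(self):
        self.mode = "standby"
        self.actions = []        # log of actions taken, shown to operator
        self.degraded = False    # set when self-checks detect degradation

    def act(self, action):
        self.actions.append(action)

    def status(self):
        # One consolidated report: current mode, recent actions, and an
        # explicit, hard-to-miss warning when performance is degraded.
        report = {"mode": self.mode, "recent": self.actions[-3:]}
        if self.degraded:
            report["warning"] = "AUTOMATION DEGRADED"
        return report
```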
Alarms can be a valuable part of the HMI, but they must be used carefully. It is possible for operators to be overwhelmed by too many alarms. The beginning of an incident or accident is the wrong time for the operator to have to use valuable reaction time figuring out how to shut off all the loud and obnoxious alarms in order to be able to think clearly. Alarms, if they are too sensitive and go off too often, can also provoke a response of incredulity. Operators may have a tendency to believe that the alarm is malfunctioning rather than the system being monitored. Lastly, alarms can provoke a routine of relying on the alarms as a primary safety system rather than a backup one.
To use alarms well in a system, design the alarms to minimize spurious triggering. Provide checks to distinguish correct from faulty instruments. Also provide checks on the alarm system itself to help keep the alarms credible in the opinion of the operators. Make a distinction between routine and critical alarms so that responses can be prioritized in an emergency. Always indicate what condition is responsible for an alarm. Provide temporal information about events and state changes. Alarms tend to indicate hazardous states in the system; the controlled process may, once out of control, damage other sensors or trip other alarms. Providing sequencing information helps to diagnose the cause and effect of events and determine what is happening. When necessary, corrective action must be required of the operator.
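Several of these guidelines can be illustrated with a small alarm manager: it requires a condition to persist before raising an alarm (filtering spurious single-sample triggers), distinguishes critical from routine alarms so responses can be prioritized, and records sequence numbers so the temporal order of events is preserved for diagnosis. The threshold, priority names, and class name are assumptions made for the sketch.

```python
import itertools

# Illustrative sketch of the alarm guidelines above; AlarmManager,
# the persistence threshold, and priority labels are hypothetical.

class AlarmManager:
    def __init__(self, persistence=3):
        self.persistence = persistence  # consecutive readings before alarming
        self._counts = {}
        self._seq = itertools.count()
        self.log = []                   # (sequence, alarm, priority)

    def sense(self, alarm, triggered, critical=False):
        # Require the condition to persist across several readings,
        # which filters spurious triggers from a noisy instrument.
        n = self._counts.get(alarm, 0) + 1 if triggered else 0
        self._counts[alarm] = n
        if n == self.persistence:
            self.log.append((next(self._seq),
                             alarm,
                             "critical" if critical else "routine"))

    def active(self):
        # Critical alarms sort ahead of routine ones; the sequence
        # numbers preserve temporal ordering within each priority.
        return sorted(self.log, key=lambda e: (e[2] != "critical", e[0]))
```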
The skill required to operate a process increases as automation is added. Often, new users of automated systems are surprised by a greater need for training and skill-building in operators after automation is added. In addition to the process, operators must understand how the software works. Operators must also be taught about safety features and their design rationale (so that they do not tamper with or circumvent the safety features). Because automation introduces a layer of indirection between the operator and the process, operators must be taught general strategies rather than specific responses.
Another potential problem in human-computer interaction is mode confusion. Mode confusion is a general term for a class of situation-awareness problems. High-tech automation is changing the cognitive demands on operators. Operators are now supervising processes rather than directly controlling them. The decision-making is more complex, made more so by complicated, mode-rich systems. There is an increased need for cooperation and communication between the system and the operator. Human-factors experts have complained about technology-centered automation: designers focus on technical issues, not on supporting operator tasks, and this leads to "clumsy" automation.
The types of errors made by operators are changing as well. Errors used to be errors of commission: the operator had to do something wrong. With increased automation, the errors are errors of omission: the operator was expected to perform a function and failed to do so.
Early automated systems had a fairly small number of modes. The automation provided a passive background. The operator would act on that background by entering target data and requesting system operations. Automated systems had only one overall mode setting for each function performed. Indications of currently active mode and transitions between modes could be dedicated to one location on the display. At that time, the consequences of mode awareness breakdown were fairly small. Operators could quickly detect and recover from erroneous actions.
The flexibility of advanced automation allows designers to develop more complicated, mode-rich systems. The result is numerous mode indications spread over multiple displays, each containing just a portion of the mode status data corresponding to a particular subsystem. More complicated designs also allow for interactions across modes. The increased capabilities of automation create increased delays between user input and feedback about system behavior. These changes have made error and failure detection and recovery more difficult. Operators are now challenged to maintain awareness of active modes, armed modes, interactions between environmental status and mode behavior, and interactions across modes.
Mode confusion analysis helps identify predictable error forms. The idea is to identify patterns of mode interactions that are likely to cause human operators to lose mode awareness. These predictable error forms are derived from studies of accidents and incidents and from simulator studies of operators. First, the black-box behavior of the software is modeled; analysts can then identify the modeled software behavior that is likely to lead to operator error. Several steps can be taken to reduce the probability of such errors: the automation can be redesigned, a more appropriate human-computer interaction can be designed, and operational procedures and training can be changed.
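A toy version of such a black-box model can make the analysis concrete. Below, mode transitions are tabulated with a flag for whether the operator commanded them; the analysis then lists every "indirect" mode change the automation makes on its own, since each is a candidate for the predictable error form where the operator never notices the change. The mode and event names are invented for the example.

```python
# Illustrative black-box mode model; the mode and event names
# ("capture", "hold", "envelope_exceeded", ...) are hypothetical.

TRANSITIONS = {
    # (mode, event) -> (next_mode, operator_initiated)
    ("capture", "cmd_hold"): ("hold", True),
    ("hold", "altitude_reached"): ("capture", False),      # automatic
    ("capture", "envelope_exceeded"): ("protect", False),  # automatic
}

def indirect_mode_changes(model):
    # Flag every transition the automation takes without an explicit
    # operator command; each one needs salient feedback in the HMI.
    return [(mode, event, nxt)
            for (mode, event), (nxt, by_operator) in model.items()
            if not by_operator]
```

On this model, the analysis would flag the automatic reversions to `capture` and `protect` as places where mode awareness can silently break down.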
Here are a few examples of design flaws that occur in human-computer interaction.
Copyright © 2003 Safeware Engineering Corporation. All rights reserved