
Human-Computer Interaction

[The designers] had no intention of ignoring the human factor ... But the technological questions became so overwhelming that they commanded the most attention.
John Fuller
Death by Robot

There are a number of possible roles for human operators in an automated system. The operator may serve as a monitor for the automation, but this role is not always workable. If the process being controlled requires a reaction speed that a human operator cannot match, human supervision is inappropriate. Humans also depend on the information provided by the human-computer interface, and because that information about the process is indirect (filtered through the computer), it may be more difficult for the human operator to monitor the system. If a failure of the system is silent or masked in some way, a human monitor will not detect the problem. Finally, a job that involves little active behavior leads to lower alertness and vigilance on the part of the operator: automation that performs well day after day breeds complacency, and humans are not physically or mentally suited to long stretches of monotonous vigilance.

Humans may also be used as a backup to automation, but this can lead to lower proficiency in the human operators. Skills that are not often used tend to be forgotten. Additionally, humans lose confidence in skills they do not exercise, so operators may become hesitant to intervene even when they should take over for failing automation. There may also be an effect on the designers of the system: if the designers know that a human will back up the system, they may not take as many steps to make the system as robust as it should be.

Another option is to make the automation a partner to the human operator. The danger is that the operator will be left with a collection of miscellaneous tasks that do not fit well with the technology choices made for the automation. Typically, the human is left doing the hard parts of the job while the automation accomplishes the easy tasks. The problem is that the hard tasks may become even harder for the human worker than in an unautomated job: some of the context of the problem may be taken away by the automation, leaving the human with fewer tools for making good decisions.

The simplest solution to human-machine interaction (HMI) design is to automate as much as possible, but this is not the best solution. There are conflicting design qualities that are desirable in a human-machine interface, and these must be carefully weighed. Norman claims that appropriate design should assume the existence of error, continually provide feedback, continually interact with operators in an effective manner, and allow for the worst possible situation.

Systems must be tailored to match human requirements rather than vice versa; changing a system design is easy compared to changing the "design" of a human operator. Safe systems must be designed to withstand normal, expected human behavior. Operators are intelligent and inquisitive; they form mental models of how the system functions and will find opportunities to test those models. Interfaces must also be designed to combat lapses in alertness, and they must be error tolerant: operators should be able to monitor the results of their actions and recover from their own errors. Humans are generally good at detecting and correcting their own errors, but the necessary feedback paths and appropriate controls must be provided through the human-machine interface.

Once tasks have been allocated, steps can be taken to reduce the likelihood of human error. Safety-enhancing actions should be easy, natural, and difficult to omit or do wrong; for example, stopping an unsafe action or leaving an unsafe state should take a single keystroke. Dangerous actions should be difficult or impossible; for example, potentially dangerous actions should require two or more unique commands. The interface should also include the information necessary for making decisions.
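
As a rough illustration of the first two guidelines, the sketch below uses hypothetical Python; the ControlPanel class, the "purge" action, and the ten-second confirmation window are all invented for this example rather than taken from any real system. Leaving the unsafe state takes a single command, while the dangerous action requires two distinct commands issued within a short window.

    # Illustrative sketch only: a hypothetical control interface in which
    # stopping the process takes one command, while a potentially dangerous
    # action (here, a "purge") requires two distinct commands in sequence.

    import time

    CONFIRM_WINDOW_SECONDS = 10.0  # second command must follow arming within this window


    class ControlPanel:
        def __init__(self, process):
            self.process = process       # any object exposing shutdown() and purge()
            self._purge_armed_at = None

        def emergency_stop(self):
            """Leaving the unsafe state: a single command, no confirmation required."""
            self.process.shutdown()

        def arm_purge(self):
            """First of the two distinct commands required for the dangerous action."""
            self._purge_armed_at = time.monotonic()
            return "Purge armed; issue execute_purge() within 10 seconds to proceed."

        def execute_purge(self):
            """Second, different command; refused unless arming is still recent."""
            if self._purge_armed_at is None:
                return "Refused: purge has not been armed."
            if time.monotonic() - self._purge_armed_at > CONFIRM_WINDOW_SECONDS:
                self._purge_armed_at = None
                return "Refused: confirmation window expired; arm the purge again."
            self._purge_armed_at = None
            self.process.purge()
            return "Purge executed."

The point is structural: the safe action is a single step that no confirmation logic can block, while the dangerous action cannot be completed by a single slip.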

Each task that the operator performs should be analyzed to determine what information is needed to accomplish it, and that information should be provided by the human-computer interface. The automation should provide feedback about the operator's actions to help the operator detect human error, and it should provide feedback about the state of the system to keep operator mental models up to date and to help the operator detect system faults. The system should also allow for the failure of computer displays by providing alternative sources of information, and instrumentation needed to deal with a malfunction must not be disabled by that same malfunction.

In addition to providing the operator with information about the controlled process, the automation should provide status about itself, the actions it has taken, and the current system state. Degradation in performance of the automation should be made obvious to the operator.

Alarms can be a valuable part of the HMI, but they must be used carefully. Operators can be overwhelmed by too many alarms: the beginning of an incident or accident is the wrong time for the operator to spend valuable reaction time figuring out how to shut off all the loud and obnoxious alarms in order to be able to think clearly. Alarms that are too sensitive and go off too often can also provoke incredulity; operators may come to believe that the alarm is malfunctioning rather than the system being monitored. Lastly, alarms can encourage a routine of relying on them as a primary safety system rather than as a backup.

To use alarms well in a system, design them to minimize spurious triggering, and provide checks to distinguish correct from faulty instruments. Also provide checks on the alarm system itself to help keep the alarms credible to operators. Distinguish between routine and critical alarms so that responses can be prioritized in an emergency, and always indicate what condition is responsible for an alarm. Provide temporal information about events and state changes: alarms tend to indicate hazardous states, and the controlled process, once out of control, may damage other sensors or trip other alarms, so sequencing information helps in diagnosing the cause and effect of events and determining what is happening. When necessary, corrective action must be required of the operator.
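
A minimal sketch of three of these guidelines -- suppressing spurious triggers by requiring a condition to persist, distinguishing routine from critical alarms, and recording when and why each alarm fired -- is shown below in hypothetical Python; the alarm names, persistence time, and priority scheme are assumptions made for illustration.

    # Illustrative alarm-handling sketch, not taken from any real alarm system.
    # A condition must persist before it is annunciated (reducing spurious
    # triggers), each alarm is tagged routine or critical so responses can be
    # prioritized, and the triggering condition is logged with a timestamp so
    # the sequence of events can be reconstructed later.

    import time
    from typing import List, Optional, Tuple

    PERSISTENCE_SECONDS = 2.0  # condition must hold this long before the alarm fires

    Event = Tuple[float, str, str]  # (timestamp, condition name, priority)


    class Alarm:
        def __init__(self, condition_name: str, critical: bool):
            self.condition_name = condition_name
            self.critical = critical
            self._first_seen: Optional[float] = None
            self._annunciated = False

        def update(self, condition_active: bool, log: List[Event]) -> None:
            now = time.monotonic()
            if not condition_active:
                self._first_seen = None      # condition cleared; reset
                self._annunciated = False
                return
            if self._first_seen is None:
                self._first_seen = now       # start the persistence timer
                return
            if not self._annunciated and now - self._first_seen >= PERSISTENCE_SECONDS:
                self._annunciated = True
                priority = "CRITICAL" if self.critical else "ROUTINE"
                log.append((now, self.condition_name, priority))


    def prioritized(log: List[Event]) -> List[Event]:
        """Critical alarms first, then by time, so responses can be ordered in an emergency."""
        return sorted(log, key=lambda event: (event[2] != "CRITICAL", event[0]))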

The skill required to operate a process increases as automation is added, and new users of automated systems are often surprised by the greater need for operator training and skill-building after automation is introduced. In addition to understanding the process, operators must understand how the software works. Operators must also be taught about safety features and their design rationale (so that they do not tamper with or circumvent them). Because automation introduces a layer of indirection between the operator and the process, operators must be taught general strategies rather than specific responses.

Another potential problem in HCI is mode confusion, a general term for a class of situation-awareness problems. High-tech automation is changing the cognitive demands on operators: operators now supervise processes rather than directly controlling them, and the decision-making is more complex, made more so by complicated, mode-rich systems. There is an increased need for cooperation and communication between the system and the operator. Human-factors experts have complained about technology-centered automation, in which designers focus on technical issues rather than on supporting operator tasks; the result is "clumsy" automation.

The types of errors made by operators are changing as well. Errors used to be errors of commission: the operator had to do something wrong. With increased automation, the errors are errors of omission: the operator was expected to perform a function and failed to do so.

Early automated systems had a fairly small number of modes. The automation provided a passive background. The operator would act on that background by entering target data and requesting system operations. Automated systems had only one overall mode setting for each function performed. Indications of currently active mode and transitions between modes could be dedicated to one location on the display. At that time, the consequences of mode awareness breakdown were fairly small. Operators could quickly detect and recover from erroneous actions.

The flexibility of advanced automation allows designers to develop more complicated, mode-rich systems. The result is numerous mode indications spread over multiple displays, each containing just the portion of mode status data that corresponds to a particular subsystem. More complicated designs also allow for interactions across modes, and the increased capabilities of the automation create longer delays between user input and feedback about system behavior. These changes have made error and failure detection and recovery more difficult, and they challenge the human ability to maintain awareness of active modes, armed modes, interactions between environmental status and mode behavior, and interactions across modes.

Mode confusion analysis helps identify predictable error forms. The idea is to identify patterns of mode interactions that are likely to cause human operators to lose mode awareness; these predictable error forms are derived from studies of accidents and incidents and from simulator studies of operators. First, the black-box behavior of the software is modeled; analysts can then identify the modeled behavior that is likely to lead to operator error. Several steps can be taken to reduce the probability of the error occurring: the automation can be redesigned, a more appropriate human-computer interaction can be designed, or operational procedures and training can be changed.
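
As a rough sketch of the modeling step, the hypothetical Python below encodes black-box mode-transition behavior as a table and scans it for transitions that occur without a direct operator mode command; the modes, events, and transition table are invented for illustration and do not describe any real system.

    # Illustrative sketch of one step in a mode confusion analysis: model the
    # black-box mode-transition behavior as a table, then flag transitions that
    # are triggered by sensed conditions or side effects rather than by an
    # explicit operator mode command. The modes and events are hypothetical.

    # (current mode, event) -> next mode
    TRANSITIONS = {
        ("CRUISE", "operator_selects_descend_mode"): "DESCENT",
        ("DESCENT", "operator_selects_hold_mode"): "HOLD",
        # The mode changes because of a sensed condition, not an operator command.
        ("DESCENT", "target_altitude_within_capture_band"): "ALTITUDE_CAPTURE",
        # An operator action that is not a mode command nevertheless changes the mode.
        ("ALTITUDE_CAPTURE", "operator_selects_new_altitude"): "DESCENT",
    }

    # Events that are explicit operator mode commands.
    OPERATOR_MODE_COMMANDS = {"operator_selects_descend_mode", "operator_selects_hold_mode"}


    def indirect_mode_changes(transitions, operator_mode_commands):
        """Return transitions in which the mode changes without a direct mode
        command -- the kind of behavior likely to erode mode awareness."""
        return [
            (mode, event, next_mode)
            for (mode, event), next_mode in transitions.items()
            if next_mode != mode and event not in operator_mode_commands
        ]


    if __name__ == "__main__":
        for mode, event, next_mode in indirect_mode_changes(TRANSITIONS, OPERATOR_MODE_COMMANDS):
            print(f"indirect change: {mode} --[{event}]--> {next_mode}")

Redesign of the automation, interface changes, or training can then be targeted at the flagged transitions.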

Here are a few examples of design flaws that occur in human-computer interaction.

  1. Interface interpretation errors
    - Software incorrectly interprets input
    - Multiple conditions mapped to the same output

    Strasbourg (A320)
    The crew directed the automated system to fly in the TRACK/FLIGHT PATH ANGLE mode, a combined mode related both to lateral (TRACK) and vertical (flight path angle) navigation. When they were given radar vectors by the air traffic controller, they may have switched from TRACK to HDG SEL mode to be able to enter the heading requested by the controller. However, pushing the button to change the lateral mode also automatically changes the vertical mode from FLIGHT PATH ANGLE to VERTICAL SPEED; that is, the mode switch button affects both lateral and vertical navigation. When the pilots subsequently entered "33" to select the desired flight path angle of 3.3 degrees, the automation interpreted the "33" as a desired vertical speed of 3300 feet per minute. The pilots were not aware of the active "interface mode" and failed to detect the problem. As a consequence of the resulting overly steep descent, the aircraft crashed into a mountain.
     
    Operating room medical device
    The device has two operating modes: warmup and normal. It starts in warmup mode whenever either of two particular settings is adjusted by the operator (an anesthesiologist). The meaning of alarm messages and the effect of controls differ between these two modes, but neither the current operating mode nor a change in mode is indicated to the operator. In addition, four distinct alarm-triggering conditions are mapped onto two alarm messages, so the same message has different meanings depending on the operating mode. To understand what internal condition triggered the message, the operator must infer which malfunction is being indicated by the alarm.
     
    Display modes
    In some devices, user-entered target values are interpreted differently depending on the active display mode.
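
    The sketch below, in hypothetical Python, condenses the kind of mode-dependent interpretation described in the examples above: the same operator entry is mapped to very different targets depending on which interface mode happens to be active. The function and mode names are simplifications made for this example, not code from any real flight management system.

        # Illustrative sketch of an interface interpretation error: the same
        # operator entry ("33") means very different things depending on the
        # active vertical mode. Mode names and scaling are simplified from the
        # account above.

        def interpret_vertical_entry(digits: str, vertical_mode: str) -> str:
            value = int(digits)
            if vertical_mode == "FLIGHT_PATH_ANGLE":
                return f"target flight path angle: {value / 10:.1f} degrees"
            if vertical_mode == "VERTICAL_SPEED":
                return f"target vertical speed: {value * 100} ft/min"
            raise ValueError(f"unknown vertical mode: {vertical_mode}")


        # The crew intended the first interpretation; the active mode produced the second.
        print(interpret_vertical_entry("33", "FLIGHT_PATH_ANGLE"))  # 3.3 degrees
        print(interpret_vertical_entry("33", "VERTICAL_SPEED"))     # 3300 ft/min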
     
  2. Inconsistent behavior
    - Inconsistent behavior makes it harder for the operator to learn how the automation works.
    - To compensate, pilots are changing their scanning behavior. Pilots used to be taught to scan the rows of status indicators in sequence repeatedly during flight; if any alarm tripped or another status indicator lit, the pilot would notice and take appropriate action. With increasing automation, pilot attention is instead focused wherever it is drawn, at any given moment, by the interface design.

    A320 simulator study
    In a go-around below 100 feet, the pilots failed to anticipate or realize that the autothrust system did not arm when they selected TOGA power; it did arm under all other circumstances in which TOGA power is applied.
     
    Bangalore (A320)
    A protection function is provided in all automation configurations except the ALTITUDE ACQUISITION mode, which was the mode in which the autopilot was operating.
     
  3. Indirect mode changes
    - Indirect mode changes occur when the automation changes mode without any direct command from the operator.
    - Indirect mode changes can also take place when activating one mode activates additional modes, depending on the system's state at the time of manipulation.

    Bangalore (A320)

    The pilot put the plane into OPEN DESCENT mode without realizing it. In this mode the aircraft's speed is controlled by pitch rather than thrust, so the throttles went to idle, and the automation ignores any preprogrammed altitude constraints. To maintain the pilot-selected speed without power, the automation had to use an excessive rate of descent, which led to a crash short of the runway.

    How could this happen? There are three ways to activate OPEN DESCENT mode:

    1. Pull the altitude knob after selecting a lower altitude.
    2. Pull the speed knob when the aircraft is in EXPEDITE mode.
    3. Select a lower altitude while in ALTITUDE ACQUISITION mode.

    The pilot must not have been aware that the aircraft was within 200 feet of its previously entered target altitude, which puts the automation into ALTITUDE ACQUISITION mode. Thus, the pilot may not have expected the selection of a lower altitude to result in a mode transition and, not expecting the mode switch, may not have closely monitored his mode annunciations. The crew discovered what had happened only 10 seconds before impact -- too late to recover with the engines at idle.
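
    A compact way to see why this third path is easy to miss is to write the three activation conditions out explicitly, as in the hypothetical Python below; the function name, state representation, and event names are invented for illustration, and only the structure of the conditions follows the description above.

        # Simplified sketch of the three OPEN DESCENT activation paths listed
        # above. The state representation and names are invented; only the
        # structure of the conditions follows the text.

        def open_descent_activates(action: str, state: dict) -> bool:
            """Return True if the given pilot action would activate OPEN DESCENT."""
            # Path 1: pull the altitude knob after selecting a lower altitude.
            if action == "pull_altitude_knob" and state["lower_altitude_selected"]:
                return True
            # Path 2: pull the speed knob while in EXPEDITE mode.
            if action == "pull_speed_knob" and state["mode"] == "EXPEDITE":
                return True
            # Path 3: select a lower altitude while in ALTITUDE ACQUISITION mode.
            # Whether this path fires depends on hidden state (being within 200
            # feet of the previously entered target altitude), not on any
            # explicit mode command, which is what makes the change indirect.
            if action == "select_lower_altitude" and state["mode"] == "ALTITUDE_ACQUISITION":
                return True
            return False


        # The situation described above: the aircraft was close enough to its
        # target altitude that the automation was already in ALTITUDE
        # ACQUISITION mode when a lower altitude was selected.
        print(open_descent_activates(
            "select_lower_altitude",
            {"mode": "ALTITUDE_ACQUISITION", "lower_altitude_selected": True},
        ))  # prints True, although the pilot issued no explicit mode command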

  4. Operator authority limits
    - Authority limits prevent actions that would lead to hazardous states.
    - The same limits may prohibit maneuvers needed in extreme situations.

    A320 approach
    During one A320 approach, the pilot disconnected the autopilot while leaving the flight director engaged. Under these conditions, the automation provides automatic speed protection by preventing the aircraft from exceeding upper and lower airspeed limits. At some point during the approach, after flaps 20 had been selected, the aircraft exceeded the airspeed limit for that configuration by 2 kts. As a result, the automation intervened by pitching the aircraft up to reduce airspeed back to 195 kts. The pilots, who were unaware that automatic speed protection was active, observed the commanded behavior. Concerned about the unexpected reduction in airspeed at this critical phase of flight, they rapidly increased thrust to counterbalance the automation. As a consequence of this sudden burst of power, the airplane pitched up to about 50 degrees, entered a sharp left bank, and went into a dive. The pilots eventually disengaged the autothrust system and its associated protection function and regained control of the aircraft.
     
  5. Unintended side effects
    - An action intended to have one effect may have an additional one.

    A320 simulator study
    Because approach is such a busy time and the automation requires so much heads-down work, pilots often program the automation as soon as they are assigned a runway. In an A320 simulator study, it was discovered that pilots were not aware that entering a runway change AFTER entering the data for the assigned approach results in the deletion of all previously entered altitude and speed constraints, even though those constraints may still apply.
     
  6. Lack of appropriate feedback
    - Operators need feedback to predict or anticipate mode changes.
    - Independent information is needed to detect computer errors.

    Bangalore (A320)
    The PF (pilot flying) had disengaged his flight director during approach and assumed the PNF (pilot not flying) would do the same. The result would have been a mode configuration in which airspeed is automatically controlled by the autothrottle (the SPEED mode), which is the recommended procedure for the approach phase. However, the PNF never turned off his flight director, and the OPEN DESCENT mode became active when a lower altitude was selected. This indirect mode change led to the hazardous state and eventually to the accident. A complicating factor was that each pilot received an indication of the status of only his own flight director, rather than all the information needed to determine whether the desired mode would be engaged. The lack of feedback about the complete system state contributed to the pilots not detecting the unsafe state in time to reverse it.


