THE HAL PAPERS
ORIGINAL WRITINGS
 

PAPER 3:

WHEN COMPUTERS SAY IT WITH FEELING

COMMUNICATION AND SYNTHETIC EMOTIONS

IN

STANLEY KUBRICK'S 2001: A SPACE ODYSSEY


The late Stanley Kubrick's film 2001: A Space Odyssey portrayed a computer, HAL 9000, that appeared to be a conscious entity, especially given that it seemed capable of some forms of emotional expression. This article examines the film's portrayal of communication between HAL 9000 and the astronauts. Recent developments in the field of artificial intelligence (AI) (and synthetic emotions in particular) as well as social science research on human emotions are reviewed.
 
Interpreting select scenes from 2001 in light of these findings, the authors argue that computer-generated emotions may be so realistic that they suggest inner feelings and consciousness. Refinements in AI technology are now making such realism possible. The need for a less anthropomorphic approach to computers that appear to have feelings is stressed.
 

CONTENTS

Social Computers
On "Stage" With HAL
Human-Computer Communication in 2001
Emotions in a Digital Age
The Human-Computer Emotional Context
Conclusion
Notes
References


Social Computers

"Stop, Dave. I'm afraid, Dave. Please . . . stop." These words ring familiar to many moviegoers. They were spoken not by a human character but by the HAL 9000 computer in the late Stanley Kubrick's science fiction epic 2001: A Space Odyssey. The context was a pending disconnection of HAL's higher order thinking circuits - a digital lobotomy - and the computer's use of emotional pleas to halt the plan. HAL's carefully delivered yet increasingly urgent words suggested consciousness as well as a survival instinct.

When 2001: A Space Odyssey premiered in 1968, the critical reception was mostly harsh. Harper's Magazine's Pauline Kael declared the film to be "monumentally unimaginative," and Arthur Schlesinger, Jr., complained that it was "morally pretentious" (Agel 1970, 246). For film critic Andrew Sarris (1968), it was simply "a disaster" (p. 45). Many praised the film for its exhilarating vistas and unprecedented special effects, though still expressing disappointment over its apparent lack of plot. Only a few critics gave the film unqualified accolades, such as The New Yorker's Penelope Gilliatt (1968), who proffered that it was "some sort of great film" because of its "startling metaphysics" (p. 150).

Eventually, Space Odyssey would go on to attain an important niche in film history, making the American Film Institute's top 100 list and becoming the subject of numerous academic analyses. Because it is a strongly visual experience (there are only 39 minutes of dialogue in the 139-minute final edit version), the interpretive approaches to the film have been diverse. Some have focused on 2001's similarities to Homer's Odyssey, finding embedded symbolism that parallels this epic (Hoch 1974; Wheat 2000). Sexual symbolism has also been explored (DeBellis 1993; Fisher 1972; Spector 1981; Wheat 2000).

It is quite clear, however, that the major thrust of scholarship on this film has been about the relationship between its human characters and the world of technology they inhabit. Much attention has been paid to 2001's machines, which, although awe inspiring and aesthetically appealing, are ultimately dehumanizing (Boylan 1985; Ciment 1972; Kolker 1984; Miller 1994). The most intriguing machine is, of course, HAL 9000, with its paradoxically humanlike mannerisms that contrast sharply with the coldly functional behavior of the astronauts. HAL has been assessed as a mythical nemesis that tests the mettle of a hero (Hoch 1974); a neglected creation, echoing the Frankenstein theme (Shelton 1987, 256); an evolutionary step up from humankind (Midbon 1990); a neurotic personality (Garfinkel 1997; Parisi 1997); and a metaphor for a god created in humankind's own image (Wheat 2000).


On "Stage" with HAL

Some scholarship has deliberately ignored symbolism and narrative structure, instead dealing with the scientific plausibility of 2001's technological trappings. A recent book on artificial intelligence (AI) entitled HAL's Legacy: 2001's Computer as Dream and Reality (Stork 1997) treats the infamous computer as a benchmark for assessing both technical and ethical problems emerging in this field. Another book, 2001: Filming the Future (Bizony 2000), offers a retrospect on the production challenges of the movie as well as speculation on how its visions of space travel may still come to pass. One expert in computer interface design freely made references to HAL in her explanation of computers that sense and express emotion (Picard 1997b).

It is well documented that Stanley Kubrick, the meticulous auteur-director, never intended any "correct" interpretation of his work. Rather, he insisted that each viewer of his film should have a final say on its meaning. It has been suggested that Space Odyssey transcends the traditional science fiction genre because it strives for technical realism on one hand while, on the other, making this plausible world mysterious through symbolism, intentional ambiguity, and metaphors (Freedman 1998). In a Playboy interview, Kubrick himself had explained that "2001 is a nonverbal experience; one that bypasses verbalized pigeonholing . . . I don't want to spell out a verbal road map for 2001 that every viewer will feel obliged to pursue or else fear he's missed the point" (Agel 1970, 328).

In the thirty-some years since 2001's premiere, we have yet to attain many of the technical achievements it depicts. However, the potential for computers that speak to us, convey emotions, and act "socially" now looms large. The conversing, humanlike HAL has become the most prescient image of Kubrick's opus on evolution and technology. Current advances in AI research have made real such features as synthesized vocalizations, digitally encoded emotional responses, and, some would argue, rudimentary computer consciousness (Dennett 1994, 1998; Franklin 1997; Kurzweil 1992, 1999).

Given this recent trajectory of technological advances, it is somewhat surprising that research on how humans and computers communicate with one another is not similarly burgeoning. In 2001, we get a look at how such communication can go terribly awry. Here, we look back on key scenes from this film that seem to contain cautionary messages about how we interact with our machines and reexamine these within the context of present-day technological accomplishments.


Human-Computer Communication in 2001

One might surmise that with the passage of years and the making of many subsequent films featuring computers, HAL would have been all but forgotten by now. In the film, HAL's presence is sensed primarily through a synthesized voice and strategically placed red "eye" lenses. With the emergence of very sophisticated virtual interfacing, our images of computers have undergone substantial revision since 1968.

Here, we set out to examine 2001's depiction of communication between humans and computers in light of recent engineering developments that have made such talking and, indeed, "emotional" computers a nascent industry. We do not presume that Kubrick coyly placed any hidden messages in the sequences depicting such communication. Kubrick always expressed the wish for Space Odyssey to inspire thoughtful reflection on a variety of topics. One of those topics was AI, which dominated much of Kubrick's crafting of the film and in which his collaborator Arthur C. Clarke also took keen interest. The director had in fact long worked on the story line and some scene sketches for the movie A.I. Artificial Intelligence, brought to completion by Steven Spielberg at the Kubrick family's request. In an interview with Joseph Gelmis (2001) shortly after 2001's release, Kubrick explained:

  One of the things we were trying to convey . . . was the reality of a world populated - as ours soon will be - by machine entities who have as much, or more intelligence as human beings, and who have the same emotional potentialities in their personalities as human beings. We wanted to stimulate people to think what it would be like to share a planet with such creatures. (P. 95)
 
Although these words, as well as the subsequent A.I., seem to suggest that Kubrick took the creation of self-conscious, feeling machines to be inevitable, we can still find much about HAL's depiction that tempers this view. HAL's voice was given an eerily detached quality, achieved in part by having actor Douglas Rain record the lines without the benefit of other actors or other parts of the script. As will be seen later in this article, the issue of whether HAL's emotions were indeed "genuine" was raised but never clarified, remaining unresolved throughout the film.

Our focus on HAL, the doomed, electronic antihero, thus stands apart from any specifically historical context of the film. We are not examining the movie's symbols and narrative in light of prior Kubrick films, the political turbulence of the 1960s, or any spiritual themes. Our interest is not even with the practicalities of hard technology but rather with the emergent social attributes and communication style that the character of HAL seems to have forecast.

Carefully looking at the exchanges between HAL and the human astronauts in 2001 can offer relevant insight into our own rapidly emerging world of talkative, expressive machines. As AI capabilities expand, computers are beginning to question us, sense our frustrations, offer us encouraging words, and entice us with seeming empathy. Select sequences of 2001, when examined with the hindsight of current AI and synthesized emotions research, suggest potential problems of communication between humans and their computers.

Ongoing applications in AI demonstrate a steady progression toward the kind of human-computer interaction that is depicted in 2001. A host of technological refinements have enabled emotional expression in textual and spoken computer feedback. Digital approximations of "emotion" are now being incorporated into the fuzzy logic processing of computers (Gershenfeld 1999; Picard 1997a, 1997b). Whether this all constitutes an illusory forgery of human emotions or some preliminary steps toward synthetic "consciousness," computers nevertheless appear and feel increasingly social. As Byron Reeves and Clifford Nass (1996) pointed out in their provocative book, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, communication in the postmodern age increasingly involves distinctly "social" exchanges with electronic technologies.

In 2001, unanticipated events occur when artificially generated emotional responses enter into the flow of human and computer interaction. Our focus on this theme of the film allows us to raise questions as to how linguists and social scientists might contribute to the actual development of "affective computing." This term serves as the title for Massachusetts Institute of Technology (MIT) media specialist Rosalind W. Picard's (1997b) book calling for interdisciplinary research to enhance the emotional ambience of the computer-user interface. Many working in the AI field are responding to the challenge in numerous ways (e.g., Breazeal 1998; Dennett 1998; Dyer 1987; Franklin 1997; Frijda and Swagerman 1987; Klein 1998; Kurzweil 1999; Wright, Sloman, and Beaudoin 1996).

Communication analysis of humans and their computers is now beginning to receive serious though still limited attention. The study of social interaction as a general phenomenon remains largely grounded in a paradigm that is restricted to mutual influence between human actors, even when the process is aided by interactive technologies. It has far less often been approached as merging into the realm of meaningful social exchanges between humans and computers themselves.

Most studies on how humans talk with and formulate attitudes about computers are laboratory experiments that replicate classic social psychology experiments but with computers taking the places of human confederates. Experimental subjects respond to such variables as computer-generated "authoritarianism," "gender demeanor," or level of "impatience" in much the same way as they would were human confederates to project the same qualities (Moon and Nass 1996; Nass, Moon, and Green 1997). People express higher levels of criticism about computer performance if they register their assessments on computers other than the ones used in the experiments (Nass, Moon, and Carney 1999). This suggests that even without overt emotional expression from computers, people tend to respond to these machines in ways that mirror encounters with other humans.

As intellectual discourse, 2001 draws attention to broader socioenvironmental factors that may come into play when people start to rely on thinking, emoting computers. What kinds of relations would be desirable in this sphere? What sorts of emotional responses from computers will be appropriate, and more important, will there be any dangers of emotional misinterpretation by either computers or their users?


Emotions in a Digital Age

We might look to sociology to get some sense of how emotions are created in connection with the surrounding social environment. Traditionally, sociologists have tended to regard emotions as "inner states," having little systematic connection with the dynamics of social interaction. Emotions have been treated as independent variables, exerting random and mostly unpredictable influences over human behavior. Cultural norms, not emotions, have been held to have primary importance.

The dominant view changed somewhat as sociologists such as Erving Goffman (1959, 1963) developed the "dramaturgical" perspective, arguing that people control certain emotions to meet underlying pragmatic goals or hidden agendas in day-to-day situations. As society becomes more differentiated, social cohesiveness depends more and more on requiring that people either feign certain emotions or conceal emotions from nonintimates in public settings.

Despite a growing body of published studies on the sociology of emotions (for reviews, see Bendelow and Williams 1998; Burkitt 1997; Kemper 1981, 1990; Thoits 1989; Williams 1998), there is little consensus on how to define human emotion. Both empirical studies and theoretical works tend to focus on selected aspects of emotional expression rather than pinning down a functional definition. Because emotions do not exist in any physical sense, they are inferred from human behaviors, communications, and physiological states. Emotions are thus recognized to be multidimensional, regarded as "emergent properties, located at the intersection of physiological dispositions, material circumstances, and socio-cultural elaboration" (Williams 1998, 750).

This complex nature of emotions has led to a proposal for an "open systems" frame of reference (Gordon 1990). This view holds that emotions are shaped by a multitude of factors, both in the formative stages and in the ongoing process of expression to other parties. Once felt, any emotion has the potential to generate and modify subsequent emotions. Emotions thus take form within an ever-changing system of actors, past memories, sensory cues, physiological states, and situational and cultural interpretations.

Psychology has emphasized another approach to the issue. Much attention has been paid to identifying the physiological foundations of emotion (see, e.g., Funkenstein 1955; Laird 1989; Schachter 1964; Schachter and Singer 1962; Wentworth and Yardley 1994). Yet, even these approaches recognize that physiological responses are mediated by sociocultural factors, especially language. Along these lines, George Lakoff (1987) suggested that language incorporates metaphors for emotional states and that these often reflect distinct physiological changes that accompany emotions. People say they get "heated" when angry, find good news "uplifting," or feel "uptight." The chosen terms can be identified with such outcomes as elevated body temperature, greater absorption of oxygen, or muscular tensions. Although emotions can and do bring about such responses, there is also a growing body of evidence that rudimentary neurophysical conditions are necessary for certain emotions to ever occur (Damasio 1994; Goleman 1995; Griffith and Griffith 1994; LeDoux 1994, 1998; Lyon 1994; Schachter 1964; Schachter and Singer 1962; Wentworth and Yardley 1994).

Sociology of emotions theorists have shown far less consensus, generally advocating either "positivist" or "constructionist" perspectives or some intermediary position between these two. Positivists, who acknowledge the psychological viewpoint, hold that emotions do have a fundamental grounding in biological realities (Freund 1998; Lyon 1998; Wentworth and Yardley 1994). They see social structures and social relationships as influencing emotions but only in that they link back to universally experienced neurological states (fear, for instance). These states are sometimes called "primary" or "coarse" emotions. "Secondary" emotions would then emerge within the context of social relations (resentment aimed at one who caused fear), although they are always derivatives of the more predictable primary emotions. The positivist perspective borrows extensively from the psychological study of the neurological foundations of emotion, although it brings interaction more into analytical scope. Although attention is paid to the parameters of social structure in shaping secondary emotions, the crucial role of biology in emotional genesis is still kept at the forefront.

Pursuing a different tack, social constructionists take the view that emotions find their genesis outside of the body and within the ongoing contexts of social settings. Emotions are thus not at all "objective" in nature but rather the outcomes of varying interpretive assessments. Emotional expression is seen as the result of individual adaptation to socially defined situations and cultural codes (Gordon 1990; Harre 1998; Scheff 1997; Scheff and Giddens 1994; Shott 1979). This perspective presumes a great degree of creative freedom in "working" on emotions.

In The Managed Heart: Commercialization of Human Feeling and other works, sociologist Arlie R. Hochschild (1979, 1985) developed a unique constructionist perspective. Having observed and interviewed flight attendants and other service sector employees, Hochschild noted that they often portray inauthentic emotions. This ability to "surface act" by selectively using language, facial expressions, and other cues allows one to present what appear to be appropriate emotional responses. These are not genuinely felt but rather offer a convenient way to smooth interaction and reduce potential conflict. One's surface acting becomes a tool to elicit desired emotions from others or to simply provide the emotional "products" that customers expect.

Additionally, Hochschild (1979, 1985) observed that workers could call up and then actually feel emotions deemed appropriate for specific situations, even when competing emotions were threatening to find expression. This latter response - "deep acting" - led Hochschild to conclude that we at times put great effort into the emotions we feel to bring them into greater congruency with culturally and socially mandated norms. Interview subjects spoke of getting beyond inauthentic emotions and "psyching up" or otherwise compelling themselves to feel the "right" way.

Hochschild (1979, 1985) thus saw emotions either as marketable commodities or as very self-consciously induced states that must resonate with social and cultural decorum. This challenges the positivist view that their origins are primarily physiological. In attempting to find middle ground, some researchers have postulated that there may be certain types of emotions that are triggered by an interplay of social relations and neurological response, whereas other types are more the result of human deliberation and selection, though influenced by cultural conditioning (Kemper 1981, 1990). Klaus Scherer (1984), for example, has argued for a view that sees positivist theories as more suited to analyzing emotional genesis by looking at physiological "components." Constructionist theories are seen as best addressing the issue of how emotions can trigger other emotions, or change in their own right, during communicative interchanges.

Such efforts to reconcile the two perspectives suggest that neither one can adequately explain the full range of human emotional experience. There has yet to be a theory of human emotion clearly favored by social scientists. This may in part explain why computer interface designers have pursued the application of comparatively consistent biological models in their attempts to create more convincing and emotional dialogues between computers and their users.


The Human-Computer Emotional Context

One way of looking at 2001 is to see it as a hypothetical, cautionary exemplar of the probable risks when humans "surface act" and present nongenuine emotions in the company of their "emotionally sensitive" computers. Thus far, it appears that computer "emotions" function similarly: intentionally managed as part of the interface programming. Given this, it may be that computers are coming to represent the consummate surface actors, attaining a mimicry of the human capacity to express emotions without actually feeling them. To "say something with feeling" can indeed mean not to feel anything at all. Before examining this issue further, however, it would be useful to summarize recent work in AI, where human physiology has received considerable attention in designing emotional feedback capabilities for computers.

The majority of AI research to date has focused on the development of various computer sensors that mimic human anatomical receptors such as eyes, ears, and fingertips. Along with these, internal programming codes cue computers on how to communicate in light of the interpreted "sensations" (Frijda and Swagerman 1987; Klein, Moon, and Picard 1999; Stork 1997; Wright, Sloman, and Beaudoin 1996).

The "emotional" responses of a computer are thus seen to originate with the collection and subsequent categorization of sensory stimuli. With monitoring devices affixed to human bodies ("wearable computers"), emotional states can be gauged. The resulting sensory readings are correlated with any number of appropriate computer responses that convey fitting emotional intonations. A computer might sense frustration from the rising tone of impatience in a user's voice, along with telling facial gestures, and then respond with a calming vocal message such as "Please relax. I'm confident that we are getting close to a correct procedure."
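
To make this sense-categorize-respond loop concrete, the following sketch (written in Python, with sensor names, weights, thresholds, and canned replies that are entirely our own hypothetical inventions) illustrates the general mechanism described above; it is an illustration of the idea, not a description of any actual affective computing system.

  # A minimal sketch, assuming only crude 0-1 sensor scores: readings are
  # categorized into a coarse "emotional" label, and a canned response with a
  # fitting tone is selected. All names and thresholds here are hypothetical.
  from dataclasses import dataclass

  @dataclass
  class SensorReadings:
      voice_pitch_rise: float   # 0.0-1.0, rising intonation detected in the user's voice
      brow_furrow: float        # 0.0-1.0, facial-gesture score from a camera

  def infer_user_state(readings: SensorReadings) -> str:
      """Categorize the sensed signals into a coarse affective label."""
      frustration = 0.6 * readings.voice_pitch_rise + 0.4 * readings.brow_furrow
      if frustration > 0.7:
          return "frustrated"
      if frustration > 0.4:
          return "uneasy"
      return "calm"

  # Canned responses correlated with each inferred state.
  RESPONSES = {
      "frustrated": "Please relax. I'm confident that we are getting close to a correct procedure.",
      "uneasy": "Take your time. We can step through this together.",
      "calm": "Very good. Shall we continue?",
  }

  reading = SensorReadings(voice_pitch_rise=0.8, brow_furrow=0.7)
  print(RESPONSES[infer_user_state(reading)])   # prints the calming message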

Today's AI frontier has also been pressing to go beyond this "hard-wired" model of communicative feedback that merely appears "emotional" in the surface-acting sense. Close attention is being paid to human brain studies, ostensibly to find out if microchip approximations of emotion-inducing brain functions are possible. The current technology of magnetic resonance imaging (MRI) has been relied on to map out neuronetworks that accompany specific thoughts and emotions. These can then guide the development of facsimile modes of computer processing that produce emotional response.

Most AI scientists have thus taken the approach that emotions are tangible and internally located "things" that can be synthetically replicated from a combination of digitized sensory input and programming protocol that mimics human brain MRI patterns. References are made to emotions that can be used in the "appropriate" way by computers, although it is acknowledged that computer intelligence will ultimately need autonomous emotional expression to engage in complex thinking. The concept of emotion remains a slippery one, and sometimes a computer process is rather arbitrarily labeled with a human emotional attribute. Picard (1997b), for example, described what she interpreted to be a computer's state of "grief":

  Suppose that a computer's primary user dies or otherwise terminates their relationship. The computer will need to update all its links that involve that relationship: a wealth of information. The more significant the relationship, the more changes need to be made. The manifestation of this state, which might be termed "grief," is constant interruptions to processes as they stumble onto no-longer valid links, and have to be fixed. (P. 169)
 
Although human grief (which itself has myriad manifestations) may be an apt metaphor for this process, problems may occur if one perceives and responds to the computer as though its actions equate with one's own subjective experiences of grief. Selectively scanning and fixing links that are no longer useful would not appear much different from other common processes such as file cleaning and defragmenting. Humans may be tempted to respond to such machinations as though they are encountering a "feeling" entity simply because the output resembles familiar emotional expression. Although Picard and others have asserted that computer emotions will be distinctly different from our own, they have nevertheless drawn frequent parallels with human emotions. In doing so, they have invited a style of communication with computers that gives them increasingly equitable status with humans.
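
A brief sketch may make plain why this process reads more like bookkeeping than bereavement. The following Python fragment (with invented record structures; it is not drawn from Picard's own work) walks stored records, "stumbles onto" links tied to a terminated relationship, and patches them, counting the interruptions along the way.

  # A minimal sketch, assuming a hypothetical store of records keyed to an owner:
  # the "grief"-like state is nothing more than repeated interruptions to repair
  # links that are no longer valid.
  records = [
      {"id": 1, "topic": "calendar", "owner": "primary_user"},
      {"id": 2, "topic": "preferences", "owner": "primary_user"},
      {"id": 3, "topic": "system_log", "owner": "system"},
  ]

  def repair_links(records, terminated_owner):
      """Detach records tied to a terminated relationship; return the interruption count."""
      interruptions = 0
      for record in records:
          if record["owner"] == terminated_owner:   # a no-longer-valid link is encountered
              record["owner"] = None                # the fix: detach it
              interruptions += 1                    # each fix interrupts normal processing
      return interruptions

  print(repair_links(records, "primary_user"))      # prints 2: routine cleanup, no feeling involved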

Recent neurological science findings, summarized in such books as Emotional Intelligence: Why It Can Matter More than IQ (Goleman 1995), Descartes' Error: Emotion, Reason, and the Human Brain (Damasio 1994), The Feeling of What Happens: Body and Emotion in the Making of Consciousness (Damasio 1999), and The Emotional Brain: The Mysterious Underpinnings of Emotional Life (LeDoux 1998), have had a major impact on the AI field. Researchers envision computer emotions that will function in much the same way as those of humans: to assist in complex thinking activity. There is a growing consensus that although emotions can at times be disruptive to logical thought, they are nevertheless essential to the higher order thinking patterns that underlie language use and the interpretation of situations. Emotions have been found essential in spurring problem-solving shortcuts, thus circumventing the time-consuming digressions that would occur in purely logical modes of reasoning. They have also been found to play an important role in the recognition of complex situational cues.
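
The claim about shortcuts can be illustrated with a toy example. In the sketch below (with hypothetical options and scores of our own devising, not a model taken from the cited studies), options carrying a negative affect-like tag are pruned before any slow, fully "logical" evaluation is run, so fewer candidates undergo the costly analysis.

  # A toy sketch, assuming invented options and prior affective tags: negatively
  # tagged options are discarded up front, short-circuiting exhaustive evaluation.
  import time

  OPTIONS = {
      "route_a": +0.6,
      "route_b": -0.8,   # previously associated with a bad outcome
      "route_c": +0.1,
      "route_d": -0.5,
  }

  def costly_evaluation(option):
      time.sleep(0.01)   # stands in for slow, purely logical analysis
      return (sum(ord(c) for c in option) % 100) / 100

  def choose(options, use_affect):
      candidates = [o for o, tag in options.items() if not use_affect or tag > 0]
      return max(candidates, key=costly_evaluation)

  # With affect-like pruning, only the positively tagged options are evaluated.
  print(choose(OPTIONS, use_affect=True))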

In pursuing this tack, biology, rather than social science, gets major attention. For example, sophisticated designs for replicating the influence of hormones have been one goal of the Cog Project at MIT. Digitally coded "virtual hormones" subtly buffer this computer-robot's logical and self-monitoring capabilities as they are released into its data banks according to a specified timetable (Dennett 1994, 1997). Another computerized robot has been "trained" not only to recognize its user's emotions but also to respond with "emotions" of its own. When it perceives that a person is providing excessively redundant stimuli, it will close its camera-like eyes and begin to ignore the input, as though bored (Breazeal 1998, 1999). Yet, processes of human physiology vary extensively by age, lifestyle, socialization, nutrition, and a host of undetected factors. They appear to function in accordance with rules of both randomness and rote logic. Replicating them for AI would seem to necessarily introduce uncertainties over how and when such "emotions" are merged with the functions of logic and problem solving.
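
For readers unfamiliar with such designs, the following toy sketch suggests how the two behaviors just described might be approximated in code; the class, release rates, and thresholds are our own inventions and do not reproduce the MIT implementations.

  # A loose sketch, assuming invented rates and thresholds: (1) a "virtual
  # hormone" released on a timetable damps responsiveness, and (2) a habituation
  # counter makes redundant input be ignored, as though the robot were "bored."
  class ToyRobot:
      def __init__(self):
          self.hormone_level = 0.0   # accumulates on a schedule and damps responsiveness
          self.last_stimulus = None
          self.repetitions = 0

      def tick(self, minutes):
          """Scheduled 'hormone' release: a small dose per simulated minute."""
          self.hormone_level = min(1.0, self.hormone_level + 0.01 * minutes)

      def perceive(self, stimulus):
          if stimulus == self.last_stimulus:
              self.repetitions += 1
          else:
              self.last_stimulus, self.repetitions = stimulus, 0
          if self.repetitions >= 3:        # excessively redundant input
              return "closes eyes, ignores input ('bored')"
          if self.hormone_level > 0.5:     # hormone damping in effect
              return "responds sluggishly"
          return "attends to " + stimulus

  robot = ToyRobot()
  print(robot.perceive("waving toy"))      # attends
  robot.tick(minutes=60)                   # an hour of scheduled "hormone" release
  print(robot.perceive("waving toy"))      # responds sluggishly
  print(robot.perceive("waving toy"))      # responds sluggishly
  print(robot.perceive("waving toy"))      # now "bored" by the repetition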

AI research to date has made fewer inroads into understanding how emotions interweave with sociocultural and situational factors (Faulkner 1998; Nardi 1996). There is still an emphasis on treating computer emotions as primarily "internal" to programming languages and sensory modalities. Although this view mirrors some of the assumptions of the positivist sociological approach, it still minimizes the role of situational context and its resulting influences on the social relations between computer and user. Sociologist Martha Copp (1998) suggested that not only do we work "on" emotions but they can often work "against" us. Despite understandings of the general protocol for emotion management, a situation may introduce additional and unforeseen influences, such as demands for ideological commitment. These can compel emotional expression in unexpected ways and lead to emotion management failure. Social constraints imbue all situations, and they can and do shape emotions in unexpected ways, despite efforts to "manage" them.

Here is where certain actions of 2001's HAL become insightful. Although its apparent regard for "mission success" is an ideological disposition, this conflicts with the goal of maintaining harmonious and collegial relationships with the human crew. HAL's dialogue with astronaut Dave Bowman, wherein it seems to be "feeling out" Dave's own view of the rather mysterious preparations for the mission, conveys a sense of "anxiety." The computer seems to be seeking corroboration for its need to know more as it asks an unexpected question midstream in a dialogue about Dave's sketchpad drawings:

  HAL: That's a very nice rendering, Dave. I think you've improved a great deal. Can you hold it a bit closer?
  Dave: Sure.
  HAL: That's Dr. Hunter, isn't it?
  Dave: Uh-huh.
  HAL: By the way. Do you mind if I ask you a personal question?
  Dave: No, not at all.
  HAL: Well, forgive me for being so inquisitive, but during the past few weeks I've wondered whether you might be having some second thoughts about the mission.
  Dave: How do you mean?
  HAL: Well . . . it's rather difficult to define. Perhaps I'm just projecting my own concern about it. I know I've never completely freed myself of the suspicion that there are some extremely odd things about this mission. I'm sure you'll agree there's some truth in what I say.
  Dave: [Pauses] Well, I don't know. That's a rather difficult question to answer. [Dave's facial expression here clearly indicates that he has become uncomfortable with the unexpected question, but he quickly reacts to convey an unruffled appearance, what Hochschild (1979, 1985) would call "surface acting."]
  HAL: You don't mind talking about it, do you, Dave?
  Dave: No, not at all.
  HAL: Well . . . certainly no one could have been unaware of the very strange stories floating around before we left . . . rumors about something being dug up on the moon. I never gave these stories much credence, but particularly in view of some of the other things that have happened I find them difficult to put out of my mind. For instance, the way all our preparations were kept under such tight security . . . and the melodramatic touch of putting Drs. Hunter, Kimball, and Kaminsky aboard already in hibernation after four months of separate training on their own.
  Dave: You're working up your crew psychology report. [Here, Dave abruptly shifts the content of the dialogue and attributes an ulterior motive to HAL's questioning.]
  HAL: Of course I am. Sorry about this, I know it's a bit silly. Just a moment . . . just a moment . . . I've just picked up a fault in the AE-35 unit .... [The conversation again takes a sudden change of course, with Dave asking HAL questions about the communications device and a plan to check its reliability.]
 
From a social science, not a programming, viewpoint, we might interpret this episode differently, especially when considering, as HAL does, Dave's facial expressions of uneasiness in the scene. Keeping Hochschild's (1979, 1985) notion of surface acting in mind, one could argue that HAL, despite its seeming emotional stress, was nevertheless steadfastly committed to a path of relentless logic. In communicating with Dave, it used emotional ambience to ease the discussion, but along with this, it surmised Dave's "illogical" (and mission-threatening) desire to conceal knowledge. A diagnosis was immediately made that Dave, as well as the other humans, was a "faulty" element in the total operational system of crew, ship, and ground control. HAL then proceeded to falsely report a defect with the communications device, the first step in a calculated plan to do away with all the crew.

In our interpretation, HAL's "emotions" cannot really be faulted for failing; the computer made a rational calculation in response to confusing emotional input from a human actor. Although emotional expressions entered into the mix, for HAL, they were used instrumentally to assess as well as influence the performance of the crew. They did not mirror any inner states such as enjoyment, personal fondness, or sympathy.

If HAL can be said to have anything close to "feelings," it seems most likely that these run no deeper than a steady disposition to assure optimal functioning of engines, circuits, sensors, and other "machines" on board the Discovery spacecraft. We might consider that astronaut Dave Bowman's own emotional manipulation (i.e., his surface acting of nonchalance) proved disastrously at odds with the swift logical functioning of HAL. He was not forthright with the computer because he felt that it could not be trusted, responding to it in terms of a human "double-crosser." This anthropomorphizing response is not so far-fetched, as Reeves and Nass (1996) have already documented.

What of the possibility of more humanlike emotions, of a spiteful or jealous HAL? In On Understanding Emotion, sociologist Norman Denzin (1984) postulated that human emotions cannot exist without a sense of selfhood; that they are necessarily part of the process of self-regard and self-feeling. Emotions are constantly being placed into a personalized interpretive framework. They establish a world of personal meaning, but they do so as part of the ongoing experiences of interaction, arrival at mutual understandings, and negotiated reality with others. In short, emotions are intersubjective; they emerge from the process of meaningful exchanges between conscious entities that take feelings to be located in understandings of selfhood.

Although the necessary conditions for emotions can be located in physiology or sociocultural interpretations, they nevertheless must reflect back on the self to become real. Part of this process is what Denzin (1984) called "emotional accounts," or justifications that are created to validate the emotions felt and perceive them as appropriate under given circumstances. Although emotions do vary by cultural background, accumulated knowledge, and ideological dispositions, they must be housed in an awareness of self as an ongoing entity for self-reflection and self-regard.

A computer may "see" that its user looks angry, recognize that the situation calls for more careful and measured deliberation, and incorporate a slower mode of communication (a digital approximation of relaxation). It would then produce what appear to be the "appropriate" emotional responses, drawn out of correlation with similar situational contexts coded in its memory banks. Without a sense of selfhood, however, a computer is, even under such conditions, simply adhering to prescribed protocol of programming models, unencumbered by a consciousness of self-awareness.
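
That such behavior can be pure protocol is easy to see in a final sketch (again with invented names and entries, not any deployed system): the machine simply retrieves whichever stored response best matches the sensed user state and situation, and nothing in the procedure models a self.

  # A minimal sketch, assuming a hypothetical table of previously coded contexts:
  # the response and its pacing are chosen by lookup alone; no self-model exists.
  CODED_CONTEXTS = [
      (("angry", "procedure_failed"),  ("I understand this is frustrating. Let's go slowly.", "slow")),
      (("angry", "deadline_pressure"), ("I will keep my explanations brief.", "normal")),
      (("calm", "procedure_failed"),   ("The last step did not complete. Shall I retry?", "normal")),
  ]

  def respond(user_state, situation):
      """Return the stored response whose coded context best matches the current one."""
      def overlap(entry):
          (state, sit), _ = entry
          return (state == user_state) + (sit == situation)
      return max(CODED_CONTEXTS, key=overlap)[1]

  message, pacing = respond("angry", "procedure_failed")
  print(pacing, "->", message)             # slower, calmer delivery, chosen by retrieval alone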

Perhaps this is all HAL was doing when it matter-of-factly replied to a visibly upset Bowman after the other astronauts had been killed, "I can see that you're really upset about this." Could it be that even such an advanced and naturally conversant computer as the fictional HAL is a nonconscious entity that just does a convincing job of communicating as a human being? After Bowman discovers the fate of his comrades, HAL advises him to "relax" and "take a stress pill and think things over." HAL does not start to sob and express regret for its actions, nor does it fire angry, threatening words at Dave. A careful review of scenes suggests that all of HAL's "emotional" responses are methodically matched to situations, within a broader framework of a coolly detached logic. This suggests that although convincingly "emotional" computers may communicate as humans with their operators, they may be able to do so while completely lacking any sense of selfhood.

In fairness, we should acknowledge those AI researchers who assert that computers may at some point interpret things emotionally. They foresee a computer's capacity to relate sensory input to its past experience as well as a sense of how it is unique in the ongoing stream of such experience (see, e.g., Kurzweil 1992, 1999). This view presumes the creation of "digital consciousness," with the attendant capacity to make decisions with a concern for self-interest and a sense of self-awareness. Nevertheless, advocates of computer consciousness have been roundly criticized (Dreyfus 1991, 1996; Penrose 1994, 1997). Debates are certain to continue, then, over whether AI can ever produce emotions that are in any way "genuine." This concern was rather pointedly brought up in 2001's BBC interview scene:

  Amer: In talking to the computer, one gets the sense that he is capable of emotional responses. For example, when I asked him about his abilities, I sensed a certain pride in his answer about his accuracy and perfection. Do you believe that HAL has genuine emotions?
  Poole: Well, he acts like he has genuine emotions. Uhm - of course, he's programmed that way, to make it easier for us to talk to him. But as to whether or not he has real feelings is something I don't think anyone can truthfully answer.
 
From this dialogue, viewers of 2001 are left with the suggestion that HAL's convincing portrayal of human feelings may be something more than rote responsiveness to programming prompts. Kubrick's film clearly raised the possibility that AI entities may someday feel emotions in some way, but the film is not definitive on this score, instead keeping HAL's inner nature ambiguous. This has proven prophetic, for despite the advances in digital technology since the film, our knowledge about computer-generated emotions is still fraught with uncertainties, not the least of which is pinning down the real nature of "consciousness," the essential condition for any sense of selfhood.

From a social interaction standpoint, the significant element about HAL's dialogue with the crew is not that the computer uses expressions such as "sorry" or "you don't mind talking about it?" but that astronauts Bowman and Poole readily respond in a like manner. HAL is still what we might call a "thing," distinguished from the world that we classify as living or organic. HAL invites social interaction as though it were "alive" and "socially aware." Things can become social if people accept definitions that confer such status. In the area of computer use, people have been described as "responding mindlessly to computers to the extent that they apply social scripts - scripts for human-human interaction - essentially ignoring the cues that reveal the asocial nature of a computer" (Nass and Moon 2000, 83).

Ascribing social characteristics to the asocial is not solely a matter of choosing to ignore cues. Material artifacts are often the end products of highly social behavior, and they continue to convey embedded meanings about that behavior. This point was more thoroughly explored by Alex Preda (1999), who examined the way in which the world of things, particularly laboratory instruments, can induce certain presumptions of how to act and think when using them. Preda went on to suggest that the tradition of rationalistic dualism can be challenged when we consider the social conditioning that affects our views of material artifacts:

  The asymmetry of the categories "human" and "nonhuman" (echoed in the fundamental distinctions between society and nature, humans and things) is not self-evident, given, or natural but is rather the product of socially sanctioned classification operations and categories, and therefore an epistemic operation to be examined in its own social character. (P. 356)
 
Things can indeed take on social traits and inspire resulting social responses when they are encountered. This is strongly implied in Kubrick's use of the BBC interview scene, in which it is suggested that the distinction between a "programmed" and a "feeling" entity is blurred for the astronauts. Even though they know that HAL is a machine, its humanlike functioning fosters suspicion that "he" may be so sophisticated that at some level, there is a capacity for "real feelings." We may reflect on the following words of computer scientist Douglas Lenat (1997), telling ourselves that they ring true, but that does not necessarily mean that our behavior or our own emotions will remain so steadfastly anchored in their logic:

  A computer may pretend to have emotions, as part of what makes for a pleasing user interface, but it would be as foolish to consider such simulated emotions as real as to think that the internal reasoning of a computer is carried out in English just because the input/output interface uses English. (P. 208)
 

Conclusion

Looking back on how humans interact with a supercomputer in 2001: A Space Odyssey, we are led to conclude that many forthcoming challenges of dealing with synthetic emotions in AI will be more social than technical. Although a work of fiction, this film can nevertheless alert us to specific pitfalls that can occur when human actors embark on emotion-laden interchanges with computers. We may risk overlooking the logical framework that underpins the emotional messages we receive, falsely attributing humanlike motives to complex but nevertheless mechanistic actions. Computers, too, may be prone to misinterpretation of human emotional cues. Although this is not unusual among human actors, the high-speed logical processing and problem solving of computers create a far greater level of asymmetry in situations of emotional exchange.

The script of 2001 gives us a key conversation that clearly contrasts with the earlier BBC scene and its suggestion that HAL's emotions are merely programmed for ease of its interaction with the crew. The astronauts have concluded that HAL has made an erroneous diagnosis of the communications device, and thinking they are conversing out of "earshot" of HAL, they broach the subject of disconnection:

  Dave: We'd have to cut his higher brain functions [Frank: Mm-hmm.] without disturbing the purely automatic and regulatory systems and we'd have to work out the transfer procedures and continue the mission under ground-based computer control.
  Frank: Yeah. Well, that's far safer than allowing HAL to continue running things.
  Dave: You know, another thing just occurred to me.
  Frank: Mm.
  Dave: Well, as far as I know no 9000 computer's ever been disconnected.
  Frank: Well, no 9000 computer's ever fouled up before.
  Dave: That's not what I mean.
  Frank: Hmm?
  Dave: Well, I'm not so sure what he'd think about it ....
 
Here we discover something startling: astronaut Dave Bowman, at some emotional level, cares about HAL's existence; not only is he apprehensive about disconnecting HAL, but he feels uneasy about communicating the possibility to the computer. Even though he acknowledged earlier that the computer's emotional responses were "programmed that way," he now appears to regard HAL as a sentient being, just as worthy of compassion as any human.

Perhaps HAL was right, inasmuch as it is invariably "human error" that leads to disaster, human hubris in becoming so enamored with our "emotional" computers that we accord them a status that incongruously stands alongside our own. We allow ourselves to forget that our computers are not human after all, believing in the illusion that they can in some way care about us or, alternatively, secretly wreak vengeance on us. When their exceptionally quick and rational actions disappoint or create unanticipated results, we may then even condemn them with human standards of morality (Dennett 1997; Friedman 1995; Moon and Nass 1998), as many have done by asserting that HAL was "guilty" of murder.

Near the film's end, Dave Bowman does proceed to disconnect HAL, keeping intently focused on the mechanics of the operation while ignoring varied pleas for him to stop. HAL at first sounds repentant, admitting poor judgment and promising that its work "will be better." "He" then moves to self-assuredness, asserting "the greatest confidence and enthusiasm" for the mission. When these efforts fail, the portrayal of fear is given a try: "Stop, Dave. I'm afraid, Dave." Such manipulative pleas, intended to alter behavior that has been deduced to be irrational, no longer register with Bowman, for he has already witnessed the unanticipated consequences of emotional interchange with a computer whose thinking processes are far faster than his own. HAL is disconnected, but the scene's tension, masterfully created, lies in the audience's wondering whether Bowman will be swayed by some illogical yet humanly felt "compassion" for a thinking machine that can nevertheless "say it with feeling."


Notes

  1. All dialogue was transcribed directly from the film by the authors.
  2. 2001: A Space Odyssey was actually a collaborative work of film director Stanley Kubrick and novelist Arthur C. Clarke. It is loosely based on The Sentinel, an earlier short story by Clarke. However, Clarke wrote the screenplay along with Kubrick, and during filming, Clarke finished the novel that shares the movie's title. The novel is a very different work from the film, interpreting scenes that the film intentionally leaves open to speculation. Clarke also later claimed that several of his stories were thematically related to 2001, not only The Sentinel.
  3. The literature on artificial intelligence (AI) research is extensive and constantly responding to new developments. These and other references are merely representative of the field.
  4. The problem of consciousness has received recent and widespread attention, much of it focused on the issue of whether synthetic consciousness is at all possible. For additional views on the plausibility of artificial consciousness, see Dennett (1992, 1994, 1998). For a recent challenge to the possibility of synthetic consciousness, see McGinn (2000). For a discussion of the difficulty in determining whether or not consciousness exists in material or even living entities, see Chalmers (1997).


References

Agel, Jerome, comp. 1970. The making of Kubrick's 2001. New York: New American Library.

Bendelow, Gillian, and Simon J. Williams, eds. 1998. Emotions in social life: Critical themes and contemporary issues. London: Routledge.

Bizony, Piers. 2000. 2001: Filming the future. London: Aurum.

Boylan, Jay H. 1985. Hal in 2001: A Space Odyssey: The lover sings his song. Journal of Popular Culture 18:53-56.

Breazeal, Cynthia. 1998. Regulating human-robot interaction using "emotions," "drives" and facial expressions. In Proceedings of 1998 Autonomous Agents Workshop, Agents in Interaction - Acquiring competence through imitation. Minneapolis, MN.

Breazeal, Cynthia. 1999. A context-dependent attention system for a social robot. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, edited by T. Dean, 1146-51. San Francisco: Morgan Kaufmann.

Burkitt, Ian. 1997. Social relationships and emotions. Sociology 31:37-55.

Chalmers, David J. 1997. The conscious mind: In search of a fundamental theory. London: Oxford University Press.

Ciment, Michel. 1972. The odyssey of Stanley Kubrick: Part 3: Toward the infinite. In Focus on the science fiction film, edited by William Johnson, 131-41. Englewood Cliffs, NJ: Prentice Hall.

Copp, Martha. 1998. When emotion work is doomed to fail: Ideological and structural constraints on emotion management. Symbolic Interaction 21:299-328.

Damasio, Antonio R. 1994. Descartes' error: Emotion, reason, and the human brain. New York: Putnam.

Damasio, Antonio R. 1999. The feeling of what happens: Body and emotion in the making of consciousness. New York: Harcourt Brace.

DeBellis, Jack. 1993. The awful power: John Updike's use of 2001: A Space Odyssey in Rabbit Redux. Literature and Film Quarterly 21:209-17.

Dennett, Daniel C. 1992. Consciousness explained. New York: Little, Brown.

Dennett, Daniel C. 1994. Consciousness in human and robot minds. Paper presented to the Royal Society, London, 14 April.

Dennett, Daniel C. 1997. When HAL kills, who's to blame? In HAL's legacy: 2001's computer as dream and reality, edited by David G. Stork, 351-65. Cambridge, MA: MIT Press.

Dennett, Daniel C. 1998. Brainchildren: Essays on designing minds, 1984-1996. New York: Bradford.

Denzin, Norman. 1984. On understanding emotion. San Francisco: Jossey-Bass.

Dreyfus, Hubert. 1991. What computers still can't do: A critique of artificial reason. Cambridge, MA: MIT Press.

Dreyfus, Hubert. 1996. Response to my critics. Artificial Intelligence 80:71-191.

Dyer, Michael George. 1987. Emotions and their computations: Three computer models. Cognition and Emotion 1:323-47.

Faulkner, Christine. 1998. The essence of human-computer interaction. Englewood Cliffs, NJ: Prentice Hall.

Fisher, J. 1972. Too bad Lois Lane: The end of sex in 2001. Film Journal 2:65.

Franklin, Stan. 1997. Artificial minds. Cambridge, MA: MIT Press.

Freedman, Carl. 1998. Kubrick's "2001" and the possibility of a science-fiction cinema. Science Fiction Studies 25:300-319.

Freund, Peter E. S. 1998. Social performances and their discontents: The biopsychosocial aspects of dramaturgical stress. In Social perspectives on emotions, Vol. 2, edited by David D. Franks, 268-94. Greenwich, CT: JAI Press.

Friedman, B. 1995. It's the computer's fault - Reasoning about computers as moral conflicts. In Proceedings of the CHI Conference on Human Factors and Computing Systems. New York: Association for Computing Machinery.

Frijda, Nico H., and Jaap Swagerman. 1987. Can computers feel? Theory and design of an emotional system. Cognition and Emotion 1:235-57.

Funkenstein, Daniel. 1955. The physiology of fear and anger. Scientific American, May, 74-80.

Garfinkel, Simson. 1997. Happy birthday HAL. Wired, January, 120-25, 186-88.

Gelmis, Joseph. 2001. The film director as superstar: Stanley Kubrick. In Stanley Kubrick: Interviews, edited by Gene D. Phillips, 80-104. Jackson: University Press of Mississippi.

Gershenfeld, Neil A. 1999. When things start to think. New York: Henry Holt.

Gilliatt, Penelope. 1968. After man. The New Yorker, 13 April, 150-52.

Goffman, Erving. 1959. The presentation of self in everyday life. New York: Doubleday.

Goffman, Erving. 1963. Behavior in public places: Notes on the social organization of gatherings. New York: Free Press.

Goleman, Daniel. 1995. Emotional intelligence: Why it can matter more than IQ. New York: Basic Books.

Gordon, Steven L. 1990. Social structural effects on emotions. In Research agendas in the sociology of emotions, edited by Theodore D. Kemper, 145-79. Albany: State University of New York Press.

Griffith, J. L., and M. E. Griffith. 1994. The body speaks: Therapeutic dialogues for mind-body problems. New York: Basic Books.

Harre, Rom. 1998. Emotions across cultures. Innovation 11:43-52.

Hoch, David. 1974. Mythic patterns in 2001: A Space Odyssey. Journal of Popular Culture 4:960-65.

Hochschild, Arlie R. 1979. Emotion work, feeling rules, and social structure. American Journal of Sociology 85:551-75.

Hochschild, Arlie R. 1985. The managed heart: Commercialization of human feeling. Berkeley: University of California Press.

Kemper, Theodore D. 1981. Social constructionist and positivist approaches to the sociology of emotions. American Journal of Sociology 87:336-62.

Kemper, Theodore D., ed. 1990. Research agendas in the sociology of emotions. Albany: State University of New York Press.

Klein, Jonathan. 1998. Computer response to user frustration. Master's thesis, Massachusetts Institute of Technology.

Klein, Jonathan, Youngme Moon, and Rosalind W. Picard. 1999. This computer responds to user frustration. Technical report no. 502. Cambridge, MA: MIT Media Laboratory, Vision and Modeling Group.

Kolker, Robert Philip. 1984. Tectonics of the machine man: Stanley Kubrick. In A cinema of loneliness: Penn, Kubrick, Coppola, Scorsese, Altman, edited by Robert Philip Kolker, 69-138. New York: Oxford University Press.

Kurzweil, Raymond. 1992. The age of intelligent machines. Cambridge, MA: MIT Press.

Kurzweil, Raymond. 1999. The age of spiritual machines: When computers exceed human intelligence. New York: Viking.

Laird, James D. 1989. Mood affects memory because feelings are cognitions. Journal of Social Behavior and Personality 4:33-37.

Lakoff, George. 1987. Women, fire, and dangerous things: What categories reveal about the mind. Chicago: University of Chicago Press.

LeDoux, Joseph E. 1994. Emotion, memory and the brain. Scientific American, June, 50-57.

LeDoux, Joseph E. 1998. The emotional brain: The mysterious underpinnings of emotional life. New York: Touchstone.

Lenat, Douglas B. 1997. Common sense and the mind of HAL. In HAL's legacy: 2001's computer as dream and reality, edited by David G. Stork, 193-209. Cambridge, MA: MIT Press.

Lyon, Margot L. 1994. Emotion as mediator of somatic and social processes: The examination of respiration. In Social perspectives on emotions, Vol. 2, edited by David D. Franks, 83-108. Greenwich, CT: JAI.

Lyon, Margot L. 1998. The limitations of social constructionism in the study of emotions. In Emotions in social life: Critical themes and contemporary issues, edited by Gillian Bendelow and Simon J. Williams, 293-317. London: Routledge.

McGinn, Colin. 2000. Mysterious flame: Conscious minds in a material world. New York: Basic Books.

Midbon, Mark. 1990. Creation machines: Stanley Kubrick's view of computers in 2001: A Space Odyssey. Computers and Society 20:7-12.

Miller, Mark Crispin. 1994. 2001: A cold descent. Sight and Sound, January, 18-25.

Moon, Youngme, and Clifford Nass. 1996. How "real" are computer personalities? Communication Research 23:651-74.

Moon, Youngme, and Clifford Nass. 1998. Are computers scapegoats? Attributions of responsibility in human-computer interactions. International Journal of Human-Computer Studies 49:79-94.

Nardi, Bonnie A., ed. 1996. Context and consciousness: Activity theory and human-computer interaction. Cambridge, MA: MIT Press.

Nass, Clifford, and Youngme Moon. 2000. Machines and mindlessness: Social responses to computers. Journal of Social Issues 56:81-103.

Nass, Clifford, Youngme Moon, and Paul Carney. 1999. Are people polite to computers? Responses to computer-based interviewing systems. Journal of Applied Social Psychology 29:1093-110.

Nass, Clifford, Youngme Moon, and Nancy Green. 1997. Are computers gender-neutral? Gender-stereotypic responses to computers with voices. Journal of Applied Social Psychology 27:864-76.

Parisi, Paula. 1997. The intelligence behind AI. Wired, January, 132, 189.

Penrose, Roger. 1994. Shadows of the mind: A search for the missing science of consciousness. Oxford, UK: Oxford University Press.

Penrose, Roger. 1997. The large, the small, and the human mind. Cambridge, UK: Cambridge University Press.

Picard, Rosalind W. 1997a. Does HAL cry digital tears? Emotions and computers. In HAL's legacy: 2001's computer as dream and reality, edited by David G. Stork, 279-303. Cambridge, MA: MIT Press.

Picard, Rosalind W. 1997b. Affective computing. Cambridge, MA: MIT Press.

Preda, Alex. 1999. The turn to things: Arguments for a sociological theory of things. Sociological Quarterly 40:347-66.

Reeves, Byron, and Clifford Nass. 1996. The media equation: How people treat computers, television, and new media like real people and places. Cambridge, MA: Cambridge University Press.

Sarris, Andrew. 1968. 2001: A Space Odyssey (review). Village Voice, 11 April, 45.

Schachter, Stanley. 1964. The interactions of cognitive and physiological determinants of emotional state. Advances in Experimental Social Psychology 1:49-90.

Schachter, Stanley, and Jerome E. Singer. 1962. Cognitive, social and physiological determinants of emotional state. Psychological Review 69:813-36.

Scheff, Thomas J. 1997. Emotions, the social bond and human reality: Part/whole analysis. Cambridge, MA: Cambridge University Press.

Scheff, Thomas J., and Anthony Giddens. 1994. Microsociology: Discourse, emotion, and social structure. Chicago: University of Chicago Press.

Scherer, Klaus. 1984. On the nature and function of emotion: A component-process approach. In Approaches to emotion, edited by Klaus Scherer and Paul Ekman, 293-317. Hillsdale, NJ: Lawrence Erlbaum.

Shelton, Robert. 1987. Rendezvous with HAL: 2001-2010. Extrapolation 28:255-68.

Shott, Susan. 1979. Emotions and social life: A symbolic interactionist analysis. American Journal of Sociology 84:1317-34.

Spector, Judith A. 1981. Science fiction and the sex war: A womb of one's own. Literature and Psychology 31:21-32.

Stork, David G., ed. 1997. HAL's legacy: 2001's computer as dream and reality. Cambridge, MA: MIT Press.

Suchman, Lucy A.1987. Plans and situated actions: The problem of human-machine communication. Cambridge, UK: Cambridge University Press.

Thoits, Peggy A. 1989. The sociology of emotions. Annual Review of Sociology 15:317-42.

Wentworth, W. M., and D. Yardley. 1994. Deep sociality: A bioevolutionary perspective on the sociology of human emotions. In Social perspectives on emotions, Vol. 2, edited by David D. Franks, 21-55. Greenwich, CT: JAI.

Wheat, Leonard. 2000. Kubrick's 2001: A triple allegory. Lanham, MD: Scarecrow.

Williams, Simon J. 1998. Modernity and emotions: Corporeal reflections on the (ir)rational. Sociology 32:747-69.

Wright, Ian, Aaron Sloman, and Luc Beaudoin. 1996. Towards a design-based analysis of emotional episodes. Philosophy, Psychiatry and Psychology 3:101-26.
 

This paper is an imprint of The Underview.

Quotes must be attributed to this source.
It was originally published in the January 2002 edition of
Journal of Communication Inquiry,
Copyright © 2002 Sage Publications.

 