‘2001: A Space Odyssey’ turns 50: Why HAL endures

Even after five decades of technological advancement, the murderous artificial intelligence in Stanley Kubrick’s philosophical sci-fi film remains the definitive metaphor for technology’s dark side.

Astronaut David Bowman (Keir Dullea) peers through his space helmet as he shuts down the malevolent HAL 9000 computer in Stanley Kubrick's 1968 film, '2001: A Space Odyssey.' (Turner Entertainment/AP)

“I’m sorry, Dave. I’m afraid I can’t do that.”

With those nine words, HAL 9000, the sentient computer controlling the Jupiter-bound Discovery One, did more than just reveal his murderous intentions. He intoned a mantra for the digital age.

In the 50 years since the US premiere of Stanley Kubrick’s “2001: A Space Odyssey,” virtually everyone who has used a computer has experienced countless HAL moments: “an unexpected error has occurred,” goes the standard digital non-apology. The machine whose sole purpose is to execute instructions has chosen, for reasons that are as obscure as they are unalterable, to do the opposite.

There’s something about HAL’s bland implacability that makes him such an enduring symbol of modernity gone awry, and such a fitting vessel for our collective anxiety about an eventual evolutionary showdown against our own creations.

“HAL is the perfect villain, essentially...,” says John Trafton, a lecturer in film studies at Seattle University who has taught a course on Stanley Kubrick through the Seattle International Film Festival. “He’s absolutely nothing except for a glowing eye.... Essentially we’re just projecting our own fears and emotions onto HAL.”

HAL’s actual screen time is scant, beginning an hour into the nearly three-hour film and ending less than an hour later. And yet, during that interlude, his personality eclipses those of the film’s humans, whom Roger Ebert described in his 1968 review as “lifelike but without emotion, like figures in a wax museum.”

While the film’s human characters joylessly follow their regimens of meals, meetings, exercise routines, and birthday greetings, we see HAL, whose name stands for “Heuristically programmed ALgorithmic computer,” expressing petulance, indecisiveness, apprehension, and, at the end, remorse and dread.

It’s this blending of human emotionality with mathematical inflexibility that some experts find troubling. Human biases have a way of creeping into code for mass-produced products, giving us automatic soap dispensers that ignore dark skin, digital cameras that confuse East Asian eyes with blinking, surname input fields that reject apostrophes and hyphens, and no shortage of other small indignities that try to nudge us, however futilely, into the drab social homogeneity of Kubrick’s imagined future.
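To see how easily such a bias creeps in, consider a minimal, hypothetical sketch in Python – not code from any product mentioned here – of a surname field that quietly rejects real names:

```python
import re

# A naive "letters only" validator of the kind the article describes.
# Hypothetical code, but representative of real-world name fields.
NAIVE_NAME = re.compile(r"^[A-Za-z]+$")

# A more permissive pattern: runs of Unicode letters, optionally
# joined by apostrophes, hyphens, or spaces.
PERMISSIVE_NAME = re.compile(r"^[^\W\d_]+(?:[-' ][^\W\d_]+)*$")

for surname in ["Smith", "O'Brien", "Jean-Baptiste", "Nguyễn"]:
    print(f"{surname}: naive={bool(NAIVE_NAME.match(surname))}, "
          f"permissive={bool(PERMISSIVE_NAME.match(surname))}")
```

The naive pattern passes only “Smith”; the other three names – all perfectly ordinary – bounce off it, which is exactly the kind of small indignity described above.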

“One of the things that makes HAL a really enduring character is he faces us with that kind of archetypal technological problem, which is that it’s a mirror of our own biases and predilections and things that we are maybe not conscious of,” says Alan Lazer, who teaches courses including “The Films of Stanley Kubrick” at the University of Alabama in Tuscaloosa.

Moral machines?

Machine learning – a programming method in which software can progressively improve itself through pattern recognition – is being used in more and more walks of life. For many Americans, artificial intelligence is shaping how our communities are policed, how we choose a college and whether we get admitted, and whether we can get a job and whether we keep it.
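For readers who want that definition made concrete, here is a toy sketch – invented data, deliberately tiny – of the loop it describes: a model repeatedly adjusting itself so its predictions better match the pattern in its examples.

```python
# Toy "machine learning": fit y ≈ w * x by gradient descent.
# The data points and learning rate below are invented for illustration.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) pairs

w = 0.0  # the model's single adjustable parameter
for step in range(200):
    # Gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad  # nudge w downhill; this is the "learning"

print(f"learned w = {w:.2f}")  # settles near 2.0, the pattern in the data
```

Real systems juggle millions of parameters instead of one, but the principle – improvement by iterated correction rather than explicit instruction – is the same.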

Catherine Stinson, a postdoctoral fellow at the University of Western Ontario who specializes in philosophy of science, cautions that the software engineers who are writing the algorithms governing more and more socially sensitive institutions lack training in ethics.

“Everybody thinks that they are an expert in ethics. We all think that we can tell right from wrong, that if presented with a situation we’ll just know what to do,” says Dr. Stinson. “It’s hard for people to realize that there are actually experts in this and there is space for expertise.”

In an op-ed in The Globe and Mail published last week, Dr. Stinson echoed Mary Shelley’s warning in “Frankenstein,” a novel that turned 200 this year, of what happens when scientists attempt to exempt themselves from the moral outcomes of their creations.

She points out that MIT and Stanford are launching ethics courses for their computer science majors, and that the University of Toronto has long had such a program in place.

Other groups of computer scientists are trying to crowdsource their algorithms’ ethics. MIT’s Moral Machine project, for example, will help determine whose lives – women, children, doctors, athletes, business executives, large people, jaywalkers, dogs – should be prioritized in the risk-management algorithms for self-driving cars.
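Mechanically, crowdsourcing ethics can be as simple as turning votes into weights. The sketch below is a deliberately simplified, hypothetical aggregation – not the Moral Machine’s actual method – of forced-choice survey responses:

```python
from collections import Counter

# Hypothetical responses: each vote names the group a respondent chose
# to spare in a forced-choice dilemma (the Moral Machine's format).
votes = ["child", "child", "doctor", "jaywalker",
         "child", "doctor", "dog", "child"]

tallies = Counter(votes)
total = sum(tallies.values())

# Convert raw tallies into priority weights a planner could consume.
weights = {group: count / total for group, count in tallies.items()}
print(weights)  # e.g. {'child': 0.5, 'doctor': 0.25, ...}
```

Whether a share of votes should translate directly into a share of moral priority is, of course, precisely the question the next paragraphs take up.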

But those who crowdsource their ethics risk ignoring the work of professional moral theorists. Stinson notes that many computer scientists have an implicit orientation toward utilitarianism, an ethical theory that aims to maximize happiness for the greatest number by adding up each action’s costs and benefits.
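Part of utilitarianism’s appeal to programmers is that it reduces to arithmetic. Here is a minimal sketch, with invented actions, probabilities, and utilities, of the cost-benefit calculus just described:

```python
# The utilitarian recipe in code: score each candidate action by the
# probability-weighted sum of its costs and benefits, pick the maximum.
# All actions and numbers here are invented for illustration.
actions = {
    "swerve_left":  [(0.9, -10.0), (0.1, -100.0)],  # (probability, utility)
    "swerve_right": [(0.5, -40.0), (0.5, -40.0)],
    "brake":        [(1.0, -30.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # -> swerve_left -19.0
```

The critics quoted next object to exactly what this arithmetic leaves out: attachments, rights, and the people on the losing side of the sum.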

Utilitarianism enjoys support in American philosophy departments, but that support is far from unanimous. Critics charge that such an approach discounts basic social and familial attachments and that it permits inhumane treatment in the pursuit of the greatest good.

Ordinary people tend to hold a mix of utilitarian and non-utilitarian views. For instance, most survey participants say that self-driving cars should be programmed to minimize fatalities. But when asked what kind of self-driving car they’d be willing to buy, most people say they would want one that prioritizes the lives of the vehicle’s occupants over all else.

Either way, there’s something undeniably creepy about dealing with an autonomous machine that reduces your personal worth and dignity to code. “We can’t use our human wiles on them,” says Stinson.

The disquiet that HAL evokes, says Matthew Flisfeder, a professor in the University of Winnipeg’s department of rhetoric, writing, and communications, is the same unease we feel when our social choices are determined by the impersonal forces of the market.

“There’s this constant goal,” says Dr. Flisfeder, “to try to be efficient and objective and rational, and when we see that presented to us back in the form of the dryness of a machine like HAL, we started to realize the instrumentality in that and how it’s actually very dehumanizing.”

Predicting technology’s triumph over humanity was not, however, Kubrick’s aim. HAL is ultimately defeated, in one of cinema’s most poignant death scenes, and Dave moves on to the film’s – and humanity’s – next chapter.

“Essentially you have a film of this fear of artificial intelligence making humans obsolete,” says Trafton, the Seattle University lecturer. “Yet what does the movie end with? It ends with a Star Child. It ends with human beings recycling back.”
