Science Fiction and Computing
The image of the mechanical brain or “knowledge engine” has a
surprisingly long history in Western literature. As far back as Jonathan
Swift’s Gulliver’s Travels (1726), we find a gigantic engine that can
create books on every conceivable subject. While this was a satirical jab at thinkers who were ushering in a rational, mechanistic cosmos, it did not seem so far-fetched that the cunning mechanical automatons being created for the amusement of princes might someday think.
This belief would be strengthened in the coming two centuries by the
triumph of the Industrial Revolution. In Jules Verne’s Paris in the
Twentieth Century (written in 1863), giant calculating machines and
facsimile transmissions were used to coordinate business activities. As
early as the beginning of the 20th century, writers were exploring what might happen if some combination of artificial brains and robots could cater to all human needs. In E. M. Forster’s “The Machine Stops,” published in 1909, people no longer have to leave their insectlike cells: even their social needs are met through machine-mediated communication not unlike today’s Internet. In the 1930s and 1940s, other writers such as John W. Campbell
and Jack Williamson wrote stories in which a worldwide artificial
intelligence became the end point of evolution, with humans either
becoming extinct or living static, pointless lives.
Science
fiction writers had also been considering the ramifications of a related
technology, robotics. The term robot came from Karel Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots). Although the robot had a human face, it could harbor inhuman motives, threatening to displace humans and become Earth’s new master. Isaac Asimov offered a more benign vision, thanks to
the “laws of robotics” embedded in his machines’ very circuitry. The First Law states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” In the real world, of
course, artificial intelligence had no such built-in restrictions (see
artificial intelligence).
Science fiction of the “Golden Age” of
the pulp magazines had only limited impact on popular culture as a
whole. Once actual computers arrived on the scene, however, they became
the subject for movies as well as novels. D. F. Jones’s novel Colossus (1966), filmed as Colossus: The Forbin Project in 1970, combined cold
war anxiety with fear of artificial intelligence. Joining forces with
its Soviet counterpart, Colossus fulfills its orders to prevent war by
taking over and instituting a world government. Similarly, HAL in the film 2001: A Space Odyssey (based on the work of Arthur C. Clarke) puts its own instinct for self-preservation ahead of the frantic commands of the spaceship’s crew. The artificial can also strive to be human, however, as in A.I. Artificial Intelligence (2001).
During the 1940s and 1950s, science fiction’s computers tended to be larger, more powerful versions of existing mainframes, sometimes aspiring to godlike status. However, in Murray Leinster’s short story “A Logic Named Joe” (1946), a “Logic” is found in
every home, complete with keyboard and television screen. All the Logics
are connected to a huge relay circuit called the Tank, and the user can
obtain everything from TV broadcasts to weather forecasts or even the
answers to history trivia questions. Although the Logic is essentially an electromechanical system, its functionality is startlingly similar to that of the Internet almost half a century later.
Writers such as William Gibson (Neuromancer) and Vernor Vinge (True Names) later began to explore cyberspace, the world mutually experienced by computer users, as a setting where humans could directly link their minds to computer-generated worlds (see virtual reality). A new elite of cyberspace masters was portrayed as a futuristic adaptation of such archetypes as the cowboy gunslinger, the samurai, and the ninja. Unlike the
morally unambiguous world of the old western movies, however, the novels
and movies with the new “cyberpunk” sensibility are generally set in a
jumbled, fragmented, chaotic world. That world is often dominated by
giant corporations (reflecting concerns about economic globalism) and is
generally dystopian. Meanwhile, as cyberspace continues to become reality, cyberpunk has lost its distinctiveness as a genre. Gibson’s latest work (and that of other writers such as Bruce Sterling and Vernor Vinge) is more apt to explore ways of communicating and networking that belong to the day after tomorrow, if they are not already appearing (particularly among young people) today.