If you had a big enough construction set with
enough wheels, gears, and other bits and bobs, and a limitless supply of electronic components, could you bolt together a living, breathing, walking, talking robot as good as a human in every way?
That might sound like one question, but it's really several.
First, there's the matter of whether it's technically
possible to build a robot that compares with a human. But there's
also a much bigger question of why you'd want to do that and
whether it's even a useful thing to do. When humans can reproduce
so easily, why do we want to create clunky mechanical replicas of
ourselves? And if there really is a good reason for doing so, what's
the best way to go about it? In this article, we'll be taking
a detailed look at what robots are, how they're designed, and some
of the things they can do for us.
Photo: Our friends electric. Will robots replace
people in the future? Or will people and machines merge into flesh-machine hybrids
that combine the best of both worlds?
For all we know, Octavia (pictured here) is pondering these questions right now. She's an advanced social robot who can scuttle around on her wheels, pick up
objects, and pull a variety of emotional faces. Her biggest challenge
so far has been helping to
put out fires.
Photo by John F. Williams courtesy of US Navy and Wikimedia Commons.
Close your eyes and think "robot." What picture leaps to mind?
Most likely a fictional creature like R2-D2 or C-3PO from Star Wars. Very likely a
humanoid—a humanlike robot with arms, legs, and a head,
probably painted metallic silver. Unless you happen to work in robotics, I doubt you pictured a
mechanical snake or a clockwork cockroach, a bomb disposal robot, or a
Roomba robot vacuum cleaner.
What you pictured, in other words, would have been
based more on science fiction than fact, more on imagination than
reality. Where the sci-fi robots we see in movies and TV shows tend to
be humanoids, the humdrum robots working away in the world around us
(things like robotic welder arms in car-assembly plants) are much more
functional, much less entertaining. For some reason, sci-fi writers have an obsession with
robots that are little more than flawed, tin-can, replacement humans.
Maybe that makes for a better story, but it doesn't really reflect
the current state of robot technology, with its emphasis on developing practical
robots that can work alongside humans.
How do you build a robot?
Photo: Is this a robot? It certainly looks like one, but it has no senses of any kind, no electronic or mechanical onboard computer for thinking, and its limbs have no motors or other means to move themselves. With no perception, cognition, or action, it cannot be a robot—even if it looks like a robot.
Photo by Thom Quine courtesy of Wikimedia Commons published under a
Creative Commons licence.
If robots like C-3PO really did exist, how would anyone ever have
developed them? What would it have taken to make a general-purpose robot
similar to a human?
It's easy enough to write entertaining stories about intelligent robots taking control of the planet, but just try
developing robots like that yourself and see how far you get. Where
would you even start? Actually, where any robot engineer starts, by
breaking that one big problem into smaller and more manageable
chunks. Essentially, there are three problems we need to solve: how
to make our robot 1) sense things (detect objects in the world), 2) think
about those things (in a more or less "intelligent" way, which is a
tricky problem we'll explore in a moment), and then 3) act on them (move
or otherwise physically respond to the things it detects and thinks
about).
In
psychology (the science of human behavior) and in robotics, these things
are called perception (sensing), cognition (thinking), and action
(moving). Some robots have only one or two. For example, robot
welding arms in factories are mostly about action (though they may
have sensors), while robot vacuum cleaners are mostly about
perception and action and have no cognition to speak of. As we'll see
in a moment, there's been a long and lively debate over whether robots really need cognition,
but most engineers would agree that a machine needs both perception and action to qualify as a robot.
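To make that sense-think-act idea concrete, here's a minimal sketch in Python (purely my own illustration, not code from any real robot) of how the three stages fit together in a control loop. The sensor reading, the decision rule, and the motor command are all hypothetical stand-ins.

```python
import random
import time

def sense():
    """Perception: read a (hypothetical) distance sensor, in centimeters."""
    return random.uniform(0, 100)   # stand-in for real hardware

def think(distance_cm):
    """Cognition: a trivially simple decision rule."""
    return "turn" if distance_cm < 20 else "forward"

def act(command):
    """Action: send the command to (imaginary) motors."""
    print(f"motor command: {command}")

if __name__ == "__main__":
    for _ in range(5):              # a real robot would loop forever
        act(think(sense()))
        time.sleep(0.1)
```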
Perception (sensing)
We experience the world through our five senses,
but what about robots? How do they get a feel for the things around
them?
Vision
Humans are seeing machines: estimates vary, but somewhere
between roughly 25 and 60 percent
of our cerebral cortex is thought to be devoted to processing
images from our eyes and building
them into a 3D visual model of the world. Now, machine vision in the
crudest sense is really quite simple: all you need to do to give a robot
eyes is to glue a couple of digital cameras to its head. But
machine perception—understanding what the camera sees (a
pattern of orange and black), what it represents (a tiger), what that
representation means (the possibility of being eaten), and how
relevant it is to you from one minute to the next (not at all,
because the tiger is locked inside a cage)—is almost infinitely harder.
Like other problems in robotics, tackling
perception as a theoretical issue ("how does a robot see and
perceive the world?") is much harder than approaching it as a
practical problem. So if you were designing something like a Roomba
vacuum cleaning robot, you could spend a good few years
agonizing over how to give it eyes that "see" a room and navigate
around the objects it contains. Or you could forget all about
something so involved as seeing and simply use a giant,
pressure-sensitive bumper. Let the robot scrabble around until the bumper hits
something, then apply the brakes and tell it to creep away in a
different direction.
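Here's roughly what that bump-and-turn strategy might look like in code. It's a toy sketch with made-up sensor and motor functions, not the software of any actual vacuum cleaner.

```python
import random

def bumper_pressed():
    """Stand-in for reading a pressure-sensitive bumper switch."""
    return random.random() < 0.1   # pretend we hit something 10% of the time

def drive(speed, turn=0.0):
    """Stand-in for sending speed/turn commands to the wheel motors."""
    print(f"drive: speed={speed:.1f}, turn={turn:.1f}")

def bump_and_turn(steps=20):
    for _ in range(steps):
        if bumper_pressed():
            drive(-0.2)                               # back off a little
            drive(0.0, turn=random.uniform(90, 180))  # spin to a new heading
        else:
            drive(0.5)                                # keep creeping forward

bump_and_turn()
```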
Photo: Look, no eyes! Spot, a quadruped robot built by Boston Dynamics, has a lidar (a kind of laser radar) where you'd expect its head to be (the small gray box at the front). Photo by Sgt. Eric Keenan courtesy of US Marine Corps.
Perception, in other words, doesn't have to mean
vision. And that's a very important lesson for ambitious projects
such as self-driving (robotic) cars. One way to build a self-driving
car would be to create a super-lifelike humanoid robot and stick it
in the driving seat of an ordinary car. It would drive in exactly the
same way as you or I might do: by looking out through the windshield
(with its digital camera eyes), interpreting what it sees, and
controlling the car in response with its hands and feet. But you
could also build a self-driving car an entirely different way without
anyone in the driving seat—and this is how most robotics engineers
have approached the problem. Instead of eyes, you'd use things like
GPS satellite navigation,
lidar,
sonar,
radar, and infrared detectors,
accelerometers—and any number of other sensors to build up a very
different kind of picture of where the car is, how it's proceeding in
relation to the road and other cars, and what you need to do next to
keep it safely in motion. Drivers see with their eyes; self-driving
cars see with their sensors. A driver's brain builds a moving 3D
model of the road; self-driving cars have computers, surfing a
flood of digital data quite unlike a human's mental model.
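As a tiny taste of what "building up a picture from sensors" can mean, here's a toy example (with invented numbers) that blends drifting wheel odometry with noisy position fixes using a simple complementary filter. Real self-driving cars use far more sophisticated techniques, such as Kalman filters, across many sensors at once.

```python
import random

ALPHA = 0.1          # how much we trust each new GPS fix (assumed value)
true_pos = 0.0       # meters along the road (simulation only)
estimate = 0.0
speed = 10.0         # m/s, from wheel odometry
dt = 0.1             # seconds per update

for step in range(50):
    true_pos += speed * dt
    # Dead reckoning: advance the estimate using wheel speed...
    estimate += speed * dt
    # ...then nudge it toward a noisy GPS fix (complementary filter).
    gps_fix = true_pos + random.gauss(0, 2.0)   # roughly +/- 2 m GPS noise
    estimate = (1 - ALPHA) * estimate + ALPHA * gps_fix

print(f"true position: {true_pos:.1f} m, fused estimate: {estimate:.1f} m")
```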
That doesn't mean there's no similarity at all.
It's quite easy to imagine a neural network
(a computer simulation of interconnected brain cells that can be trained
to recognize patterns) processing information from a self-driving car's sensors
so the vehicle can recognize situations like driving behind a learner,
spotting a looming emergency when children are playing ball by the side of
the road, and other danger signs that experienced drivers recognize automatically.
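To show what "training" means at its very simplest, this toy sketch (my own illustration, with invented data) trains a single artificial neuron, the basic building block of a neural network, to flag a hazard from two made-up sensor features; production systems use networks with millions of such units.

```python
import numpy as np

# Made-up training data: [closing speed (m/s), distance (m)] -> hazard? (1/0)
X = np.array([[8.0, 5.0], [1.0, 40.0], [6.0, 10.0], [0.5, 60.0],
              [9.0, 3.0], [2.0, 35.0]])
y = np.array([1, 0, 1, 0, 1, 0])

# Normalize the features so both carry similar weight.
X = (X - X.mean(axis=0)) / X.std(axis=0)

w = np.zeros(2)
b = 0.0
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# A few hundred steps of plain gradient descent on a single "neuron".
for _ in range(500):
    p = sigmoid(X @ w + b)            # predicted probability of hazard
    grad_w = X.T @ (p - y) / len(y)
    grad_b = (p - y).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

print("hazard probabilities:", np.round(sigmoid(X @ w + b), 2))
```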
Hearing
Just as seeing is a misnomer when it comes to
machine vision, so the other human senses (hearing, smell, taste, and
touch) don't have exact replicas in the world of robotics. Where a
person hears with their ears, a robot uses a
microphone to convert
sounds into electrical signals that can be digitally processed. It's
relatively straightforward to sample a sound signal, analyze the
frequencies it contains (for example, using a mathematical
descrambling trick called a Fourier transform), and compare the
frequency "fingerprint" with a list of stored patterns. If the
frequencies in your signal match the pattern of a human scream, it's
a scream you're hearing—even if you're a robot and a scream means
nothing to you.
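Here's a rough sketch of that frequency-fingerprint idea, using a Fourier transform to pull out the strongest frequencies in a sound and compare them with a stored pattern. The "scream" pattern and the test signal are both invented for illustration.

```python
import numpy as np

SAMPLE_RATE = 8000  # samples per second

def fingerprint(signal, n_peaks=3):
    """Return the n strongest frequencies (Hz) in a signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)
    return sorted(freqs[np.argsort(spectrum)[-n_peaks:]])

# A made-up "stored pattern" for a scream: strong energy near these frequencies.
SCREAM_PATTERN = [1000.0, 2000.0, 3000.0]

# Synthesize a one-second test sound containing roughly those components.
t = np.arange(0, 1.0, 1 / SAMPLE_RATE)
sound = sum(np.sin(2 * np.pi * f * t) for f in [1000, 2000, 3000])

fp = fingerprint(sound)
distance = np.mean([abs(a - b) for a, b in zip(fp, SCREAM_PATTERN)])
print("fingerprint:", fp, "scream?", distance < 50)
```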
There's a big difference between hearing simple
sounds and understanding what a voice is saying to you, but even that
problem isn't beyond a machine's capability. Computers have been
successfully turning human speech into recognizable text for decades;
even my old PC, with simple, off-the-shelf,
voice recognition
software, can listen to my voice and faithfully print my words on the
screen. Interpreting the meaning of words is a very
different thing from turning sounds into words in the first place,
but it's a start.
Smell
You might think building a robotic nose is more of
a technical challenge, but it's just a matter of
building the right sensor. Smell is effectively a chemical
recognition system: molecules of vapor from a bacon butty,
a yawning iris, or the volatile liquids in perfume drift into our noses
and bind onto receptive cells, stimulating them electro-chemically. Our
brains do the rest. The way our brains are built explains some of
their highly unusual features, such as why smells are powerful memory
triggers. (The answer is simply because the bits of our brain that process smells are
physically very close to two other key bits of our brains,
namely the hippocampus, a kind of "crossroads" in our memory, and the amygdala, where
emotions are processed.)
So, in the words of the old joke, if robots have no nose, how do they smell?
We have plenty of machines that can recognize
chemicals, including
mass spectrometers and
gas chromatographs, but
they're elaborate, expensive, and unwieldy; not the sorts of things
you could easily stuff up a nose. Nevertheless, robot scientists have
successfully built simpler electrochemical detectors that resemble
(at least, conceptually) the way the human nose converts smells into
electrical signals. Once that job's done, and the sensor has produced
a pattern of digital data, all you're left with is a computational
problem; not, "what does this smell like?", but what does this
data pattern represent? It's exactly like seeing or hearing: once the signals
have left your eyes, ears, or nose, and reached your brain, the problem is
simply one of pattern recognition.
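In code, that pattern-recognition step can be as simple as finding the stored smell whose sensor "fingerprint" lies closest to the new reading. The sensor values below are invented purely for illustration.

```python
import numpy as np

# Invented "fingerprints": readings from four gas sensors for known smells.
KNOWN_SMELLS = {
    "coffee":  np.array([0.9, 0.2, 0.4, 0.1]),
    "bacon":   np.array([0.3, 0.8, 0.6, 0.2]),
    "perfume": np.array([0.1, 0.1, 0.2, 0.9]),
}

def identify(reading):
    """Match a new sensor reading to the closest stored pattern."""
    return min(KNOWN_SMELLS,
               key=lambda name: np.linalg.norm(reading - KNOWN_SMELLS[name]))

sample = np.array([0.35, 0.75, 0.55, 0.25])   # made-up reading from the "nose"
print("This smells like:", identify(sample))
```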
Other senses
Photo: Engineers can build amazingly realistic
prosthetic hands. If we could modify these things with touch sensors, maybe they could double up as working hands for robots? Photo by Sarah Fortney courtesy of
US Navy.
Although robots have had arms and primitive grabber claws
for over half a century, giving them anything like a working human hand
has proved far more of a challenge.
Imagine a robot that can play Beethoven sonatas like a concert pianist,
perform high-precision brain surgery, carve stone like a sculptor,
or a thousand other things we humans can do with our touch-sensitive hands.
As the New York Times reported in September 2014,
building a robot with human touch
has suddenly become one of the most interesting problems in robotics research.
Taste, too, boils down simply to using appropriate chemical sensors.
If you want to build a food-tasting robot, a pH meter would be a
good starting point, perhaps with something to measure viscosity (how
easily a fluid flows). Then again, if you've already given your robot
eyes and a nose, that would go a long way to giving it taste, because the look
and smell of food play a big part in that.
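A food-tasting robot's first pass could be little more than a couple of threshold rules on those measurements. The numbers here are invented, purely to show the idea.

```python
def describe_taste(ph, viscosity_mpa_s):
    """Crude rules of thumb (invented thresholds, purely illustrative)."""
    notes = []
    notes.append("sour" if ph < 4.5 else "not sour")
    notes.append("thick and syrupy" if viscosity_mpa_s > 100 else "thin and watery")
    return ", ".join(notes)

print(describe_taste(ph=3.2, viscosity_mpa_s=150))   # e.g. lemon syrup
print(describe_taste(ph=7.0, viscosity_mpa_s=1))     # e.g. water
```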
One of the misleading things about trying to develop a humanoid
robot is that it tricks us into replicating only the five basic human senses—and one
of the great things about robots is that they can use any kind of sensor or
detector we can lay our hands on. There's no need at all for robot vision
to be confined to the ordinary visible spectrum of light: robots
could just as easily see X rays or infrared (with heat detectors).
Robots could also navigate like homing pigeons by following Earth's
magnetic field or (better still) by using GPS to track their precise
position from one moment to the next. Why limit ourselves to human
limitations?
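For instance, turning raw magnetometer readings into a compass bearing is a one-line calculation (here assuming the sensor's x axis points north and its y axis points east; real devices also need calibration and tilt compensation).

```python
import math

def compass_heading(mag_x, mag_y):
    """Rough heading in degrees clockwise from magnetic north,
    from the horizontal components of a (hypothetical) magnetometer."""
    return math.degrees(math.atan2(mag_y, mag_x)) % 360

print(compass_heading(mag_x=0.0, mag_y=25.0))   # 90.0 -> roughly due east
```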
Cognition (thinking)
Thinking about thinking is a recipe for doing not
much at all—other than thinking; that's the occupational hazard of
philosophers. And if that sounds fatuous, consider all the books and
scientific articles that have been published on
artificial intelligence since British computer scientist Alan Turing developed
what is now called the Turing test (a way of establishing whether a machine is "intelligent") in 1950. Psychologists, philosophers, and computer
scientists have been wrestling with definitions of "intelligence" ever since.
But that hasn't necessarily got them any nearer to developing an intelligent machine.
"No cognition. Just sensing and action. This is
all I would build, and completely leave out... the intelligence
of artificial intelligence."
Rodney A. Brooks, Robot: The Future of Flesh and Machines, p.36.
As British robot engineer Kevin Warwick pointed out about 20 years
ago in his book March of the Machines, "intelligence" is an inherently human concept.
Just because there's a lofty view from where people happen to be
standing, it doesn't follow that there aren't better views from elsewhere.
Human intelligence tests measure your
ability to do well at human intelligence tests—and don't
necessarily translate into an ability to do useful things in the real
world. Developing computer-controlled machines that humans would
regard as intelligent is not really the goal of modern robotics
research. The real objective is to produce millions of machines that can work
effectively alongside billions of humans, either augmenting our abilities or
doing things we simply don't want to do; we don't automatically need
"intelligence" for that. According to pragmatic engineers like Warwick, robots should be assessed on their
own terms against the specific tasks they're designed for, not
according to some fuzzy, human concept of "intelligence"
designed to flatter human self-esteem. Is a robot intelligent?
Who cares, as long as it does the job we need it to do, perhaps better than a
person would?
Emotional intelligence
Photo: Robots are designed with friendly faces so
humans don't feel threatened when they work alongside them. This one is called Emo
and it lives at Think Tank, the science museum in Birmingham, England.
Its digital-camera eyes help it to learn and recognize human expressions, while the
rubber-tube lips allow it to smile and make expressions of its own.
Whether they're deemed intelligent or not,
computers and robots are quintessentially logical and rational where
humans are more emotional and inconsistent. Developing robots that
are emotional—particularly ones that can sense and respond to human
emotions—is arguably much more important than making intelligent
machines. Would you rather your coworkers were cold, logical,
hyperintelligent beings who could solve every problem and never make
a mistake? Or friendly, easy-going, pleasant to pass time with, and
fallibly human? Most people would probably choose the latter, simply
because it makes for more effective teamwork—and that's how
most of us generally get things done. So developing a likeable robot that has
the ability to listen, smile, tell jokes round the water cooler,
and sympathize when your life takes a turn for the worse
is arguably just as important as making one that's clever. Indeed, one
of the main reasons for developing humanoid robots is not to
replicate human emotions but to make machines that people don't feel
scared or threatened by—and building robots that can make eye
contact, chuckle, or smile is a very effective way to do that.
Emotion is often in the eye of the beholder—especially when it comes to humans and machines.
When people look at cars, they tend to see faces (two headlights for eyes, a radiator grille
for a mouth) or link particular emotions with certain colours of
paintwork (a red car is racy, a black one is dark and mysterious, a
silver one is elegant and professional). In much the same way, people
project feelings onto robots simply because of how they look or
move: the robot has no emotions; the emotions it conjures up are
entirely in your mind. One of the world's leading robot engineers,
MIT's Rodney Brooks, tells a story of how he was involved in the
development of a robotic baby toy so lifelike that it provoked
sincere feelings of attachment in the adults and children who looked after
it. Kismet, an "emotional robot" developed in the late 1990s by Cynthia Breazeal,
one of his students, listens, coos, and pays attention to humans in
a startlingly babylike way—to the extent that people grow very attached to
it, as a parent to a child. Again, the robot has nothing like human emotions; it simply
provokes an authentic emotional reaction in humans and we interpret
our own feelings as though the robot were emotional too. In
other words, we might redefine the problem of developing emotional robots as
making machines that humans really care about.
Action (doing)
How a robot moves and responds to the world is the
most important thing about it. Intelligent machines that sense and
think but don't move or respond hardly qualify as robots;
they're really just computers. Action is a much more complex problem
than it might seem, both in humans and machines. In humans, the sheer
number of muscles, tendons, bones, and nerves in our limbs makes
coordinated, accurate body control a logistical nightmare.
There's nothing easier than lifting your hand to scratch your nose—your brain makes
it seem so easy—but if we try to replicate this sort of behavior
in a machine, we instantly realize how difficult it is. That's one reason why,
until relatively recently, virtually all robots moved around on
wheels rather than fully articulated human legs
(wheels are generally faster and more reliable, but hopeless at managing rough terrain or stairs).
Photo: Robotic Hexapod uses six spinning legs to negotiate rough, rocky terrain that wheels
would struggle to cross. It can also use its legs to swim! Photo by Robbin Cresswell courtesy of
US Air Force.
Just because a robot has to move, it doesn't
follow that it has to move like a person. Factory robots are designed
around giant electric, hydraulic, or pneumatic arms fitted with various tools
geared to specific jobs, like painting, welding, or laser-cutting
fabric. No human can swivel their wrist through 360 degrees, but factory
robots can; there's simply no good reason to be bound by human
limitations. Indeed, there's no reason why robots have to act (move)
like humans at all. Virtually every other animal you can think of,
from salamanders and sharks to snakes and turkeys, has been
replicated in robot form: it often makes much more sense for robots
to scuttle round like animals than prance about like people.
By the same token, making
"emotional robots" (ones to which people feel emotions) doesn't
necessarily have to mean building humanoids. That explains the
instant success of Sony's robotic AIBO dogs, launched in 1999. They
were essentially robotic pets onto which people projected their need
for companionship.
Photo: Robots don't have to look or work like humans. This is
BigDog, the infamous robotic "pack-mule" designed for the US military by Boston Dynamics. Where most robots are electrically powered,
this one is driven by four hydraulic legs powered by a small internal combustion engine from a go-kart.
In theory, that gives it a big advantage over robots powered by batteries (it should be able to
go much further); in practice, its official range is just 32km (20 miles). Photo by Kyle J. O. Olson courtesy of
US Marine Corps.
Human perception and cognition are hard things for robots to emulate, partly
because it's easy to get bogged down in abstract and theoretical arguments
about what these terms actually mean. Action is a much simpler problem: movement is movement—we don't have to worry about defining it in the way we agonize over "intelligence," for example. Ironically, though we admire the remarkable grace of a ballet dancer, the leaps and bounds of a world-class athlete, or the painstaking care of a traditional craftsman, we take it for granted that robots will be able to zing about or make things for us with even greater precision. How do they manage it? Some use hydraulics. Most, however, rely on relatively simple, much more affordable electric stepper motors and servo
motors, which let a robot spin its wheels or swing its limbs with pinpoint control.
Unlike humans, who get tired and make mistakes, robots move in a reliably repeatable way; they get it right every time.
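The arithmetic behind that pinpoint control is straightforward: a stepper motor turns a fixed angle per pulse, so commanding an exact rotation just means sending the right number of pulses. The step angle and microstepping values below are typical but assumed.

```python
STEP_ANGLE = 1.8     # degrees per full step, common for many stepper motors
MICROSTEPS = 16      # driver microstepping setting (assumed)

def steps_for(angle_degrees):
    """How many microstep pulses to command for a desired rotation."""
    return round(angle_degrees / (STEP_ANGLE / MICROSTEPS))

def move_joint(angle_degrees):
    n = steps_for(angle_degrees)
    print(f"Rotate {angle_degrees} deg -> send {n} step pulses")
    # On real hardware, each pulse would go to the stepper driver here.

move_joint(90)    # 800 pulses with the numbers above
move_joint(-45)   # -400 pulses (turn the other way)
```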
What are robots actually like?
Real-world robots fall into two broad categories.
Most are task-specific robots, designed to do one job and repeat it over
and over again. Hardly any are general-purpose robots
capable of doing a wide variety of jobs (in the way that humans are
general-purpose flesh-and-blood machines). Indeed, those
multi-purpose robots are still pretty much confined to robotics
labs.
Robot arms
Photo: It might never have occurred to you that a robot built the
car you're driving today. This Jaguar assembly robot (a Kawasaki ZX165U) is a demonstration model
at Think Tank, the Birmingham science museum. It can lift loads of up to 300kg and reach
up to 3.5m (11.5ft)—quite a bit more than a human arm!
Riveting and welding, swinging and sparking—most of the world's
robots are high-powered arms, like the ones you see in car factories. Although they became popular in the 1970s, they were
invented in the 1950s and first widely deployed in the 1960s by companies such as General
Motors. The original robot arm, Unimate, made its debut on the Johnny
Carson show back in 1966. Modern robot arms have more degrees of freedom
(they can be turned or rotated in more ways) and can be controlled much more precisely.
Whether robot arms really qualify as robots is a
moot point. Many of them lack much in the way of perception or
cognition; they're simply machines that repeat preprogrammed actions.
Fast, strong, powerful, and dangerous, they're usually fenced off in safety cages
and seldom work anywhere near people (a recent article in the
New York Times
noted that 33 people have been killed by robots in the United States during
the last 30 years). A few years ago, Rodney Brooks reinvented the whole idea of the robot arm with an
affordable ($25,000), easy-to-use, user-friendly industrial robot called
Baxter, which evolved into a similar machine named Sawyer.
It can be "trained" (Brooks avoids the word "reprogrammed") simply by moving its limbs, and it has enough onboard
sensory perception and cognition to work safely alongside humans,
sharing (for example) exactly the same assembly line.
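One simple way to implement that kind of training by demonstration is to record the arm's joint angles as a person guides it, then replay them. This sketch is my own simplified illustration, not Baxter's actual software.

```python
import time

class TrainableArm:
    """Toy model of 'training by demonstration': record joint angles
    while a person moves the arm, then play them back."""

    def __init__(self):
        self.waypoints = []

    def record(self, joint_angles):
        """Called while the arm is limp and a human moves it by hand."""
        self.waypoints.append(list(joint_angles))

    def replay(self):
        """Drive the motors back through the recorded poses."""
        for pose in self.waypoints:
            print("move joints to", pose)
            time.sleep(0.1)   # real code would wait for the motion to finish

arm = TrainableArm()
arm.record([0, 45, 90])     # poses a person "taught" by guiding the arm
arm.record([10, 60, 80])
arm.record([20, 75, 70])
arm.replay()
```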
Photo: Robot arms are versatile, precise, and—unlike human factory workers—don't
need rest, sleep, or holiday. But "all work and no play..." So this one is learning to play drums
for a change, at Think Tank, the Birmingham science museum.
Remote-controlled (teleoperated) machines
Some of the machines we think of as robots are
nothing of the kind: they merely appear robotic (and intelligent)
because humans are controlling them remotely. Bomb
disposal robots work this way: they're simply robot trucks with
cameras and manipulator arms operated by joysticks. Until recently,
space-exploration robots were designed much the same way, though autonomous
rovers (with enough onboard cognition to control themselves) are now commonplace.
So while 1997's Mars Sojourner (from the Pathfinder Mission) was semi-autonomous and
largely remote-controlled from Earth, the much bigger and newer
Mars Spirit and Opportunity rovers (launched in 2003) are far more autonomous.
Photo: Bomb-disposal robots are almost always remote-controlled. This one, operated by Explosive Ordnance Disposal Mobile Unit (EODMU) 8, can pick up suspect devices with its jaw and carry them to safety.
Photo by Joe Ebalo courtesy of US Navy.
Semi-autonomous household robots
If you've got a robot in your home, most likely
it's a robot vacuum cleaner or lawn mower. Although these machines
give the impression that they're autonomous and semi-intelligent,
they're much simpler (and less robotic) than they appear. When you
switch on a Roomba, it doesn't have any idea about the room it's
cleaning—how big it is, how dirty it is, or the layout of the
furniture. And, unlike a human, it doesn't attempt to build itself a
mental model of the room as it's going along. It simply bounces off
things randomly and repeatedly, working on the (correct) assumption that if it
does this for long enough, the room will be fairly clean in the end.
There are a few extra little tweaks, including a spiraling,
on-the-spot cleaning mode that kicks in when a "dirt detect" sensor
finds concentrated debris, and the ability to follow edges. But essentially, a
Roomba cleans at random. Robot lawn mowers work in a somewhat similar
way (sometimes with a boundary wire to stop them straying too far).
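Those behaviors amount to a small set of modes that the robot switches between according to its sensors. Here's a toy sketch of such a mode-chooser, with random flags standing in for real sensor readings; it isn't iRobot's actual algorithm.

```python
import random

def choose_mode(bumper_hit, dirt_detected, near_wall):
    """Pick the next cleaning behavior from simple sensor flags."""
    if dirt_detected:
        return "spiral"        # spot-clean in a tightening spiral
    if near_wall:
        return "follow_edge"   # hug the wall to clean the edges
    if bumper_hit:
        return "turn_away"     # back off and head in a new direction
    return "drive_straight"

for _ in range(5):
    mode = choose_mode(bumper_hit=random.random() < 0.2,
                       dirt_detected=random.random() < 0.1,
                       near_wall=random.random() < 0.15)
    print("mode:", mode)
```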
Photo: NASA's FIDO was one of its first semi-autonomous robot rovers. Onboard cameras allowed space scientists to control it remotely from Earth. Photo courtesy of NASA JPL Planetary Robotics Laboratory and NASA on the Commons.
General-purpose robots
Although advanced robots like Baxter can be
trained to do many different things, they're still essentially
single-domain machines. Whether they're picking out badly
formed machine parts for quality control or shifting boxes from one
place to another, they're designed only to work on factory floors. We
still don't have a robot that can make the breakfast, take the kids to
school, drive itself to work somewhere else, come back home again, clean
the house, cook the dinner, and put itself on recharge—unless you
count your husband or wife.
Back in the 1990s, when Kevin Warwick wrote his bestselling book March
of the Machines, building intelligent, autonomous, general-purpose robots was
considered an overly ambitious research goal. Engineers like Warwick typified a
"hands-on" alternative approach to robotics, where grand plans were put aside and
robots simply evolved as their creators figured out better ways of building
robots with more advanced perception, cognition, and action. It's more like robot evolution,
working from the bottom up to develop increasingly advanced
creatures, than any sort of top-down approach that might be conceived
by a kind of robot-world equivalent of God.
Roll time forwards, however, and much has changed. Although engineers like
Kevin Warwick and Rodney Brooks are still champions of the pragmatic,
bottom-up, minimal-cognition approach, elsewhere, general-purpose autonomous
robots are making great strides forward—often literally, as well as metaphorically. The US
Defense Department's research wing, DARPA, has sponsored competitions
to develop humanoid robots that can cope
with a variety of tricky emergency situations, such as rescuing
people from natural disasters. (DARPA claims the intention is
humanitarian, but similar technology seems certain to be used in robotic
soldiers.) Thanks to video sites such as YouTube, robots like these, which would once
have been top-secret, have been "growing up in public"—with each
new incarnation of the stair-stomping, chair-balancing, car-driving
robots instantly going viral on social media.
Self-driving cars
Photo: A cutting-edge, self-driving Lincoln MKZ packed with sensors,
including roof-mounted lidar, GPS, and radar.
Photo by Jake McClung courtesy of US Marine Corps.
Self-driving cars are a different flavor of general-purpose, autonomous robot. But
they've yet to catch the public's imagination in quite the same way,
perhaps because they've been developed more quietly, even secretly,
by companies such as Google. Now you could argue that there's nothing
remotely general-purpose about driving a car: it involves a robot
operating successfully in a single domain (the highway) in just the
same way as a Baxter (on the factory floor) or a Roomba (cleaning your home).
But the sheer complexity of driving—even humans take years to
properly master it—makes it, arguably, as much of a general-purpose
challenge as the one the DARPA robots are facing. Think of all the different
things you have to learn as a driver: starting off, stopping at a
signal, turning a corner, overtaking, parallel parking, slowing down
when the car in front indicates, emergency stops... to say nothing of
driving at daytime or night, in all kinds of weather, on every kind
of country road and superhighway. Maybe it would be easier just to
stick a humanoid robot in the driving seat after all.
Our robot future
There's no unknowing the things we learn. Technologies cannot be uninvented. The
march of the robots is unstoppable—but quite where they're
marching to, no-one yet knows. Futurologists like
Raymond Kurzweil
believe humans and machines will merge after we reach a point called
the singularity, where vastly powerful machines become more
intelligent than people. Humans will download their minds to
computers and zoom into the future, not in the "bodiless exultation" of
cyberspace (as William Gibson once put it) but in a steel and plastic doppelganger: a machine-body
powered by the immortal essence of a human mind.
More pragmatic, less dramatic scientists such as Rodney Brooks see a quieter form of
evolution where the last few decades of robotic technology begin to
augment what millions of years of natural selection have already cobbled together.
Brooks argues that we've been on this path for years, with
advanced prosthetic limbs, heart pacemakers, cochlear implants for
deaf people, robot "exoskeletons" that paralyzed people can slip over their bodies to
help them walk again, and (before much longer) widely available artificial
retinas for the blind. There will be no revolutionary jump from
human to robot but a smarter, smoother transition from flesh machines to
hybrids that are part human and part robot. Will robots take over
from people? Not according to Brooks: "Because there won't be any us
(people) for them (pure robots) to take over from... We (the robot
people) will be a step ahead of them (the pure robots). We won't have
to worry about them taking over."
A brief history of robots
Photo: PUMA is one of the world's best-known robot arms,
developed from Vic Scheinman's Vicarm in 1978.
Photo courtesy of NASA Ames Research Center.
When robots look back on their lives, what milestones spring to their computerized minds?
Here are some of the key moments in the long and continuing history of robotkind!
1st century AD:
Ancient Greeks invent automata (self-controlled machines). Hero of Alexandria uses hydraulics,
pneumatics, and steam power to construct all kinds of automatic machines, from self-closing doors to a
primitive robotic cart.
1739: French inventor Jacques de Vaucanson builds an elaborate
mechanical duck with a
working digestion system that can eat and produce "feces."
1818: Mary Shelley's novel Frankenstein raises the terrifying prospect of scientists creating
monsters that run out of control—still a major concern when most people think about robots today.
1912: John Hammond, Jr. and Benjamin Miessner build an electric dog that senses and responds to light signals.
1920: Czech playwright Karel Čapek coins the word "robot" in his play R.U.R. (Rossum's Universal Robots).
1927: Fritz Lang's movie Metropolis shows robots in a bleakly dystopic, urban future.
1956: Inventor George Devol meets physicist
Joseph Engelberger and the two discuss working together to develop factory robots.
Their efforts ultimately lead to the formation of Unimation, a company that pioneers the manufacture of industrial robots by cooperating closely with companies such as General Motors (GM).
1962: GM installs its first industrial robot at a plant in Trenton, New Jersey.
1964: At Stanford Artificial Intelligence Laboratory (SAIL), PhD student
Rodney Schmidt (with help from artificial intelligence pioneer John
McCarthy) constructs a self-driving car based on a simple mechanical cart. Initially just remote controlled, it evolves into an
autonomous (but very primitive) self-driving car that can follow a painted white line.
1966: Factory robots capture the public imagination after a Unimate
appears on the Johnny Carson TV Show, demonstrating how to hit a
golf ball and pour a glass of beer.
1967: GM deploys 26 Unimate welding robots at its plant in Lordstown, Ohio, provoking industrial unrest among disgruntled factory workers.
1970: Ira Levin's bestselling humorous novel The Stepford Wives
imagines a world where independent women are quietly replaced by zombie-like
robots who happily do their husband's bidding.
1972: British military engineers develop the
Wheelbarrow, a
remote-controlled robot on tracks that can investigate booby-trapped
vehicles, buildings, and packages.
1973: Vic Scheinman starts Vicarm Inc. to manufacture industrial robot arms. In 1977, he sells the design to Unimation.
1978: Unimation develops Scheinman's robot into the PUMA (Programmable Universal Machine for Assembly). Unlike earlier robot arms, which are heavy hydraulic machines, it's compact, light, easy to program,
and powered by electric motors.
1997: 40 teams compete in the inaugural Robot World Cup (RoboCup): a soccer competition just for robots.
1998: Reading University robotics Professor Kevin Warwick becomes a cyborg
by having robotic circuitry implanted into his body.
1999: Sony introduces the AIBO robot dog, but discontinues production in 2006.
2002: iRobot launches the Roomba robot vacuum cleaner. (The key patents are filed
between 2000 and 2002.)
2004: iCub, a European-funded humanoid robot, the size of a small child, is released as an open-source project. Around 30 different iCubs are
built by academics and used for researching artificial intelligence
and robot emotions.
2004: DARPA launches the Grand Challenge—a competition to encourage engineers
to develop self-driving cars.
2005: Boston Dynamics creates BigDog, a computer-controlled robotic "pack mule"
designed to carry loads for soldiers. Later military robots include
Cheetah, PETMAN, and Atlas.
2012: Rethink Robotics, a company founded by Rodney Brooks, introduces the
Baxter factory robot.
2013: The SCHAFT S1 humanoid robot wins the trial stage of the DARPA Robotics Challenge to develop robots for emergency humanitarian work and disaster relief.
2015: Final of the DARPA Robotics Challenge.
2017: The European Parliament launches a thought-provoking draft report, urging governments to consider
wide-ranging issues like who should be responsible for robots, what rights they should have, and
how they will impact various aspects of "human" life we now take for granted.
2018: Robots help out at the Winter Olympics in South Korea.
Robotics: Modelling, Planning and Control by Bruno Siciliano, Lorenzo Sciavicco, and Luigi Villani. Springer, 2009. A much more theoretical introduction to robot control.
The Robotics Primer by Maja J. Matarić. MIT Press, 2007. An accessible easy-to-understand overview suitable for most readers.
Flesh and Machines: How Robots Will Change Us by Rodney A. Brooks. Vintage, 2003. A fascinating recent history of robotics, with an emphasis on the projects Brooks has been personally involved with (such as Cog, Kismet, Roomba, and Baxter).
March of the Machines by Kevin Warwick. Century, 1997. An old but still very interesting read, particularly for the way it describes how Warwick's robots have evolved from the bottom-up (with an emphasis on perception and action rather than cognition).
The Computer and the Mind by Philip Johnson-Laird. Fontana, 1993. How would you approach building a machine that could behave in human-like ways?
This wonderful book covers similar ground to my article but in much greater theoretical depth, with a strong emphasis on cognitive psychology.
Hobbyist/practical books
Robot Wars: Build Your Own Robot by James Cooper. Haynes, 2017. A spin-off from the popular series, geared mainly to remote-control robots rather than autonomous ones.
The Robot Builder's Bonanza by Gordon Mccomb and Myke Predko. McGraw Hill, 2006. A hands-on guide to robot hacking for hobbyists packed with ideas for robot projects.
123 Robotics Experiments for the Evil Genius by Michael Predko and Myke Predko. McGraw-Hill Professional, 2004. After a brief introduction to robotics, the Predkos get straight to work with toilet paper, glue, nuts, bolts, and anything else they can find.
For younger readers
Robot by Clive Gifford et al. DK, 2018. A lavishly illustrated, 160-page guide for ages 9–11 that features over 100 different robots.
Robots by Melissa Stewart. National Geographic, 2018. A colorful 48-page introduction for younger readers aged 6–9.
Ultimate Robot by Robert Malone. DK, 2004. Combines history, science, and technology in a visually attractive format that will appeal to younger teenagers, in particular.
Remaking the World for Robots by Stacey Higginbotham. IEEE Spectrum, July 24, 2019. Why we'll need to redesign our world for a future where robots are more common.
How to Beat the Robots by
Claire Cain Miller. The New York Times, March 7, 2017. How can people compete in a world where artificially intelligent robots seem better qualified for more and more jobs?
Robots and cars for the future: BBC News, 26 June 2009. Ian Hardy visits the famous MIT Media Lab, where tomorrow's robots are being developed.
Videos
The best way to learn about cutting-edge robots is to watch them in action. So here's a small collection of 10 short videos I've compiled from YouTube (and elsewhere) that illustrate the past, present, and future of robotics. As you watch these films, try to imagine the engineering challenges the robot designers have had to solve in each case:
A robot nose: Could a robot ever learn to smell? Apparently, yes!
Robots inspired by animals: Why should robots be modeled on humans? Here's a great summary of robotic creatures inspired by other marvels from the natural world.
Robot octopuses and boneless robots show how innovative materials could make robots that will go to places no ordinary, metal, "mechanical" robot ever could.