

Robots

If you had a big enough construction set with enough wheels, gears, and other bits and bobs, and a limitless supply of electronic components, could you bolt together a living, breathing, walking, talking robot as good as a human in every way?

That might sound like one question, but it's really several. First, there's the matter of whether it's technically possible to build a robot that compares with a human. But there's also a much bigger question of why you'd want to do that and whether it's even a useful thing to do. When humans can reproduce so easily, why do we want to create clunky mechanical replicas of ourselves? And if there really is a good reason for doing so, what's the best way to go about it? In this article, we'll be taking a detailed look at what robots are, how they're designed, and some of the things they can do for us.

Photo: Our friends electric. Will robots replace people in the future? Or will people and machines merge into flesh-machine hybrids that combine the best of both worlds? For all we know, Octavia (pictured here) is pondering these questions right now. She's an advanced social robot who can scuttle around on her wheels, pick up objects, and pull a variety of emotional faces. Her biggest challenge so far has been helping to put out fires. Photo by John F. Williams courtesy of US Navy and Wikimedia Commons.


Contents

  1. Imaginary friends
  2. How do you build a robot?
  3. Perception (sensing)
  4. Cognition (thinking)
  5. Action (doing)
  6. What are robots actually like?
  7. Our robot future
  8. A brief history of robots

Imaginary friends

Close your eyes and think "robot." What picture leaps to mind? Most likely a fictional creature like R2-D2 or C-3PO from Star Wars. Very likely a humanoid—a humanlike robot with arms, legs, and a head, probably painted metallic silver. Unless you happen to work in robotics, I doubt you pictured a mechanical snake or a clockwork cockroach, a bomb disposal robot, or a Roomba robot vacuum cleaner.

What you pictured, in other words, would have been based more on science fiction than fact, more on imagination than reality. Where the sci-fi robots we see in movies and TV shows tend to be humanoids, the humdrum robots working away in the world around us (things like robotic welder arms in car-assembly plants) are much more functional, much less entertaining. For some reason, sci-fi writers have an obsession with robots that are little more than flawed, tin-can, replacement humans. Maybe that makes for a better story, but it doesn't really reflect the current state of robot technology, with its emphasis on developing practical robots that can work alongside humans.

How do you build a robot?

Stressed-looking toy robot by Thom Quine.

Photo: Is this a robot? It certainly looks like one, but it has no senses of any kind, no electronic or mechanical onboard computer for thinking, and its limbs have no motors or other means to move themselves. With no perception, cognition, or action, it cannot be a robot—even if it looks like a robot. Photo by Thom Quine courtesy of Wikimedia Commons published under a Creative Commons licence.

If robots like C-3PO really did exist, how would anyone ever have developed them? What would it have taken to make a general-purpose robot similar to a human?

It's easy enough to write entertaining stories about intelligent robots taking control of the planet, but just try developing robots like that yourself and see how far you get. Where would you even start? Actually, you'd start where any robot engineer starts: by breaking that one big problem into smaller, more manageable chunks. Essentially, there are three problems we need to solve: how to make our robot 1) sense things (detect objects in the world), 2) think about those things (in a more or less "intelligent" way, which is a tricky problem we'll explore in a moment), and then 3) act on them (move or otherwise physically respond to the things it detects and thinks about).

In psychology (the science of human behavior) and in robotics, these things are called perception (sensing), cognition (thinking), and action (moving). Some robots have only one or two. For example, robot welding arms in factories are mostly about action (though they may have sensors), while robot vacuum cleaners are mostly about perception and action and have no cognition to speak of. As we'll see in a moment, there's been a long and lively debate over whether robots really need cognition, but most engineers would agree that a machine needs both perception and action to qualify as a robot.


Perception (sensing)

We experience the world through our five senses, but what about robots? How do they get a feel for the things around them?

Vision

Humans are seeing machines: estimates vary wildly, but there's general agreement that about 25–60 percent of our cerebral cortex is devoted to processing images from our eyes and building them into a 3D visual model of the world. Machine vision, in the narrow sense of giving a robot eyes, is really quite simple: all you need to do is glue a couple of digital cameras to its head. But machine perception—understanding what the camera sees (a pattern of orange and black), what it represents (a tiger), what that representation means (the possibility of being eaten), and how relevant it is to you from one minute to the next (not at all, because the tiger is locked inside a cage)—is almost infinitely harder.

Like other problems in robotics, tackling perception as a theoretical issue ("how does a robot see and perceive the world?") is much harder than approaching it as a practical problem. So if you were designing something like a Roomba vacuum cleaning robot, you could spend a good few years agonizing over how to give it eyes that "see" a room and navigate around the objects it contains. Or you could forget all about something so involved as seeing and simply use a giant, pressure-sensitive bumper. Let the robot scrabble around until the bumper hits something, then apply the brakes and tell it to creep away in a different direction.
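If that sounds too simple to work, here's a toy Python simulation of the idea (just my sketch of the logic, nothing to do with any real robot's code): a robot that knows nothing about the room blunders about at random, yet still ends up covering most of the floor:

```python
# Toy simulation of "bump and turn" navigation: the robot knows nothing
# about the room; it just drives until it hits a wall, then sets off
# again in a random new direction.
import math
import random

ROOM_W, ROOM_H = 5.0, 4.0        # room size in meters (made up)
x, y = 2.5, 2.0                  # start in the middle of the room
heading = 0.0                    # direction of travel, in radians
STEP = 0.05                      # distance covered per tick

for tick in range(2000):
    nx = x + STEP * math.cos(heading)
    ny = y + STEP * math.sin(heading)
    if 0 < nx < ROOM_W and 0 < ny < ROOM_H:
        x, y = nx, ny            # no bump: keep going straight
    else:
        # The bumper has hit something: choose a random new heading.
        heading = random.uniform(0, 2 * math.pi)

print(f"Finished near ({x:.2f}, {y:.2f}) after bouncing around at random")
```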

Spot quadruped robot navigates a ditch.

Photo: Look, no eyes! Spot, a quadruped robot built by Boston Dynamics, has a lidar (a kind of laser radar) where you'd expect its head to be (the small gray box at the front). Photo by Sgt. Eric Keenan courtesy of US Marine Corps.

Perception, in other words, doesn't have to mean vision. And that's a very important lesson for ambitious projects such as self-driving (robotic) cars. One way to build a self-driving car would be to create a super-lifelike humanoid robot and stick it in the driving seat of an ordinary car. It would drive in exactly the same way as you or I might do: by looking out through the windshield (with its digital camera eyes), interpreting what it sees, and controlling the car in response with its hands and feet. But you could also build a self-driving car an entirely different way, without anyone in the driving seat—and this is how most robotics engineers have approached the problem. Instead of eyes, you'd use GPS satellite navigation, lidar, sonar, radar, infrared detectors, accelerometers, and any number of other sensors to build up a very different kind of picture of where the car is, how it's proceeding in relation to the road and other cars, and what you need to do next to keep it safely in motion.

Drivers see with their eyes; self-driving cars see with their sensors. A driver's brain builds a moving 3D model of the road; self-driving cars have computers, surfing a flood of digital data quite unlike a human's mental model. That doesn't mean there's no similarity at all. It's quite easy to imagine a neural network (a computer simulation of interconnected brain cells that can be trained to recognize patterns) processing information from a self-driving car's sensors so the vehicle can recognize situations like driving behind a learner, spotting a looming emergency when children are playing ball by the side of the road, and other danger signs that experienced drivers recognize automatically.
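How does a car merge all those different sensor readings into a single best guess? Here's a bare-bones Python illustration of one classic idea, weighting each estimate by the inverse of its variance. (Real self-driving systems use far more sophisticated techniques, such as Kalman filters, of which this is the simplest possible cousin; the numbers are invented for the demo.)

```python
# Combining two noisy position estimates (say, GPS and wheel odometry)
# by weighting each one by the inverse of its variance: the less noisy
# a sensor, the more say it gets in the final answer.
def fuse(estimate_a, var_a, estimate_b, var_b):
    w_a = 1.0 / var_a                     # trust = 1 / noisiness
    w_b = 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)         # fused guess beats either alone
    return fused, fused_var

# GPS says 104.2 m along the road (but is sloppy); odometry says 103.1 m
position, variance = fuse(104.2, 4.0, 103.1, 0.5)
print(f"Best guess: {position:.2f} m (variance {variance:.2f})")
```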

Hearing

Just as seeing is a misnomer when it comes to machine vision, so the other human senses (hearing, smell, taste, and touch) don't have exact replicas in the world of robotics. Where a person hears with their ears, a robot uses a microphone to convert sounds into electrical signals that can be digitally processed. It's relatively straightforward to sample a sound signal, analyze the frequencies it contains (for example, using a mathematical descrambling trick called a Fourier transform), and compare the frequency "fingerprint" with a list of stored patterns. If the frequencies in your signal match the pattern of a human scream, it's a scream you're hearing—even if you're a robot and a scream means nothing to you.
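To make that concrete, here's a little Python sketch (using the numpy library) that builds a crude frequency fingerprint with a Fourier transform and matches it against stored patterns. The "scream" and "hum" signals are pure sine waves invented for the demo; a real system would use recorded sounds and far richer features:

```python
# Build a coarse frequency "fingerprint" of a sound with a Fourier
# transform, then compare it against stored patterns.
import numpy as np

SAMPLE_RATE = 8000                       # samples per second

def fingerprint(signal, n_bins=32):
    spectrum = np.abs(np.fft.rfft(signal))
    # Pool the spectrum into a few coarse bins and normalize, so the
    # fingerprint captures the overall shape rather than exact values.
    bins = np.array_split(spectrum, n_bins)
    fp = np.array([b.mean() for b in bins])
    return fp / (np.linalg.norm(fp) + 1e-12)

t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
scream = np.sin(2 * np.pi * 3000 * t)    # shrill and high-pitched
hum = np.sin(2 * np.pi * 100 * t)        # a low rumble

stored = {"scream": fingerprint(scream), "hum": fingerprint(hum)}
mystery = fingerprint(np.sin(2 * np.pi * 2950 * t))   # what is it?

best = max(stored, key=lambda name: stored[name] @ mystery)
print("Closest match:", best)            # -> scream
```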

There's a big difference between hearing simple sounds and understanding what a voice is saying to you, but even that problem isn't beyond a machine's capability. Computers have been successfully turning human speech into recognizable text for decades; even my old PC, with simple, off-the-shelf, voice recognition software, can listen to my voice and faithfully print my words on the screen. Interpreting the meaning of words is a very different thing from turning sounds into words in the first place, but it's a start.

Smell

You might think building a robotic nose is more of a technical challenge, but it's just a matter of building the right sensor. Smell is effectively a chemical recognition system: molecules of vapor from a bacon butty, a yawning iris, or the volatile liquids in perfume drift into our noses and bind onto receptive cells, stimulating them electro-chemically. Our brains do the rest. The way our brains are built explains some of their highly unusual features, such as why smells are powerful memory triggers. (The answer is simply that the bits of our brain that process smells are physically very close to two other key bits, namely the hippocampus, a kind of "crossroads" in our memory, and the amygdala, where emotions are processed.)

So, in the words of the old joke, if robots have no nose, how do they smell? We have plenty of machines that can recognize chemicals, including mass spectrometers and gas chromatographs, but they're elaborate, expensive, and unwieldy: not the sorts of things you could easily stuff up a nose. Nevertheless, robot scientists have successfully built simpler electrochemical detectors that resemble (at least conceptually) the way the human nose converts smells into electrical signals. Once that job's done, and the sensor has produced a pattern of digital data, all you're left with is a computational problem: not "what does this smell like?" but "what does this data pattern represent?" It's exactly like seeing or hearing: once the signals have left your eyes, ears, or nose, and reached your brain, the problem is simply one of pattern recognition.
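In code, that final pattern-matching step can be startlingly simple. Here's a deliberately simplified Python sketch: it assumes a hypothetical four-element electronic nose and compares its readings against stored smell patterns using cosine similarity (the sensor numbers are invented for illustration):

```python
# Match the raw response of a hypothetical 4-element electrochemical
# "nose" against stored smell patterns using cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# Each smell is just the pattern of readings it produces on the sensors.
KNOWN_SMELLS = {
    "coffee":  [0.9, 0.2, 0.4, 0.1],
    "bacon":   [0.3, 0.8, 0.7, 0.2],
    "perfume": [0.1, 0.3, 0.2, 0.9],
}

reading = [0.35, 0.75, 0.65, 0.25]      # fresh reading from the sensors
best = max(KNOWN_SMELLS, key=lambda s: cosine(KNOWN_SMELLS[s], reading))
print("Smells like", best)              # -> bacon
```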

Other senses

Brain-controlled prosthetic hand pictured with its realistic cosmetic skin covering alongside.

Photo: Engineers can build amazingly realistic prosthetic hands. If we could fit them with touch sensors, maybe they could double up as working hands for robots? Photo by Sarah Fortney courtesy of US Navy.

Although robots have had arms and primitive grabber claws for over half a century, giving them anything like a working human hand has proved far more of a challenge. Imagine a robot that can play Beethoven sonatas like a concert pianist, perform high-precision brain surgery, carve stone like a sculptor, or do a thousand other things we humans can do with our touch-sensitive hands. As the New York Times reported in September 2014, building a robot with a humanlike sense of touch has suddenly become one of the most interesting problems in robotics research.

Taste, too, boils down simply to using appropriate chemical sensors. If you want to build a food-tasting robot, a pH meter would be a good starting point, perhaps with something to measure viscosity (how easily a fluid flows). Then again, if you've already given your robot eyes and a nose, that would go a long way to giving it taste, because the look and smell of food play a big part in that.
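Just to show how crude a first attempt could be, here's a tiny Python sketch that puts a "taste" label on two instrument readings. The thresholds are invented for illustration, not drawn from any real food-science data:

```python
# A crude "taste" label from two instrument readings: pH for sourness
# and viscosity for mouthfeel. The thresholds are made up for the demo.
def describe_taste(ph, viscosity_pa_s):
    if ph < 4.5:
        taste = "sour"
    elif ph > 8.5:
        taste = "bitter and alkaline"
    else:
        taste = "mild"
    feel = "thick" if viscosity_pa_s > 0.1 else "thin"
    return f"{taste} and {feel}"

print(describe_taste(2.4, 0.002))   # lemon juice: "sour and thin"
```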

One of the misleading things about trying to develop a humanoid robot is that it tricks us into replicating only the five basic human senses—and one of the great things about robots is that they can use any kind of sensor or detector we can lay our hands on. There's no need at all for robot vision to be confined to the ordinary visible spectrum of light: robots could just as easily see X rays or infrared (with heat detectors). Robots could also navigate like homing pigeons by following Earth's magnetic field or (better still) by using GPS to track their precise position from one moment to the next. Why limit ourselves to human limitations?
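Navigating by magnetic field, for example, takes barely any code at all. Here's a minimal Python sketch, assuming an idealized two-axis magnetometer and ignoring real-world complications like tilt compensation and magnetic declination:

```python
# Turn a two-axis magnetometer reading into a rough compass heading.
import math

def heading_degrees(mag_x, mag_y):
    # atan2 gives the angle of the magnetic field vector relative to
    # the sensor's x axis; reduce it to the range 0-360 degrees.
    return math.degrees(math.atan2(mag_y, mag_x)) % 360

print(heading_degrees(0.2, 0.0))   # field along +x: 0 degrees
print(heading_degrees(0.0, 0.2))   # field along +y: 90 degrees
```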

Cognition (thinking)

Thinking about thinking is a recipe for doing not much at all—other than thinking; that's the occupational hazard of philosophers. And if that sounds fatuous, consider all the books and scientific articles that have been published on artificial intelligence since British computer scientist Alan Turing developed what is now called the Turing test (a way of establishing whether a machine is "intelligent") in 1950. Psychologists, philosophers, and computer scientists have been wrestling with definitions of "intelligence" ever since. But that hasn't necessarily got them any nearer to developing an intelligent machine.

"No cognition. Just sensing and action. This is all I would build, and completely leave out... the intelligence of artificial intelligence."

Rodney A. Brooks, Robot: The Future of Flesh and Machines, p.36.

As British robot engineer Kevin Warwick pointed out about 20 years ago in his book March of the Machines, "intelligence" is an inherently human concept. Just because there's a lofty view from where people happen to be standing, it doesn't follow that there aren't better views from elsewhere. Human intelligence tests measure your ability to do well at human intelligence tests—and don't necessarily translate into an ability to do useful things in the real world. Developing computer-controlled machines that humans would regard as intelligent is not really the goal of modern robotics research. The real objective is to produce millions of machines that can work effectively alongside billions of humans, either augmenting our abilities or doing things we simply don't want to do; we don't automatically need "intelligence" for that. According to pragmatic engineers like Warwick, robots should be assessed on their own terms against the specific tasks they're designed for, not according to some fuzzy, human concept of "intelligence" designed to flatter human self-esteem. Is a robot intelligent? Who cares, as long as it does the job we need it to do, perhaps even better than a person would?

Emotional intelligence

Emo emotional robot with a computer-controlled mouth and eyes

Photo: Robots are designed with friendly faces so humans don't feel threatened when they work alongside them. This one is called Emo and it lives at Think Tank, the science museum in Birmingham, England. Its digital-camera eyes help it to learn and recognize human expressions, while its rubber-tube lips allow it to smile and make expressions of its own.

Whether they're deemed intelligent or not, computers and robots are quintessentially logical and rational, where humans are more emotional and inconsistent. Developing robots that are emotional—particularly ones that can sense and respond to human emotions—is arguably much more important than making intelligent machines. Would you rather your coworkers were cold, logical, hyperintelligent beings who could solve every problem and never make a mistake? Or friendly, easy-going, pleasant to pass time with, and fallibly human? Most people would probably choose the latter, simply because it makes for more effective teamwork—and that's how most of us generally get things done. So developing a likeable robot that can listen, smile, tell jokes round the water cooler, and sympathize when your life takes a turn for the worse is arguably just as important as making one that's clever. Indeed, one of the main reasons for developing humanoid robots is not to replicate human emotions but to make machines that people don't feel scared or threatened by—and building robots that can make eye contact, chuckle, or smile is a very effective way to do that.

Emotion is often in the eye of the beholder—especially when it comes to humans and machines. When people look at cars, they tend to see faces (two headlights for eyes, a radiator grille for a mouth) or link particular emotions with certain colors of paintwork (a red car is racy, a black one is dark and mysterious, a silver one is elegant and professional). In much the same way, people project feelings onto robots simply because of how they look or move: the robot has no emotions; the emotions it conjures up are entirely in your mind. One of the world's leading robot engineers, MIT's Rodney Brooks, tells a story of how he was involved in the development of a robotic baby toy so lifelike that it provoked sincere feelings of attachment in the adults and children who looked after it. Kismet, an "emotional robot" developed in the late 1990s by Cynthia Breazeal, one of his students, listens, coos, and pays attention to humans in a startlingly babylike way—to the extent that people grow very attached to it, as a parent to a child. Again, the robot has nothing like human emotions; it simply provokes an authentic emotional reaction in humans, and we interpret our own feelings as though the robot were emotional too. In other words, we might redefine the problem of developing emotional robots as making machines that humans really care about.

Action (doing)

How a robot moves and responds to the world is the most important thing about it. Intelligent machines that sense and think but don't move or respond hardly qualify as robots; they're really just computers. Action is a much more complex problem than it might seem, both in humans and machines. In humans, the sheer number of muscles, tendons, bones, and nerves in our limbs makes coordinated, accurate body control a logistical nightmare. There's nothing easier than lifting your hand to scratch your nose—your brain makes it seem so easy—but if we try to replicate this sort of behavior in a machine, we instantly realize how difficult it is. That's one reason why, until relatively recently, virtually all robots moved around on wheels rather than fully articulated human legs (wheels are generally faster and more reliable, but hopeless at managing rough terrain or stairs).

Hexapod robot with six spinning legs moves across rough ground.

Photo: Robotic Hexapod uses six spinning legs to negotiate rough, rocky terrain that wheels would struggle to cross. It can also use its legs to swim! Photo by Robbin Cresswell courtesy of US Air Force.

Just because a robot has to move, it doesn't follow that it has to move like a person. Factory robots are designed around giant electric, hydraulic, or pneumatic arms fitted with various tools geared to specific jobs, like painting, welding, or laser-cutting fabric. No human can swivel their wrist through 360 degrees, but factory robots can; there's simply no good reason to be bound by human limitations. Indeed, there's no reason why robots have to act (move) like humans at all. Virtually every other animal you can think of, from salamanders and sharks to snakes and turkeys, has been replicated in robot form: it often makes much more sense for robots to scuttle round like animals than prance about like people. By the same token, making "emotional robots" (ones to which people feel emotions) doesn't necessarily have to mean building humanoids. That explains the instant success of Sony's robotic AIBO dogs, launched in 1999. They were essentially robotic pets onto which people projected their need for companionship.

Military Big Dog robot running through a field alongside a soldier.

Photo: Robots don't have to look or work like humans. This is BigDog, the infamous robotic "pack-mule" designed for the US military by Boston Dynamics. Where most robots are electrically powered, this one is driven by four hydraulic legs powered by a small internal combustion engine from a go-kart. In theory, that gives it a big advantage over robots powered by batteries (it should be able to go much further); in practice, its official range is just 32km (20 miles). Photo by Kyle J. O. Olson courtesy of US Marine Corps.

Human perception and cognition are hard things for robots to emulate, partly because it's easy to get bogged down in abstract and theoretical arguments about what these terms actually mean. Action is a much simpler problem: movement is movement—we don't have to worry about defining it the way we worry over "intelligence," for example. Ironically, though we admire the remarkable grace of a ballet dancer, the leaps and bounds of a world-class athlete, or the painstaking care of a traditional craftsman, we take it for granted that robots will be able to zing about or make things for us with even greater precision. How do they manage it? Some use hydraulics. Most, however, rely on relatively simple, much more affordable electric stepper motors and servo motors, which let a robot spin its wheels or swing its limbs with pinpoint control. Unlike humans, who get tired and make mistakes, robots' moves are reliably repeatable: they get it right every time.
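Here's roughly what that pinpoint control looks like in miniature: a Python sketch of the simplest way to drive a four-coil stepper motor, energizing one coil at a time (so-called wave drive). On real hardware each pattern would be written to four output pins; here, printing stands in for that:

```python
# Drive a four-coil stepper motor with the simplest "wave drive"
# sequence: energize one coil at a time, in order, to nudge the rotor
# around step by step. Printing stands in for writing to output pins.
import time

WAVE_DRIVE = [
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]

def rotate(steps, delay=0.01):
    """Advance the motor by a number of steps (negative for reverse)."""
    direction = 1 if steps >= 0 else -1
    for i in range(abs(steps)):
        pattern = WAVE_DRIVE[(i * direction) % len(WAVE_DRIVE)]
        print("coils:", pattern)      # on real hardware: set four pins
        time.sleep(delay)

rotate(8)    # on a 200-step-per-revolution motor, 8 steps = 14.4 degrees
```

Because each step turns the rotor by a fixed, known angle, the motor's position is always exactly repeatable—which is precisely why robots can hit the same spot every time.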

What are robots actually like?

Real-world robots fall into two broad categories. Most are task-specific robots, designed to do one job and repeat it over and over again. Hardly any are general-purpose robots capable of doing a wide variety of jobs (in the way that humans are general-purpose flesh-and-blood machines). Indeed, those multi-purpose robots are still pretty much confined to robotics labs.

Robot arms

Riveting and welding, swinging and sparking—most of the world's robots are high-powered arms, like the ones you see in car factories. Although they became popular in the 1970s, they were invented in the 1950s and first widely deployed in the 1960s by companies such as General Motors. The original robot arm, Unimate, made its debut on the Johnny Carson show back in 1966. Modern robot arms have more degrees of freedom (they can be turned or rotated in more ways) and can be controlled much more precisely.

Industrial robot arm welding a Jaguar car

Photo: It might never have occurred to you that a robot built the car you're driving today. This Jaguar assembly robot (a Kawasaki ZX165U) is a demonstration model at Think Tank, the Birmingham science museum. It can lift loads of up to 300kg and reach up to 3.5m (11.5ft)—quite a bit more than a human arm!

Whether robot arms really qualify as robots is a moot point. Many of them lack much in the way of perception or cognition; they're simply machines that repeat preprogrammed actions. Fast, strong, powerful, and dangerous, they're usually fenced off in safety cages and seldom work anywhere near people (a recent article in the New York Times noted that 33 people have been killed by robots in the United States during the last 30 years). A few years ago, Rodney Brooks reinvented the whole idea of the robot arm with an affordable ($25,000), easy-to-use, user-friendly industrial robot called Baxter, which evolved into a similar machine named Sawyer. It can be "trained" (Brooks avoids the word "reprogrammed") simply by moving its limbs, and it has enough onboard sensory perception and cognition to work safely alongside humans, sharing (for example) exactly the same assembly line.
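In miniature, that kind of training can be as simple as recording where a human moves the arm and playing the movement back. This Python sketch shows the idea; read_joint_angles and move_to are hypothetical stand-ins for whatever the real hardware interface would be (Baxter's actual software is, of course, far more sophisticated):

```python
# "Training" by demonstration, in miniature: sample the arm's joint
# angles while a human guides it, then replay the recorded path.
import time

def record(read_joint_angles, duration=5.0, rate_hz=10):
    waypoints = []
    for _ in range(int(duration * rate_hz)):
        waypoints.append(read_joint_angles())   # where is the arm now?
        time.sleep(1.0 / rate_hz)
    return waypoints

def replay(waypoints, move_to, rate_hz=10):
    for joints in waypoints:
        move_to(joints)                         # drive the arm back along
        time.sleep(1.0 / rate_hz)               # the demonstrated path

# Demo with a fake one-joint "arm" that was waved from 0 to 40 degrees:
angles = iter([0, 10, 20, 30, 40])
path = record(lambda: next(angles), duration=0.5)
replay(path, lambda j: print("move to", j, "degrees"))
```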

Robot playing drums

Photo: Robot arms are versatile, precise, and—unlike human factory workers—don't need rest, sleep, or holiday. But "all work and no play..." So this one is learning to play drums for a change, at Think Tank, the Birmingham science museum.

Remote-controlled (teleoperated) machines

Some of the machines we think of as robots are nothing of the kind: they merely appear robotic (and intelligent) because humans are controlling them remotely. Bomb disposal robots work this way: they're simply robot trucks with cameras and manipulator arms operated by joysticks. Until recently, space-exploration robots were designed much the same way, though autonomous rovers (with enough onboard cognition to control themselves) are now commonplace. So while 1997's Mars Sojourner (from the Pathfinder Mission) was semi-autonomous and largely remote-controlled from Earth, the much bigger and newer Mars Spirit and Opportunity rovers (launched in 2003) are far more autonomous.
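The software heart of a teleoperated robot can be remarkably small, because the human supplies all the cognition. Here's a Python sketch of one common way ("arcade drive" mixing) to turn two joystick axes into left and right motor speeds; the details vary from robot to robot, so treat this as an illustration rather than any particular machine's code:

```python
# Map two joystick axes to left and right motor speeds ("arcade drive").
# All the robot does is obey; the human holding the joystick does the
# sensing and thinking.
def arcade_drive(forward, turn):
    """forward and turn each run from -1.0 to +1.0."""
    left = forward + turn
    right = forward - turn
    # Scale back if either motor would be asked for more than full power.
    scale = max(1.0, abs(left), abs(right))
    return left / scale, right / scale

print(arcade_drive(0.8, 0.0))   # straight ahead: (0.8, 0.8)
print(arcade_drive(0.5, 0.5))   # swing to the right: (1.0, 0.0)
```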

Orange bomb disposal robot EODMU8 carrying a suspect bomb to safety

Photo: Bomb-disposal robots are almost always remote-controlled. This one, operated by Explosive Ordnance Disposal Mobile Unit (EODMU) 8, can pick up suspect devices with its jaw and carry them to safety. Photo by Joe Ebalo courtesy of US Navy.

Semi-autonomous household robots

If you've got a robot in your home, most likely it's a robot vacuum cleaner or lawn mower. Although these machines give the impression that they're autonomous and semi-intelligent, they're much simpler (and less robotic) than they appear. When you switch on a Roomba, it doesn't have any idea about the room it's cleaning—how big it is, how dirty it is, or the layout of the furniture. And, unlike a human, it doesn't attempt to build itself a mental model of the room as it goes along. It simply bounces off things randomly and repeatedly, working on the (correct) assumption that if it does this for long enough, the room will be fairly clean in the end. There are a few extra little tweaks, including a spiraling, on-the-spot cleaning mode that kicks in when a "dirt detect" sensor finds concentrated debris, and the ability to follow edges. But essentially, a Roomba cleans at random. Robot lawn mowers work in a somewhat similar way (usually with a buried boundary wire to stop them straying too far).
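Here's a toy Python version of that cleaning logic, with stand-in sensor functions instead of a real bumper and dirt detector; it's a guess at the general idea, not iRobot's actual algorithm:

```python
# A toy version of random-bounce cleaning. Yields one driving action per
# tick: go forward, turn to a random heading after a bump, or spiral on
# the spot when the "dirt detect" sensor fires.
import random

def clean(bumper_hit, dirt_detected, ticks=200):
    mode, spiral_left = "straight", 0
    for _ in range(ticks):
        if dirt_detected():
            mode, spiral_left = "spiral", 30   # spot-clean for 30 ticks
        if mode == "spiral":
            yield ("arc", 0.3)                 # tight curve over the dirt
            spiral_left -= 1
            if spiral_left == 0:
                mode = "straight"
        elif bumper_hit():
            yield ("turn", random.uniform(90, 270))
        else:
            yield ("forward", 1)

# Fake sensors that fire at random, just to exercise the state machine:
for action in clean(lambda: random.random() < 0.1,
                    lambda: random.random() < 0.02, ticks=15):
    print(action)
```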

NASA FIDO robot undergoing field tests to simulate driving on Mars.

Photo: NASA's FIDO was one of its first semi-autonomous robot rovers. Onboard cameras allowed space scientists to control it remotely from Earth. Photo courtesy of NASA JPL Planetary Robotics Laboratory and NASA on the Commons.

General-purpose robots

Although advanced robots like Baxter can be trained to do many different things, they're still essentially single-domain machines. Whether they're picking out badly formed machine parts for quality control or shifting boxes from one place to another, they're designed only to work on factory floors. We still don't have a robot that can make the breakfast, take the kids to school, drive itself to work somewhere else, come back home again, clean the house, cook the dinner, and put itself on recharge—unless you count your husband or wife.

Back in the 1990s, when Kevin Warwick wrote his bestselling book March of the Machines, building intelligent, autonomous, general-purpose robots was considered an overly ambitious research goal. Engineers like Warwick typified a "hands-on" alternative approach to robotics, where grand plans were put aside and robots simply evolved as their creators figured out better ways of building robots with more advanced perception, cognition, and action. It's more like robot evolution, working from the bottom up to develop increasingly advanced creatures, than any sort of top-down approach that might be conceived by a kind of robot-world equivalent of God.

Roll time forwards, however, and much has changed. Although engineers like Kevin Warwick and Rodney Brooks are still champions of the pragmatic, bottom-up, minimal-cognition approach, elsewhere, general-purpose autonomous robots are making great strides forward—often literally, as well as metaphorically. The US Defense Department's research wing, DARPA, has sponsored competitions to develop humanoid robots that can cope with a variety of tricky emergency situations, such as rescuing people from natural disasters. (DARPA claims the intention is humanitarian, but similar technology seems certain to be used in robotic soldiers.) Thanks to video sites such as YouTube, robots like these, which would once have been top-secret, have been "growing up in public"—with each new incarnation of the stair-stomping, chair-balancing, car-driving robots instantly going viral on social media.

Self-driving cars

A black self-driving Lincoln MKZ car with onboard lidar and other scanners.

Photo: A cutting-edge, self-driving Lincoln MKZ packed with sensors, including roof-mounted lidar, GPS, and radar. Photo by Jake McClung courtesy of US Marine Corps and DVIDS.

Self-driving cars are a different flavor of general-purpose, autonomous robot. But they've yet to catch the public's imagination in quite the same way, perhaps because they've been developed more quietly, even secretly, by companies such as Google. Now you could argue that there's nothing remotely general-purpose about driving a car: it involves a robot operating successfully in a single domain (the highway) in just the same way as a Baxter (on the factory floor) or a Roomba (cleaning your home). But the sheer complexity of driving—even humans take years to master it properly—makes it, arguably, as much of a general-purpose challenge as the one the DARPA robots are facing. Think of all the different things you have to learn as a driver: starting off, stopping at a signal, turning a corner, overtaking, parallel parking, slowing down when the car in front indicates, emergency stops... to say nothing of driving by day or night, in all kinds of weather, on every kind of country road and superhighway. Maybe it would be easier just to stick a humanoid robot in the driving seat after all.

Our robot future

There's no unknowing the things we learn. Technologies cannot be uninvented. The march of the robots is unstoppable—but quite where they're marching to, no-one yet knows. Futurologists like Raymond Kurzweil believe humans and machines will merge after we reach a point called the singularity, where vastly powerful machines become more intelligent than people. Humans will download their minds to computers and zoom into the future, not in the "bodiless exultation" of cyberspace (as William Gibson once put it) but in a steel and plastic doppelganger: a machine-body powered by the immortal essence of a human mind.

More pragmatic, less dramatic scientists such as Rodney Brooks see a quieter form of evolution where the last few decades of robotic technology begin to augment what millions of years of natural selection have already cobbled together. Brooks argues that we've been on this path for years, with advanced prosthetic limbs, heart pacemakers, cochlear implants for deaf people, robot "exoskeletons" that paralyzed people can slip over their bodies to help them walk again, and (before much longer) widely available artificial retinas for the blind. There will be no revolutionary jump from human to robot but a smarter, smoother transition from flesh machines to hybrids that are part human and part robot. Will robots take over from people? Not according to Brooks: "Because there won't be any us (people) for them (pure robots) to take over from... We (the robot people) will be a step ahead of them (the pure robots). We won't have to worry about them taking over."

A brief history of robots

Grey NASA Unimate/PUMA robot arm

Photo: PUMA is one of the world's best-known robot arms, developed from Vic Scheinman's Vicarm in 1978. Photo courtesy of NASA Ames Research Center.

When robots look back on their lives, what milestones spring to their computerized minds? Here are some of the key moments in the long and continuing history of robotkind!


