Have you ever heard of a 1940s computer
called the ENIAC? It was about the same length and weight as three to four double-decker buses and
contained 18,000 buzzing electronic switches known as vacuum tubes.
Despite its gargantuan size, it was thousands
of times less powerful than a modern laptop—a machine about 100 times smaller.
If the history of computing sounds like a magic trick—squeezing more and more power
into less and less space—it is! What made it
possible was the invention of the integrated
circuit (IC) in 1958. It's a neat way of cramming hundreds, thousands, millions, or
even billions of electronic components
onto tiny chips of silicon no
bigger than a fingernail. Let's take a closer look at ICs and how they're made!
Photo: An integrated circuit from the outside. This one comes in a convenient form called a dual-inline package (DIP), which consists of a black plastic or
ceramic outer case with metal pins down each side for plugging into a bigger electronic circuit board (the brown thing you can see in the background). The actual circuit that does the work is a tiny little chip embedded inside the DIP; you can see how it's connected to the DIP's outer pins in the next photo.
Photo: An integrated circuit from the inside. If you could lift the cover off a typical microchip like the one in the top photo (and you can't very easily—believe me, I've tried!), this is what you'd find inside. The integrated circuit is the tiny square in the center. Connections run out from it to the terminals (metal pins or legs) around the edge. When you hook up something to one of these terminals, you're actually connecting into the circuit itself. You can just about see the pattern of electronic components on the surface of the chip itself. Photo by courtesy of NASA Glenn Research Center (NASA-GRC).
Open up a television or a radio and you'll see it's built around a
printed circuit board (PCB): a bit like an electric street-map with
components (such as resistors and capacitors)
in place of
the buildings and printed copper connections
linking them together
like miniature metal streets. Circuit boards are fine in small
appliances like this, but if you try to use the same technique to
build a complex electronic machine, such as a computer, you quickly
hit a snag. Even the simplest computer needs eight electronic
switches to store a single byte (character) of information. So if you
want to build a computer with just enough memory to store this
paragraph, you're looking at about 750 characters times 8 or about
6000 switches—for a single paragraph! If
you plump for switches like they had in the ENIAC—vacuum tubes about the size of
an adult thumb—you soon end up with a whopping great, power-hungry machine
that needs its own mini power station to keep it running.
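To put that arithmetic in concrete form, here's a minimal Python sketch; the eight-switches-per-character figure comes straight from the paragraph above, while the switches_needed helper and the 750-character stand-in text are just illustrative assumptions:

```python
# Rough estimate of the one-bit switches needed to store a piece of text,
# assuming one 8-bit byte (eight switches) per character, as described above.

def switches_needed(text: str, bits_per_char: int = 8) -> int:
    """Number of one-bit electronic switches needed to store the text."""
    return len(text) * bits_per_char

paragraph = "x" * 750              # stand-in for a 750-character paragraph
print(switches_needed(paragraph))  # -> 6000 switches, matching the figure above
```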
When three American physicists invented transistors in 1947, things
improved somewhat. Transistors were a fraction of the size of vacuum tubes and relays
(the electromagnetic switches that vacuum tubes had started to replace in
the mid-1940s), used much less power, and were far more reliable. But
there was still the problem of linking all those transistors
together in complex circuits. Even after transistors were invented,
computers were still a tangled mass of wires.
Photo: Integrated circuits fit into printed circuit boards (PCBs) like the green one you can see here. Notice the thin tracks linking the "legs" (terminals) of two different ICs together. Other tracks link the ICs to conventional electronic components such as resistors and capacitors. You can think of the tracks as "streets" making paths between "buildings" where useful things are done (the components themselves). There's also a miniaturized version of a circuit board inside an integrated circuit: the tracks are created in microscopic form on the surface of a silicon wafer.
Integrated circuits changed all that. The basic idea was to take a
complete circuit, with all its many components and the connections between
them, and recreate the whole thing in microscopically tiny form on the surface of a
piece of silicon. It was an amazingly clever idea and it's made
possible all kinds of "microelectronic" gadgets we now take for
granted, from digital watches and
pocket calculators to Moon-landing
rockets and missiles with built-in satellite navigation.
Integrated circuits revolutionized electronics and computing during the 1960s and 1970s. At first,
engineers could fit only dozens of components on a chip, in what was called Small-Scale Integration (SSI).
Medium-Scale Integration (MSI) soon followed, with hundreds of components in an area the same size.
Predictably, around 1970, Large-Scale Integration (LSI) brought thousands of components, Very-Large-Scale Integration (VLSI)
gave us tens of thousands, and Ultra-Large-Scale Integration (ULSI) millions—all on chips no bigger than they'd
been before. In 1965, Gordon Moore, then at Fairchild Semiconductor (and later a cofounder of leading chip maker Intel), noticed that the number of components
on a chip was doubling roughly every one to two years. Moore's Law, as this observation is known, has continued to hold
ever since. Interviewed by The New York Times 50 years later, in 2015, Moore revealed his astonishment that the law has continued to hold: "The original prediction was to look at 10 years, which I thought was a stretch. This was going from about 60 elements on an integrated circuit to 60,000—a thousandfold extrapolation over 10 years. I thought that was pretty wild. The fact that something similar is going on for 50 years is truly amazing."
Chart: Moore's Law: The number of transistors packed into microchips has roughly doubled every year or two for the last five decades—in other words, it's grown exponentially. If you plot the number of transistors (y-axis) against the year of launch (x-axis) for some common microchips from the last few decades (yellow stars), you'll get an exponential curve; plotting the logarithm instead, you'll get this straight line. Please note that the vertical (y) axis of this chart is logarithmic
and (due to the OpenOffice graphing software I used) the horizontal (x) axis is only vaguely linear.
Source: Plotted using data from Transistor Count, Wikipedia, checked against other sources.
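To see how Moore's thousandfold figure falls out of simple doubling, here's a quick back-of-envelope sketch in Python; the projected_count helper and its fixed doubling periods are illustrative assumptions, not a real model of chip manufacturing:

```python
# Back-of-envelope Moore's Law projection: the component count doubles
# once every doubling_years. Doubling yearly for 10 years multiplies the
# count by 2**10 = 1024, i.e. roughly a thousandfold.

def projected_count(start: float, years: float, doubling_years: float) -> float:
    """Components per chip after some years, doubling every doubling_years."""
    return start * 2 ** (years / doubling_years)

print(round(projected_count(60, 10, doubling_years=1)))  # 61440: Moore's "60,000"
print(round(projected_count(60, 10, doubling_years=2)))  # 1920: the slower two-year pace
```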
How are integrated circuits made?
Photo: Integrated circuits are manufactured in scrupulously clean conditions;
workers have to wear "bunny suits" like this to stop them contaminating the chips they're making.
This is Intel's wafer fabrication plant in Chandler, Arizona, USA.
Photo courtesy of Carol M. Highsmith Archive,
Library of Congress, Prints and Photographs Division.
How do we make something like a memory or processor chip for a computer? It all starts with a raw chemical element such as silicon, which is chemically treated or doped to make it have different electrical properties...
If you've read our articles on diodes and
transistors, you'll be
familiar with the idea of semiconductors.
Traditionally, people thought of materials fitting into two neat categories: those
that allow electricity to flow through
them quite readily (conductors) and those that don't (insulators).
Metals make up most of the conductors, while nonmetals such as
plastics, wood, and glass
are the insulators. In fact, things are far more complex than this—especially when it comes to certain
elements in the middle of the periodic table (in group 14), notably silicon and
germanium. Normally insulators, these elements can be
made to behave more like conductors if we add small quantities of
impurities to them in a process known as doping.
If you add antimony to silicon, you give it slightly more electrons than it
would normally have—and the power to conduct electricity. Silicon
"doped" that way is called n-type. Add boron instead of antimony
and you remove some of silicon's electrons, leaving behind "holes"
that behave like positively charged electrons, carrying electric
current in the opposite direction. That kind of silicon is called p-type.
Putting areas of n-type and p-type silicon side by side creates
junctions where electrons behave in very interesting ways—and that's
how we create electronic, semiconductor-based components like diodes,
transistors, and memories.
Inside a chip plant
Photo: A silicon wafer. Photo by courtesy of NASA Glenn Research Center (NASA-GRC).
The process of making an integrated circuit starts off with a big
single crystal of silicon, shaped like a long solid pipe, which is "salami sliced" into thin discs
(about the dimensions of a compact disc) called wafers.
The wafers are marked out into many identical square or rectangular areas, each
of which will make up a single silicon chip (sometimes called a
microchip). Thousands, millions, or billions of components are then
created on each chip by doping different areas of the surface to turn them into
n-type or p-type silicon. Doping is done by a variety of
processes. In one of them, known as ion implantation,
ions of the doping material are fired at the silicon wafer like bullets from a
gun. Another process, called vapor deposition,
involves introducing the doping material as a gas and letting it condense so
the impurity atoms create a thin film on the surface of the silicon
wafer. Molecular beam epitaxy
is a much more precise form of deposition.
Of course, making integrated circuits that pack hundreds, thousands, millions,
or billions of components onto a fingernail-sized chip of silicon is all
a bit more complex and involved than it sounds. Imagine the havoc
even a speck of dirt could cause when you're working at the
microscopic (or sometimes even the nanoscopic)
scale. That's why semiconductors are made in spotless laboratory environments called
clean rooms, where the air is meticulously filtered and
workers have to pass in and out through airlocks wearing all kinds of protective clothing.
How you make a microchip: a quick summary
Although making a chip is very intricate and complex, there are really only six separate steps (some of them are
repeated more than once). Greatly simplified, here's how the process works:
Making wafers: We grow pure silicon crystals into long cylinders and slice them (like salami) into thin wafers, each of which will ultimately be cut up into many chips.
Masking: We heat the wafers to coat them in silicon dioxide and use ultraviolet light to harden a protective layer called photoresist on top.
Etching: We use a chemical to remove some of the photoresist, making a kind of template pattern showing where we want areas of n-type and p-type silicon.
Doping: We heat the etched wafers with gases containing impurities to make the areas of n-type and p-type silicon. More masking and etching may follow.
Testing: Long metal connection leads run from a computer-controlled testing machine to the terminals on each chip. Any chips that don't work are marked and rejected.
Packaging: All the chips that work OK are cut out of the wafer and packaged into protective lumps of plastic, ready for use in computers and other electronic equipment.
Who invented the integrated circuit?
You've probably read in books that ICs were developed jointly by
Jack Kilby (1923–2005) and
Robert Noyce (1927–1990), as though these two men
happily collaborated on their brilliant invention! In fact, Kilby and Noyce
came up with the idea independently, at more or less exactly the same
time, prompting a furious battle for the rights to the invention that
was anything but happy.
How could two people invent the same thing at exactly the same time? Easy:
integrated circuits were an idea waiting to happen. By the mid-1950s,
the world (and the military, in particular) had discovered the
amazing potential of electronic computers and it was blindingly
apparent to visionaries like Kilby and Noyce that there needed to be
a better way of building and connecting transistors in large
quantities. Kilby was working at Texas Instruments when he came upon
the idea he called the monolithic principle:
building all the different parts of an electronic circuit on a single chip of semiconductor.
On September 12, 1958, he hand-built the world's first crude integrated circuit
using a chip of germanium (a semiconducting element similar to
silicon) and Texas Instruments applied for a patent on the
idea the following year.
Meanwhile, at another company called Fairchild Semiconductor (formed by
a small group of associates who had originally worked for the transistor
pioneer William Shockley) the equally brilliant Robert Noyce was experimenting with miniature
circuits of his own. In 1959, he used a series of photographic
and chemical techniques known as the planar process
(which had just been developed by a colleague, Jean Hoerni)
to produce the first practical integrated circuit, a method that Fairchild then tried to patent.
Artwork: Snap! Two great electrical engineers, Jack Kilby and Robert Noyce, came up with the same idea at almost exactly the same time in 1959. Although Kilby filed his patent first, Noyce's patent was granted earlier. Here are drawings from their original patent applications. You can see that we have essentially the same idea in both, with electronic components formed from junctions between layers of p-type (blue) and n-type (red) semiconductors. Connections to the p-type and n-type regions are shown in orange and yellow and the base layers (substrates) are shown in green. Artworks courtesy of US Patent and Trademark Office with our own added coloring to improve clarity and highlight the similarities. You can find links to the patents themselves in the references down below.
There was considerable overlap between the two men's work and Texas
Instruments and Fairchild battled in the courts for much of the 1960s over who
had really developed the integrated circuit. Finally, in 1969, the
companies agreed to share the idea.
Kilby and Noyce are now rightly regarded as joint-inventors of
arguably the most important and far-reaching technology developed in the 20th
century. Both men were inducted into the National Inventors Hall of
Fame (Kilby in 1982, Noyce the following year) and Kilby's
breakthrough was also recognized with the award of a half-share in the
Nobel Prize in
Physics in 2000 (as Kilby very generously noted in his acceptance speech,
Noyce would surely have shared in the prize too had he not died of a heart attack a decade earlier).
While Kilby is remembered as a brilliant scientist, Noyce's legacy has an
added dimension. In 1968, he co-founded Intel
with Gordon Moore (1929–2023); the company went on to develop the first commercial microprocessor,
the Intel 4004 (a single-chip computer), in 1971. With IBM, Microsoft, Apple, and other
pioneering companies, Intel is credited with helping to bring
affordable personal computers to our homes and workplaces. Thanks to Noyce and
Kilby, and brilliant engineers who subsequently built on their work, there are now something like two billion computers in use
throughout the world, many of them incorporated into cellphones,
portable satellite navigation devices, and other electronic gadgets.
Want to learn more about the pioneers? Look at these pages about
Jack Kilby, originally published on the Texas Instruments website,
or explore the Intel Museum to find out about Robert Noyce, Gordon Moore, and their colleagues. Both sites have a superb collection of
photos of early integrated circuits.
Is Moore's Law Less Important to the Tech Industry? by Quentin Hardy. The New York Times. July 25, 2014. Changing work habits and the advent of cloud computing are changing people's expectations of their computers, which means Moore's Law is arguably no longer as important as it was.
25 Microchips That Shook the World by Brian Santo. IEEE Spectrum, May 1, 2009. If you think a chip is just a chip, think again. From timer circuits to flash memory and speech synthesizers to microprocessors, this article lists two dozen classic chips that radically changed the history of computing.
From Sand to Silicon: Intel shows you the process of making a microchip, starting in the desert (with the sand that gives us silicon) and ending with a finished chip. Quite a neat video, but some commentary or explanation wouldn't have gone amiss: this video only really makes sense if you already know about all the processes that are being shown to you.
One of the best ways of learning about inventions is to read how the inventors themselves saw and presented their own ideas; patents
offer a great way to do that. For people who want a more detailed insight, here are a couple of key patents by Kilby and Noyce that are worth looking at:
US Patent 3,115,581: Miniature semiconductor integrated circuit by Jack S. Kilby, Texas Instruments, filed May 6, 1959 and issued December 24, 1963. Describes the basic idea of making integrated circuits "using only one material for all circuit elements and a limited number of compatible process steps for the production thereof."
US Patent 2,981,877: Semiconductor device and lead structure by Robert N. Noyce, Fairchild Semiconductor, filed July 30, 1959 and issued April 25, 1961. Although Noyce applied for this invention over two months after Kilby, the Noyce patent was granted over two years earlier, which helped to fuel the bitter battle between Texas Instruments and Fairchild over who exactly had invented the integrated circuit.