
The Microchip That Changed the World Turns 50

In 1971, a small start-up launched the Intel 4004, the first general-purpose microprocessor. They had no idea it would be so big.

By Wilson da Silva
Federico Faggin, Ted Hoff and Stanley Mazor & the Intel 4004 [National Inventors Hall of Fame]

THE WORLD CHANGED forever on November 15, 1971. And hardly anyone noticed.

China had just been admitted to the United Nations, Apollo 15 astronauts had driven the first lunar rover on the Moon, Amtrak began intercity passenger services, Pink Floyd dropped their sixth studio album, Meddle, and Stanley Kubrick’s dystopian film, A Clockwork Orange, was released.

Yet on that same morning 50 years ago, a small start-up company in California known as Intel, barely three years old, put out a press release that signaled the dawn of the Digital Age. “Announcing a new era in integrated electronics” it said breathlessly, as press releases often do. And for once, it was an understatement.

The product launched was the Intel 4004, the first general-purpose ‘microprocessor’ — and the world’s first modern computer microchip. It was an innocuous sliver of silicon in a dark metal casing with fat metal electrodes for legs, looking for all the world like a headless electronic cockroach. With 2,300 transistors, it could process what must have seemed like an astounding 60,000 instructions a second.

In today’s tech-talk, it was a 4-bit microprocessor with a clock speed of 740,000 cycles per second, or 0.74 megahertz (MHz), capable of executing up to 92,000 instructions per second. Intel had done what no one had before: invented a design that allowed 2,300 transistors to fit onto a single chip, with five times the speed and twice the density of the metal-gate technology of the day.

A single Intel 4004 chip cost US$200, or about US$1,355 in today’s money. By comparison, an AMD Ryzen Threadripper 3990X chip made in 2020 for desktops and servers packs 3.8 billion transistors per chip, has a clock speed of 4.35 GHz and runs at 2.36 trillion instructions per second. It costs just US$999.

In the span of two generations, the number of transistors that fit on a microchip has soared 1.65 million times, and the rate at which a chip can execute instructions has risen 25 million-fold.
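
A quick back-of-the-envelope check of those ratios, using the figures quoted above (the 4004’s 92,000 instructions per second and the Threadripper numbers are the article’s own, not independently verified), might look like this:

```python
# Back-of-the-envelope check of the ratios quoted above.
# The figures are the ones cited in the article, not independently verified.

intel_4004_transistors = 2_300
intel_4004_ips = 92_000            # peak instructions per second

threadripper_transistors = 3.8e9   # transistors per chip, as quoted above
threadripper_ips = 2.36e12         # instructions per second, as quoted above

print(f"Transistor ratio: {threadripper_transistors / intel_4004_transistors:,.0f}x")
# ~1,650,000x -- the '1.65 million times' in the text

print(f"Throughput ratio: {threadripper_ips / intel_4004_ips:,.0f}x")
# ~25,650,000x -- the '25 million times faster' in the text
```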

Yet Intel’s seemingly modest 4004 chip represented a revolution so powerful it would sweep the world and eventually reshape it. In 1971, offices were filled with paper: order books and invoice folders, carbon copies and suspension files, forms in triplicate, memos written by hand or on typewriters, and desks populated with rubber stamps and typewriter ribbons.

The chip that began it all: Intel’s 4004 microchip [Thomas Nguyen/Wikipedia]

Armies of busy clerks swarmed around the paper mountains: sorting, marking, stamping, filing. Today, the office is a more contemplative place, broken only by the gentle hum of computers and the tap-tap of soft keyboards; the army of clerks has been replaced by a platoon of skilled ‘information workers’.

Today, that once little-known company in Santa Clara, California, is the world’s richest microchip manufacturer, with annual revenues of US$77.87 billion, more than 110,000 employees, and customers that include Lenovo, HP, and Dell.

When the 4004 chip hit the market, IBM was making big, heavy, room-crowding mainframe computers with spinning magnetic tapes and metal desks where operators flicked switches or fed perforated paper punch cards into slots. The arrival of the 4004 made hardly a wave: the reaction from industry to Intel’s innovation was largely “So what?”

Some did take an interest: it became obvious that the complex mathematical calculations performed on slide rules, which had propelled astronauts to the Moon in Apollo 11 only two years earlier, could be performed flawlessly and instantaneously using the silicon microchip. Soon, the first programmable electronic calculators were born, as were electronic cash registers, traffic light controllers that could sense approaching cars, and digital scales. But few really understood the true power of the microchip.

Intel had arrived at the 4004 chip by accident. At the time, logic controllers for machines had to be custom-made and task specific. In 1969, Intel received an order from Japan’s Busicom to manufacture 12 chips for a new line of desktop, spool-paper electric calculators. Intel engineer Ted Hoff had an idea: rather than make 12 separate chips, maybe he could design a single general-purpose chip — a logic device that could retrieve the specifics of each application from a silicon memory. Nine months later, he and fellow engineer Federico Faggin created a working CPU, or central processing unit — the heart of today’s personal computers.
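
Hoff’s insight, that one general-purpose chip fetching its instructions from memory can stand in for any number of hard-wired ones, is the fetch-decode-execute loop at the heart of every CPU since. Here is a minimal sketch of the idea in Python; the opcodes and the single accumulator register are invented for illustration and bear no relation to the 4004’s actual instruction set.

```python
# A toy illustration of the general-purpose processor idea: the chip's
# behaviour comes entirely from a program held in memory, not from wiring.
# These opcodes are invented for illustration; they are not the 4004's.

def run(program, memory):
    """Fetch, decode, and execute instructions until HALT."""
    acc = 0   # a single accumulator register
    pc = 0    # program counter
    while True:
        op, arg = program[pc]      # fetch
        pc += 1
        if op == "LOAD":           # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# The same 'chip' becomes an adder, a calculator logic unit, or a traffic
# counter simply by being handed a different program.
adder = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
print(run(adder, [40, 2, 0]))   # -> [40, 2, 42]
```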

Intel founders Robert Noyce and Gordon Moore with Andrew Grove [Intel]

Both Hoff and Faggin quickly realised the chip’s potential: that single US$200 microprocessor could do what, only two decades earlier, had required 18,000 vacuum tubes filling 85 cubic metres and weighing 30 tonnes: perform the tasks of a general-purpose computer. Even in 1969 there were only about 30,000 computers in the world, and they were expensive and cumbersome to use.

Hoff tried to convince Intel to buy the chip back from the Japanese, who under the contract owned the intellectual rights. Intel founder Bob Noyce, who in 1959 at Fairchild Semiconductor had co-invented the monolithic integrated circuit (the basis of all solid-state electronics today), also saw its potential, as did fellow founder Gordon Moore. Others in the small company were not so sure: Intel was in the business of building silicon memory — CPUs might get in the way.

Finally, the four men won the doubters over, and in 1971 Intel offered to return Busicom’s US$60,000 investment in exchange for the rights to the chip. Busicom, struggling financially at the time, accepted the offer. The deal hardly made a ripple at the time, even at Intel. Few realised that it may well have been the deal of the century.

After the birth of the Intel 4004, not much happened at first. In the year that followed, Polaroid launched its first instant colour camera, the television series M*A*S*H began airing on CBS, the first female FBI agents were hired, and The Godfather hit the cinemas. But that year also saw Intel launch the 8008 chip — an 8-bit microprocessor that could address 16,000 bytes, or 16 kB, of memory (four times the 4 kB of the 4004). It could process 300,000 instructions per second — a five-fold improvement on the year before.

It was the first real confirmation of ‘Moore’s Law’: in 1965, three years before co-founding Intel, Gordon Moore had made a seemingly bold prediction that the number of transistors that could be crammed onto a chip would double roughly every year (a rate he later revised to every two years), generating a huge increase in power and a rapid decrease in cost. He was right, and Moore’s Law continues to drive computing today.
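
Stated as a formula, the law is simple compounding: after n years a chip holds roughly 2^(n/T) times as many transistors, where T is the doubling period. A rough sketch, using the 4004’s 2,300 transistors as the 1971 starting point and the commonly cited two-year doubling period:

```python
# Moore's Law as simple compounding: the transistor count doubles every
# `period` years. Starting point is the 4004's 2,300 transistors in 1971;
# the two-year doubling period is the commonly cited revised figure.

def moores_law(start_count, years_elapsed, period=2.0):
    return start_count * 2 ** (years_elapsed / period)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    count = moores_law(2_300, year - 1971)
    print(f"{year}: ~{count:,.0f} transistors")

# This naive extrapolation reaches roughly 77 billion by 2021 -- the right
# order of magnitude for the biggest chips of that year, though real
# progress has been lumpier than a smooth doubling.
```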

Screenshot of the original Pong video game [Wikimedia]

That same year — 1972 — Atari was founded by Nolan Bushnell, and Pong — the black-and-white, tennis-like game that became the world’s first commercially successful video game — was released. Also that year, two young and ambitious computer aficionados, Bill Gates and Paul Allen, seized on the Intel 8008 and developed a traffic-flow recording system, creating a company called Traf-O-Data.

By year’s end, things had begun to speed up as more and more people saw the potential of the microprocessor. In 1973, Scelbi Computer Consulting launched the Scelbi-8H computer kit, a build-it-yourself computer based on the Intel 8008. It had 1 kB of random-access memory (RAM) and sold for US$565. In France, engineers perfected the Micral, the first pre-assembled computer, also based on the Intel 8008 microprocessor.

The term ‘microcomputer’ first appeared in print describing the Micral; although sold in the United States, it failed to take off. In April 1974, Intel launched another chip: the Intel 8080, an 8-bit microprocessor with 6,000 transistors, a throughput of several hundred thousand instructions per second, and the ability to address a seemingly awesome 64 kB of memory. Radio-Electronics magazine ran a feature article showing how to build a ‘microcomputer’ based on the earlier 8008, a design it dubbed the ‘Mark-8’.

By then, the field was starting to attract a crowd. The Chicago-based radio and communications company Motorola launched its own microprocessor, the 6800, an 8-bit chip with 4,000 transistors and a five-volt power supply. But it took two computer enthusiasts in Albuquerque to really set computing on fire: working under the grandiose name of Micro Instrumentation and Telemetry Systems (MITS), and saddled with debts of US$365,000, Ed Roberts and Forrest Mims launched the Altair 8800, a ready-to-assemble computer based on the new Intel 8080. Selling for a rock-bottom US$489, it made the cover of Popular Electronics in January 1975 — and set off an avalanche of orders.

After publication, MITS received 400 orders in one afternoon alone, and within three weeks had a US$250,000 backlog of orders. At a time when there were fewer than 40,000 computers in the world, the Altair — named after a planet in an episode of Star Trek — sold 2,000. It was the first machine to be called a ‘personal computer’, although it was really just a glorified square box with lights and toggle switches. If you wanted a keyboard, monitor or storage on a magnetic tape drive, you had to buy expansion cards separately.

Microsoft founders Paul Allen (left) and Bill Gates at Lakeside School in Seattle, Washington in 1970 [Wikimedia]

Nevertheless, the Altair sparked much interest; Allen and Gates ditched development of Traf-O-Data and approached MITS, offering to write software for the Altair: an interpreter for the BASIC programming language. The duo called themselves ‘Micro-Soft’, a business name they did not formalise until the following year, dropping the hyphen. Altair BASIC, the interpreter they delivered, was the first piece of PC software ever sold.

What followed was an explosion of development in the application of both Intel and Motorola chips. A rash of small start-ups appeared with new models, and just as often disappeared, their names now long forgotten: Kenback, IMSAI, Sol, Jupiter II, Southwest Technical.

Meanwhile, at established companies like Xerox and Digital Equipment Corp, engineers broke new ground in personal computer design, but management — seeing no application for the machines — scuttled most of the projects. At Digital, work that began in 1972 on the ‘DEC Datacenter’ — what may well have been the first computer workstation — was halted when management said they could see no value or application for the product.

And they had a point; in 1975, despite all the market activity, personal computers were still the preserve of hobbyists. That began to change in December 1976, when Steve Wozniak and Steve Jobs showed a Californian computer enthusiasts’ club a prototype of the Apple II. Their earlier kit model, the Apple I, priced at US$666.66, had done a roaring trade; the pair had sold their car and programmable calculator to fund it and to form Apple Computer on April Fool’s Day 1976. The next model, the ready-to-use Apple II — amazingly complete for the time, with colour graphics and sound — was launched the following year.

By the end of 1977, the Commodore PET and the Tandy TRS-80 had also entered the fray. Apple developed a floppy disk drive, and then a PC printer. Word processing and games were the major applications. But it was not until 1979, with the launch of a spreadsheet program called VisiCalc for the Apple II, that the world really took notice of personal computers. For the first time, everyone could see what computers might do: instant calculations. Alter one figure on a budget spreadsheet, for example, and the change would automatically flow through to every dependent part of the document, cascading totals up or down.
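
What made VisiCalc feel revolutionary was that recalculation: a cell can be a formula over other cells, and changing one number re-evaluates everything that depends on it. A stripped-down sketch of the idea follows; the cell names and figures are invented, and a real spreadsheet tracks dependencies and caches results rather than recomputing on every read.

```python
# A stripped-down illustration of spreadsheet-style recalculation: cells
# hold either a plain value or a formula over other cells, so changing
# one figure cascades through every dependent total.
# (Cell names and numbers are invented; real spreadsheets track
# dependencies and cache results instead of recomputing on every read.)

class Sheet:
    def __init__(self):
        self.cells = {}

    def set(self, name, value_or_formula):
        self.cells[name] = value_or_formula

    def get(self, name):
        cell = self.cells[name]
        # Formulas are re-evaluated on every read, so edits flow through.
        return cell(self) if callable(cell) else cell

sheet = Sheet()
sheet.set("sales", 1000)
sheet.set("costs", 600)
sheet.set("profit", lambda s: s.get("sales") - s.get("costs"))
sheet.set("tax",    lambda s: round(0.30 * s.get("profit")))
sheet.set("net",    lambda s: s.get("profit") - s.get("tax"))

print(sheet.get("net"))    # 280
sheet.set("costs", 750)    # alter one figure...
print(sheet.get("net"))    # ...and every dependent cell updates: 175
```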

The Apple II with monochrome monitor, game paddles, and cassette deck for memory storage [Wikimedia]

By 1980, the personal computer industry was becoming too big to ignore. IBM, scared into the market, created a crack unit of engineers in August 1980 to develop its own personal computer, a project codenamed ‘Acorn’. With no time to develop a chip, Big Blue went to Intel, which set about configuring its new 8088 chip for IBM. With no time to develop an operating system, IBM also contracted a fledgling software company near Seattle, Microsoft, to write one, impressed by the young Bill Gates and his pioneering work with Paul Allen developing BASIC for the Altair 8800 years before.

In a piece of bravado that is now part of computer industry mythology, Gates confidently pitched the idea of DOS (a Disk Operating System) to IBM, which immediately agreed to license it — before Microsoft even owned it. What Gates had described was essentially QDOS (the ‘quick and dirty operating system’), which actually belonged to Seattle Computer Products and which Gates merely held a licence to use. After closing the IBM deal, Gates bought the rights to QDOS outright from the troubled Seattle company and renamed it MS-DOS.

When the IBM PC was launched in August 1981, it single-handedly legitimised the personal computer as a serious tool. Relying on a 4.77 MHz Intel 8088 chip, and with only 64 kB of RAM and a single floppy drive, it went on sale for a pricey US$3,000. And yet it sold like hotcakes. With the marketing muscle of IBM and the stamp of business approval it brought, both Intel and Microsoft were suddenly guaranteed a future.

But it had a knock-on effect too: suddenly, personal computers made by Apple, Commodore, Tandy, and even newcomers like Sinclair gained instant respect. Within a year, others were on the scene, or had crash programs to get in: Compaq, Texas Instruments, Epson, Atari, Amiga, Osborne, Olivetti, Hewlett-Packard, Toshiba, Zenith. Others muscled into the microchip manufacturing stakes, most notably Japan’s NEC Corp, which quickly became the world’s second-largest microchip maker by 1985, and Texas Instruments, which had made the world’s first commercial silicon transistor in 1954 and the first hand-held calculator in 1967. Even Digital, its management now convinced, entered the market in a big way.

By the end of 1981, an estimated 900,000 computers had been shipped worldwide, and Apple became the first PC company to reach US$1 billion in annual sales. Application programs like dBase and Lotus 1–2–3 flooded the market, and at the end of 1982, another 1.4 million personal computers were shipped.

By January 1983, the revolution was so apparent that Time magazine named the personal computer its 1982 ‘Machine of the Year’, the first time the title traditionally given to a ‘Man of the Year’ had gone to an object.

“In the mid-1970s, someone came to me with an idea for what was basically the PC,” Intel chairman Gordon Moore recalls. “The idea was that we could outfit an 8080 processor with a keyboard and a monitor and sell it in the home market. I asked, ‘What’s it good for?’. And the only answer was that a housewife could keep her recipes on it. I personally didn’t see anything useful in it, so we never gave it another thought.”

No-one, it seems — not even the most avid enthusiasts — could foresee the widespread applicability and incredible popularity of the personal computer. Certainly, none would have imagined that Intel’s first modest silicon chip would spawn, 50 years later, a tidal wave of more than 1.3 billion personal computers populating offices and homes around the world. Nor could they have predicted that, within 30 years, more PCs would be manufactured than cars or televisions.

In the 1990s, the late Intel chief executive Andrew Grove coined a phrase that has become the unofficial motto of Silicon Valley, the birthplace of the microchip and the nexus of the world’s high technology industry: “Only the paranoid survive.”

He was right. Many a promising company made possible by the power of the microchip was wrecked on the shoals of technology’s unpredictable, fast-shifting currents. Even some of the titans of Silicon Valley foundered — manufacturers Compaq, Wang, and Palm; search engine AltaVista; browser maker Netscape; Internet provider AOL; and social network MySpace. All are now relics of a bygone era.

Apple’s Steve Jobs launching the second generation iPhone in 2008 [Tom Coates/Wikimedia]

Even today’s industry stalwarts have, on occasion, had to paddle furiously to stay afloat: in 1993, IBM almost gagged on a year-end loss of US$4.96 billion, the highest annual loss for any U.S. company at the time. Intel and Motorola had their anxious years when product lines failed to catch fire. And Apple itself stared into the yawning abyss of extinction in 1997, saved only by the return of co-founder Steve Jobs, who convinced Gates to invest US$150 million in the company. Jobs’s flurry of innovations on his return — iMacs, iPods, iPads, and iPhones — made Apple the world’s largest technology company by revenue (US$274.5 billion in 2020) and the world’s most valuable company.

The age of quantum computers may be around the corner, but they will not completely replace the classical computers we use today, which rely on silicon microchips built from metal–oxide–semiconductor field-effect transistors (MOSFETs), the same fundamental technology behind the 4004. This architecture is now so advanced, and so refined, that the smallest features on the best chips are just 10 nanometres across: 700 times smaller than a human red blood cell, and about the size of a virus.

Silicon microchips also have qualities that quantum computers may never match, such as storing data for long periods: the memory of a quantum computer lasts a few hundred microseconds at most, and most quantum machines operate only at close to absolute zero (-270˚C). But quantum computers will take over tasks that classical computers find difficult, such as searching huge amounts of data or modelling the interactions between atoms in a chemical reaction.

Whether Moore’s Law — the doubling of the number of transistors on a chip roughly every two years — will continue for the next 50 years is debatable: physics does have limits. But engineers keep finding clever solutions, so the silicon microchip may yet have an enduring future.

In 1996, I spoke with the late Albert Yu, a 30-year veteran at Intel who ran the heart of the company’s business, the Microprocessor Products Group. He’d just finished speaking at an Intel event in Hong Kong to mark the microchip’s 25th anniversary. “None of us thought it was going to be this big,” he told me. “We thought it would be substantial, we thought it had potential. But we had no idea it would go this far.”

