What is a computer?
Computer, electronic device that can receive a set of instructions, or program, and then carry out this program by performing calculations on numerical data or by compiling and correlating other forms of information. The modern world of high technology could not have come about except for the development of the computer. Different types and sizes of computers find uses throughout society in the storage and handling of data, from secret governmental files to banking transactions to private household accounts.
Computers have opened up a new era in manufacturing through the techniques of automation, and they have enhanced modern communication systems. They are essential tools in almost every field of research and applied technology, from constructing models of the universe to producing tomorrow's weather reports, and their use has in itself opened up new areas of conjecture. Database services and computer networks make available a great variety of information sources. The same advanced techniques also make possible invasions of privacy and of restricted information sources, but computer crime has become one of the many risks that society must face if it would enjoy the benefits of modern technology.
Types of Computers
Two main types of computers are in use today, analog and digital, although the term computer is often used to mean only the digital type. Analog computers exploit the mathematical similarity between physical interrelationships in certain problems, and employ electronic or hydraulic circuits (see FLUIDICS) to simulate the physical problem. Digital computers solve problems by performing sums and by dealing with each number digit by digit. Installations that contain elements of both digital and analog computers are called hybrid computers. They are usually used for problems in which large numbers of complex equations, known as time integrals, are to be computed. Data in analog form can also be fed into a digital computer by means of an analog-to-digital converter, and the same is true of the reverse situation (see DIGITAL-TO-ANALOG CONVERTER).
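The analog-to-digital conversion mentioned above can be illustrated with a short sketch in Python (a modern language used here purely for illustration). The signal, sampling rate, and bit depth below are invented for the example and are not properties of any particular converter.

```python
# Sketch of analog-to-digital conversion: sample a continuous signal
# and quantize each sample to one of 2**bits discrete levels.
# Signal, rate, and bit depth are illustrative assumptions.
import math

def adc_sample(signal, duration_s, rate_hz, bits, v_min=-1.0, v_max=1.0):
    levels = 2 ** bits
    step = (v_max - v_min) / (levels - 1)
    samples = []
    n = int(duration_s * rate_hz)
    for i in range(n):
        t = i / rate_hz
        v = max(v_min, min(v_max, signal(t)))      # clip to converter range
        samples.append(round((v - v_min) / step))  # nearest digital code
    return samples

# A 5 Hz "voltage" sampled 100 times per second with 8-bit resolution
codes = adc_sample(lambda t: math.sin(2 * math.pi * 5 * t), 0.2, 100, 8)
```

Each sampled voltage becomes an integer code from 0 to 255, which a digital computer can then process like any other number.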
The analog computer is an electronic or hydraulic device that is designed to handle input in terms of, for example, voltage levels or hydraulic pressures, rather than numerical data. The simplest analog calculating device is the slide rule, which employs lengths of specially calibrated scales to facilitate multiplication, division, and other functions. In a typical electronic analog computer, the inputs are converted into voltages that may be added or multiplied using specially designed circuit elements. The answers are continuously generated for display or for conversion to another desired form.
Everything that a digital computer does is based on one operation: the ability to determine if a switch, or "gate," is open or closed. That is, the computer can recognize only two states in any of its microscopic circuits: on or off, high voltage or low voltage, or, in the case of numbers, 0 or 1. The speed at which the computer performs this simple act, however, is what makes it a marvel of modern technology. Computer speeds are measured in megahertz (MHz), or millions of cycles per second. A computer with a "clock speed" of 10 MHz, a fairly representative speed for a microcomputer, is capable of executing 10 million discrete operations each second. Business microcomputers can perform 15 to 40 million operations per second, and supercomputers used in research and defense applications attain speeds of billions of cycles per second. Digital computer speed and calculating power are further enhanced by the amount of data handled during each cycle. If a computer checks only one switch at a time, that switch can represent only two commands or numbers; thus ON would symbolize one operation or number, and OFF would symbolize another. By checking groups of switches linked as a unit, however, the computer increases the number of operations it can recognize at each cycle. For example, a computer that checks two switches at one time can represent four numbers (0 to 3) or can execute one of four instructions at each cycle, one for each of the following switch patterns: OFF-OFF (0); OFF-ON (1); ON-OFF (2); or ON-ON (3).
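The switch-pattern counting described above can be demonstrated in a few lines of Python (used here purely for illustration); the switch_patterns helper is invented for the example.

```python
# Enumerate the patterns available from a group of binary "switches".
# With two switches the computer can distinguish four states (0 to 3),
# matching the OFF-OFF through ON-ON patterns described in the text.
from itertools import product

def switch_patterns(n_switches):
    # Each pattern is a tuple of OFF/ON values; its position in the
    # list is the number that pattern encodes.
    return list(product(("OFF", "ON"), repeat=n_switches))

patterns = switch_patterns(2)
# patterns[0] is ("OFF", "OFF"), encoding 0; patterns[3] is ("ON", "ON"), encoding 3
```

Doubling the group from two switches to three doubles the number of distinguishable patterns from four to eight, which is why n switches yield 2 to the power n states.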
The first adding machine, a precursor of the digital computer, was devised in 1642 by the French philosopher Blaise Pascal. This device employed a series of ten-toothed wheels, each tooth representing a digit from 0 to 9. The wheels were connected so that numbers could be added to each other by advancing the wheels by a correct number of teeth. In the 1670s the German philosopher and mathematician Gottfried Wilhelm von Leibniz improved on this machine by devising one that could also multiply. The French inventor Joseph Marie Jacquard, in designing an automatic loom, used thin, perforated wooden boards to control the weaving of complicated designs. During the 1880s the American statistician Herman Hollerith conceived the idea of using perforated cards, similar to Jacquard's boards, for processing data. Employing a system that passed punched cards over electrical contacts, he was able to compile statistical information for the 1890 U.S. census.
The Analytical Engine
Also in the 19th century, the British mathematician and inventor Charles Babbage worked out the principles of the modern digital computer. He conceived a number of machines, such as the Difference Engine, that were designed to handle complicated mathematical problems. Many historians consider Babbage and his associate, the British mathematician Augusta Ada Byron (Lady Lovelace, 1815-52), the daughter of the English poet Lord Byron, the true inventors of the modern digital computer. The technology of their time was not capable of translating their sound concepts into practice; but one of their inventions, the Analytical Engine, had many features of a modern computer. It had an input stream in the form of a deck of punched cards, a "store" for saving data, a "mill" for arithmetic operations, and a printer that made a permanent record.
Analog computers began to be built at the start of the 20th century. Early models calculated by means of rotating shafts and gears. Numerical approximations of equations too difficult to solve in any other way were evaluated with such machines. During both world wars, mechanical and, later, electrical analog computing systems were used as torpedo course predictors in submarines and as bombsight controllers in aircraft. Another system was designed to predict spring floods in the Mississippi River Basin. In the 1940s, Howard Aiken, a Harvard University mathematician, created what is usually considered the first digital computer. This machine was constructed from mechanical adding machine parts. The instruction sequence to be used to solve a problem was fed into the machine on a roll of punched paper tape, rather than being stored in the computer. In 1945, however, a computer with program storage was built, based on the concepts of the Hungarian-American mathematician John von Neumann. The instructions were stored within a so-called memory, freeing the computer from the speed limitations of the paper tape reader during execution and permitting problems to be solved without rewiring the computer.
The rapidly advancing field of electronics led to construction of the first general-purpose all-electronic computer in 1946 at the University of Pennsylvania by the American engineer John Presper Eckert, Jr., and the American physicist John William Mauchly. (Another American physicist, John Vincent Atanasoff, later successfully claimed that certain basic techniques he had developed were used in this computer.) Called ENIAC, for Electronic Numerical Integrator And Computer, the device contained 18,000 vacuum tubes and had a speed of several hundred multiplications per second. Its program was wired into the processor and had to be manually altered. The use of the transistor in computers in the late 1950s marked the advent of smaller, faster, and more versatile logical elements than were possible with vacuum-tube machines. Because transistors use much less power and have a much longer life, this development alone was responsible for the improved machines called second-generation computers. Components became smaller, as did intercomponent spacings, and the system became much less expensive to build.
Late in the 1960s the integrated circuit, or IC, was introduced, making it possible for many transistors to be fabricated on one silicon substrate, with interconnecting wires plated in place. The IC resulted in a further reduction in price, size, and failure rate. The microprocessor became a reality in the mid-1970s with the introduction of the large scale integrated (LSI) circuit and, later, the very large scale integrated (VLSI) circuit, with many thousands of interconnected transistors etched into a single silicon substrate. To return, then, to the "switch-checking" capabilities of a modern computer: computers in the 1970s generally were able to check eight switches at a time. That is, they could check eight binary digits, or bits, of data at every cycle. A group of eight bits is called a byte, each byte containing 256 possible patterns of ONs and OFFs (or 1's and 0's). Each pattern is the equivalent of an instruction, a part of an instruction, or a particular type of datum, such as a number, a character, or a graphics symbol. The pattern 11010010, for example, might be binary data (in this case, the decimal number 210; see NUMBER SYSTEMS), or it might tell the computer to compare data stored in its switches to data stored in a certain memory-chip location. The development of processors that can handle 16, 32, and 64 bits of data at a time has increased the speed of computers. The complete collection of recognizable patterns, the total list of operations, of which a computer is capable is called its instruction set. Both factors, the number of bits handled at a time and the size of the instruction set, continue to increase with the ongoing development of modern digital computers.
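The byte arithmetic described above is easy to verify; the short Python fragment below (illustrative only) confirms that the pattern 11010010 is the decimal number 210 and that eight bits yield 256 patterns.

```python
# A byte is eight bits; its 256 distinct patterns can encode the
# numbers 0 through 255. The pattern 11010010 from the text is 210:
# 128 + 64 + 16 + 2 = 210.
value = int("11010010", 2)   # interpret the bit pattern as a base-2 number
n_patterns = 2 ** 8          # number of distinct patterns in one byte
```

The same eight-bit pattern could equally well be treated as an instruction code or a character; which interpretation applies depends entirely on context, as the text notes.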
Modern digital computers are all conceptually similar, regardless of size. Nevertheless, they can be divided into several categories on the basis of cost and performance: the personal computer or microcomputer, a relatively low-cost machine usually of desk-top size (some, called laptops, are small enough to fit in a briefcase); the workstation, a microcomputer with enhanced graphics and communications capabilities that make it especially useful for office work; the minicomputer, an appliance-sized computer, generally too expensive for personal use, with capabilities suited to a business, school, or laboratory; and the mainframe computer, a large expensive machine with the capability of serving the needs of major business enterprises, government departments, scientific research establishments, or the like (the largest and fastest of these are called supercomputers). A digital computer is not actually a single machine, in the sense that most people think of computers. Instead it is a system composed of five distinct elements: (1) a central processing unit; (2) input devices; (3) memory storage devices; (4) output devices; and (5) a communications network, called a "bus," that links all the elements of the system and connects the system to the external world.
Central Processing Unit (CPU)
The CPU may be a single chip or a series of chips that perform arithmetic and logical calculations and that time and control the operations of the other elements of the system. Miniaturization and integration techniques made possible the development of a CPU chip called a microprocessor, which incorporates additional circuitry and memory. The result is smaller computers and reduced support circuitry. Microprocessors are used in most of today's personal computers. Most CPU chips and microprocessors are composed of four functional sections: (1) an arithmetic/logic unit; (2) registers; (3) a control section; and (4) an internal bus. The arithmetic/logic unit gives the chip its calculating ability and permits arithmetical and logical operations. The registers are temporary storage areas that hold data, keep track of instructions, and hold the location and results of these operations. The control section has three principal duties. It times and regulates the operations of the entire computer system; its instruction decoder reads the patterns of data in a designated register and translates the pattern into an activity, such as adding or comparing; and its interrupt unit indicates the order in which individual operations use the CPU, and regulates the amount of CPU time that each operation may consume. The last segment of a CPU chip or microprocessor is its internal bus, a network of communication lines that connects the internal elements of the processor and also leads to external connectors that link the processor to the other elements of the computer system. The three types of CPU buses are: (1) a control bus consisting of a line that senses input signals and another line that generates control signals from within the CPU; (2) the address bus, a one-way line from the processor that handles the location of data in memory addresses; and (3) the data bus, a two-way transfer line that both reads data from memory and writes new data into memory.
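The control section's fetch-decode-execute cycle can be sketched with a toy machine in Python. The three-instruction set (LOAD, ADD, HALT) and the single register are invented for illustration and are far simpler than any real CPU's instruction set.

```python
# Toy sketch of the fetch-decode-execute cycle the control section performs.
# The instruction set here (LOAD, ADD, HALT) is invented for illustration.
def run(program):
    registers = {"A": 0}   # one general-purpose register
    pc = 0                 # program counter, maintained by the control section
    while True:
        opcode, operand = program[pc]   # fetch the next instruction
        pc += 1
        if opcode == "LOAD":            # decode and execute
            registers["A"] = operand    # place a value in the register
        elif opcode == "ADD":
            registers["A"] += operand   # arithmetic/logic unit at work
        elif opcode == "HALT":
            return registers["A"]       # stop and report the result

result = run([("LOAD", 1), ("ADD", 2), ("HALT", None)])
```

Even this tiny loop shows the division of labor described above: the program counter and dispatch correspond to the control section and instruction decoder, the register dictionary to the registers, and the addition to the arithmetic/logic unit.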
Input Devices
These devices enable a computer user to enter data, commands, and programs into the CPU. The most common input device is the keyboard. Information typed at the typewriter-like keyboard is translated by the computer into recognizable patterns. Other input devices include light pens, which transfer graphics information from electronic pads into the computer; joysticks and mice, which translate physical motion into motion on a computer video display screen; light scanners, which "read" words or symbols on a printed page and "translate" them into electronic patterns that the computer can manipulate and store; and voice recognition modules, which take spoken words and translate them into digital signals for the computer. Storage devices can also be used to input data into the processing unit.
Memory Storage Devices
Computer systems can store data internally (in memory) and externally (on storage devices). Internally, instructions or data can be temporarily stored in silicon RAM (Random Access Memory) chips that are mounted directly on the computer's main circuit board, or in chips mounted on peripheral cards that plug into the computer's main circuit board. These RAM chips consist of up to a million switches that are sensitive to changes in electric current. So-called static RAM chips hold their bits of data as long as current flows through the circuit, whereas dynamic RAM (DRAM) chips need high or low voltages applied at regular intervals (every two milliseconds or so) if they are not to lose their information. Another type of internal memory consists of silicon chips on which all switches are already set. The patterns on these ROM (Read-Only Memory) chips form commands, data, or programs that the computer needs to function correctly. RAM chips are like pieces of paper that can be written on, erased, and used again; ROM chips are like a book, with its words already set on each page. Both RAM and ROM chips are linked by circuitry to the CPU. External storage devices, which may physically reside within the computer's main processing unit, are external to the main circuit board. These devices store data as charges on a magnetically sensitive medium such as an audio tape or, more commonly, on a disk coated with a fine layer of metallic particles. The most common external storage devices are so-called floppy and hard disks, although most large computer systems use banks of magnetic tape storage units. Floppy disks can contain from several hundred thousand bytes to well over a million bytes of data, depending on the system. Hard, or "fixed," disks cannot be removed from their disk-drive cabinets, which contain the electronics to read and write data onto the magnetic disk surfaces. Hard disks can store from several million bytes to a few hundred million bytes.
CD-ROM technology, which uses the same laser techniques that are used to create audio compact disks (CDs), promises storage capacities in the range of several gigabytes (billion bytes) of data. As a comparison, all 28 text volumes of this encyclopedia would fill only about one fourth of a single standard-size CD-ROM disk.
Output Devices
These devices enable the user to see the results of the computer's calculations or data manipulations. The most common output device is the video display terminal (VDT), a monitor that displays characters and graphics on a television-like screen. A VDT usually has a cathode-ray tube (CRT) like an ordinary television set, but small, portable computers may use liquid crystal displays (LCD) or electroluminescent screens. Other standard output devices include printers and modems. A modem links two or more computers by translating digital signals into analog signals so that data can be transmitted via telecommunications.
Operating Systems
Different types of peripheral devices (disk drives, printers, communications networks, and so on) handle and store data differently from the way the computer handles and stores it. Internal operating systems, usually stored in ROM memory, were developed primarily to coordinate and translate data flows from dissimilar sources, such as disk drives or co-processors (processing chips that perform simultaneous but different operations from the central unit). An operating system is a master control program, permanently stored in memory, that interprets user commands requesting various kinds of services, such as display, print, or copy a data file; list all files in a directory; or execute a particular program.
A program is a sequence of instructions that tells the hardware of a computer what operations to perform on data. Programs can be built into the hardware itself, or they may exist independently in a form known as software. In some specialized, or "dedicated," computers the operating instructions are embedded in their circuitry; common examples are the microcomputers found in calculators, wristwatches, automobile engines, and microwave ovens. A general-purpose computer, on the other hand, contains some built-in programs (in ROM) or instructions (in the processor chip), but it depends on external programs to perform useful tasks. Once a computer has been programmed, it can do only as much or as little as the software controlling it at any given moment enables it to do. Software in widespread use includes a wide range of applications programs: instructions to the computer on how to perform various tasks.
A computer must be given instructions in a "language" that it understands; that is, a particular pattern of binary digital information. On the earliest computers, programming was a difficult, laborious task, because vacuum-tube ON-OFF switches had to be set by hand. Teams of programmers often took days to program simple tasks such as sorting a list of names. Since that time a number of computer languages have been devised, some with particular kinds of functioning in mind and others aimed more at ease of use: the "user-friendly" approach.
Unfortunately, the computer's own binary-based language, or machine language, is difficult for humans to use. The programmer must input every command and all data in binary form, and a basic operation such as comparing the contents of a register to the data in a memory-chip location might look like this: 11001010 00010111 11110101 00101011. Machine-language programming is such a tedious, time-consuming task that the time saved in running the program rarely justifies the days or weeks needed to write the program.
One method programmers devised to shorten and simplify the process is called "assembly-language" programming. By assigning a short (usually three-letter) mnemonic code to each machine-language command, assembly-language programs could be written and "debugged" (cleaned of logic and data errors) in a fraction of the time needed by machine-language programmers. In assembly language, each mnemonic command and its symbolic operands equals one machine instruction. An "assembler" program translates the mnemonic "opcodes" (operation codes) and symbolic operands into binary machine language, which the computer can then execute. Assembly language, however, can be used only with one type of CPU chip or microprocessor. Programmers who expended much time and effort to learn how to program one computer had to learn a new programming style each time they worked on another machine. What was needed was a shorthand method by which one symbolic statement could represent a sequence of many machine-language instructions, and a way that would allow the same program to run on several types of machines. These needs led to the development of so-called high-level languages.
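What an assembler does can be sketched in Python with a toy instruction set; the mnemonics and bit patterns below are invented for illustration and do not correspond to any real CPU.

```python
# Minimal sketch of an assembler: translate three-letter mnemonic
# opcodes into binary machine codes. The mnemonics and bit patterns
# are invented for illustration, not a real CPU's instruction set.
OPCODES = {"LDA": 0b0001, "ADD": 0b0010, "STA": 0b0011, "HLT": 0b1111}

def assemble(source_lines):
    machine_code = []
    for line in source_lines:
        parts = line.split()
        opcode = OPCODES[parts[0]]                    # mnemonic -> bit pattern
        operand = int(parts[1]) if len(parts) > 1 else 0
        machine_code.append((opcode << 4) | operand)  # pack into one byte
    return machine_code

code = assemble(["LDA 1", "ADD 2", "HLT"])
```

Each mnemonic line becomes exactly one machine instruction, mirroring the one-to-one correspondence described above; a different CPU would need a different OPCODES table, which is why assembly language is tied to one processor family.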
High-level languages often use English-like words (for example, LIST, PRINT, OPEN) as commands that might stand for a sequence of tens or hundreds of machine-language instructions. The commands are entered from the keyboard or from a program in memory or in a storage device, and they are intercepted by a program that translates them into machine-language instructions. Translator programs are of two kinds: interpreters and compilers. With an interpreter, programs that "loop" back to re-execute part of their instructions reinterpret the same instruction each time it appears, so interpreted programs run much more slowly than machine-language programs. Compilers, by contrast, translate an entire program into machine language prior to execution, so such programs run as rapidly as though they were written directly in machine language. The American computer scientist Grace Hopper is credited with implementing the first commercially oriented computer language. After programming an experimental computer at Harvard University, she worked on the UNIVAC I and II computers and developed a commercially usable high-level programming language called FLOW-MATIC. To facilitate computer use in scientific applications, IBM then developed a language that would simplify work involving complicated mathematical formulas. Begun in 1954 and completed in 1957, FORTRAN (FORmula TRANslator) was the first comprehensive high-level programming language that was widely used. In 1957 the Association for Computing Machinery set out to develop a universal language that would correct some of FORTRAN's perceived faults. A year later they released ALGOL (ALGOrithmic Language), another scientifically oriented language; widely used in Europe in the 1960s and 1970s, it has since been superseded by newer languages, while FORTRAN continues to be used because of the huge investment in existing programs.
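The reason interpreted loops run slowly can be made concrete with a toy interpreter in Python; the three-instruction mini-language below is invented for illustration. Each time the program jumps back, the dispatch loop must decode the same line again.

```python
# Sketch of an interpreter's dispatch loop. Every pass through a loop
# re-decodes the same source lines, which is why interpreted programs
# run more slowly than compiled ones. The mini-language is invented.
def interpret(program):
    env = {}           # variable storage
    decode_count = 0   # how many times any line was decoded
    pc = 0
    while pc < len(program):
        decode_count += 1
        op, *args = program[pc].split()   # decode the line (again, if looping)
        if op == "SET":                   # SET var value
            env[args[0]] = int(args[1]); pc += 1
        elif op == "DEC":                 # DEC var
            env[args[0]] -= 1; pc += 1
        elif op == "JNZ":                 # JNZ var line: jump back if non-zero
            pc = int(args[1]) if env[args[0]] != 0 else pc + 1
    return env, decode_count

# A loop that counts n down from 3; the three lines are decoded 7 times in all.
env, decodes = interpret(["SET n 3", "DEC n", "JNZ n 1"])
```

A compiler would translate these lines into machine code once, before execution, so the loop body would not be re-analyzed on every pass; that one-time translation is the performance difference the text describes.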
COBOL (COmmon Business Oriented Language), a commercial and business programming language, concentrated on data organization and file handling and is widely used today in business. BASIC (Beginner's All-purpose Symbolic Instruction Code) was developed at Dartmouth College in the early 1960s for use by nonprofessional computer users. The language came into almost universal use with the microcomputer explosion of the 1970s and 1980s. Condemned as slow, inefficient, and inelegant by its detractors, BASIC is nevertheless simple to learn and easy to use. Because many early microcomputers were sold with BASIC built into the hardware (in ROM memory) the language rapidly came into widespread use. As a very simple example of a BASIC program, consider the addition of the numbers 1 and 2, and the display of the result. This is written as follows (the numerals 10-40 are line numbers):

10 LET A = 1
20 LET B = 2
30 LET C = A + B
40 PRINT C
Although hundreds of different computer languages and variants exist, several others deserve mention. PASCAL, originally designed as a teaching tool, is now one of the most popular microcomputer languages. LOGO was developed to introduce children to computers. C, a language Bell Laboratories designed in the 1970s, is widely used in developing systems programs, such as language translators. LISP and PROLOG are widely used in artificial intelligence.
One ongoing trend in computer development is microminiaturization, the effort to compress more circuit elements into smaller and smaller chip space. Researchers are also trying to speed up circuitry functions through the use of superconductivity, the disappearance of electrical resistance in certain materials when they are cooled to extremely low temperatures. The "fifth-generation" computer effort to develop computers that can solve complex problems in what might eventually be called creative ways is another trend in computer development, the ideal goal being true artificial intelligence. One path actively being explored is the parallel-processing computer, which uses many chips to perform several different tasks at the same time. Parallel processing may eventually be able to duplicate to some degree the complex feedback, approximating, and assessing functions of human thought. Another ongoing trend is the increase in computer networking, which now also employs satellites to link computers globally.