A computer is a device that performs calculations on input data to produce output. The digital computer is the type in common use today. However, there is a much older type: the analog computer. Mechanical analog computers have been used since ancient times; the slide rule (invented in the 1600s) is one example. Electronic analog computers are still used today, but mainly for specialized tasks (e.g. control systems).
In an analog computer, processing is done using continuous physical quantities (real numbers) instead of discrete digital values. A real number can represent any value to the precision allowed by the physical device's tolerances, so an analog computer can work on calculus problems directly. While a digital computer requires many simple transistors (each holding a 1 or 0) to store one discrete value, an analog computer can use a single capacitor to store one continuous value. Analog computers can provide good models of the physical world: the mathematics governing masses, springs, fluid flow, etc. can be mapped directly onto an electronic circuit built from operational amplifiers. Also, all processing is done in parallel, as opposed to the sequential nature of digital computers. Output varies with input in nearly 'real time', making analog computers useful in many control systems.
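To make the mass-and-spring mapping concrete, the sketch below digitally simulates the integrator loop an analog computer would wire up for a damped mass-spring system (m·x'' = −k·x − c·x'). The parameter names, values, and the simple Euler stepping are illustrative assumptions, not taken from the text; on a real analog machine the two integrations would be performed continuously by two op-amp integrators rather than in a loop.

```python
def simulate_mass_spring(m=1.0, k=1.0, c=0.2, x0=1.0, v0=0.0,
                         dt=0.001, steps=10000):
    """Integrate x'' = (-k*x - c*v) / m with two chained integrations,
    mirroring the two op-amp integrators of an analog setup.
    All parameter values here are arbitrary illustrative choices."""
    x, v = x0, v0
    for _ in range(steps):
        a = (-k * x - c * v) / m   # summing junction: acceleration
        v += a * dt                # first integrator: velocity
        x += v * dt                # second integrator: position
    return x, v

# Run for 10 simulated seconds; damping should shrink the oscillation.
x_final, v_final = simulate_mass_spring()
```

Note the contrast with the analog original: the digital version must discretize time (the `dt` step), whereas the op-amp circuit solves the differential equation continuously and in parallel.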
During World War II, analog computers grew in complexity and power as controllers for weapons systems. However, the advent of the modern electronic digital computer after the war rendered them largely obsolete. Digital computers offer great advantages. Miniaturization allows millions of simple binary transistors to be placed on a single chip. Digital numbers can easily separate mantissa and exponent (scientific notation), giving a virtually unlimited dynamic range. And digital numbers are largely immune to noise and signal loss.
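The mantissa/exponent split can be seen directly in how floating-point numbers are stored. The snippet below is a small illustration, assuming Python's standard `math.frexp`, which decomposes a float into a mantissa in [0.5, 1) and a power-of-two exponent; the wide dynamic range follows from the exponent field alone.

```python
import math

# frexp splits a float into mantissa * 2**exponent, the binary
# counterpart of scientific notation.
mantissa, exponent = math.frexp(6.0)   # 6.0 == 0.75 * 2**3

# The same fixed-size format spans an enormous dynamic range:
# values 600 orders of magnitude apart coexist without rescaling,
# something an analog voltage bounded by its supply rails cannot do.
huge, tiny = 1e300, 1e-300
product = huge * tiny   # ~1.0, no overflow or underflow
```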