The first microprocessor was the Intel 4004, produced in 1971. Originally developed for a calculator, and revolutionary for its time, it was a 4-bit processor containing 2,300 transistors that could perform only 60,000 operations per second. The first 8-bit microprocessor was the Intel 8008, developed in 1972 to run computer terminals; it contained 3,300 transistors. The first truly general-purpose microprocessor, developed in 1974, was the 8-bit Intel 8080 (see Microprocessor, 8080), which contained 4,500 transistors and could execute 200,000 instructions per second. By 1989, 32-bit microprocessors containing 1.2 million transistors and capable of executing 20 million instructions per second had been introduced.
In the 1990s the number of transistors on microprocessors continued to double roughly every 18 months. The rate of change followed an early prediction made by American semiconductor pioneer Gordon Moore. In 1965 Moore predicted that the number of transistors on a computer chip would double every year, a prediction that has come to be known as Moore's Law. Chips of the mid-1990s included the Intel Pentium Pro, containing 5.5 million transistors; Sun Microsystems' UltraSPARC II, containing 5.4 million transistors; the PowerPC 620, developed jointly by Apple, IBM, and Motorola, containing 7 million transistors; and Digital Equipment Corporation's Alpha 21164A, containing 9.3 million transistors. By the end of the decade microprocessors contained many millions of transistors, transferred 64 bits of data at once, and performed billions of instructions per second.
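The doubling described above is simple exponential growth, and it can be sketched in a few lines of code. The function below is a hypothetical illustration (not from the article): it projects a transistor count forward assuming a fixed doubling period, using the 18-month figure the text gives for the 1990s.

```python
def projected_transistors(start_count: int, start_year: float,
                          target_year: float, doubling_years: float = 1.5) -> int:
    """Project a transistor count forward, assuming a doubling
    every `doubling_years` years (1.5 years = 18 months)."""
    doublings = (target_year - start_year) / doubling_years
    return round(start_count * 2 ** doublings)

# Illustrative projection from the Intel 4004's 2,300 transistors (1971):
# 18 years / 1.5 years per doubling = 12 doublings, i.e. a factor of 4,096.
print(projected_transistors(2300, 1971, 1989))  # prints 9420800
```

Note that this idealized projection (about 9.4 million transistors by 1989) overshoots the 1.2 million the article reports for that year; real growth in the 1970s and 1980s was somewhat slower than a strict 18-month doubling.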