History of the development of computer technology. Generations of computers. The mechanical stage of the development of computing technology

As soon as people grasped the concept of "quantity", they began to devise tools that would simplify and speed up counting. Today, super-powerful computers, based on the principles of mathematical calculation, process, store and transmit information - the most important resource and engine of human progress. It is not difficult to get an idea of how the development of computer technology took place by briefly considering the main stages of this process.

The main stages of the development of computer technology

The most popular classification proposes to highlight the main stages of the development of computer technology on a chronological basis:

  • Manual stage. It began at the dawn of the human era and continued until the middle of the 17th century. During this period, the basics of counting emerged. Later, with the formation of positional number systems, devices appeared (the abacus, the counting frame, and later the slide rule) that made digit-by-digit calculation possible.
  • Mechanical stage. It began in the middle of the 17th century and lasted almost until the end of the 19th century. The level of development of science during this period made it possible to create mechanical devices that performed basic arithmetic operations and automatically carried over to the higher digits.
  • The electromechanical stage is the shortest of all the stages in the history of computing technology: it lasted only about 60 years, from the invention of the first tabulator in 1887 until 1946, when the first electronic computer (ENIAC) appeared. New machines, whose operation was based on electric drives and electric relays, made it possible to perform calculations with much greater speed and accuracy, but the counting process still had to be supervised by a person.
  • The electronic stage began in the second half of the last century and continues today. This is the story of six generations of electronic computers - from the very first giant machines built on vacuum tubes to the ultra-powerful modern supercomputers, with huge numbers of processors working in parallel and capable of executing many instructions simultaneously.

The division of the stages of development of computer technology according to the chronological principle is rather arbitrary. While some types of computers were still in use, the prerequisites for the next ones were already being actively created.

The very first counting devices

The earliest counting tool known to the history of the development of computer technology is the ten fingers on human hands. Counting results were initially recorded using fingers, notches on wood and stone, special sticks, and knots.

With the advent of writing, various ways of writing numbers appeared and developed, and positional number systems were invented (decimal in India, sexagesimal in Babylon).

Around the 4th century BC, the ancient Greeks began to count using an abacus. Initially, it was a clay flat tablet with stripes applied to it with a sharp object. Counting was carried out by placing small stones or other small objects on these stripes in a certain order.

In China, in the 4th century AD, a seven-bead abacus appeared - the suanpan. Wires or ropes - nine or more - were stretched across a rectangular wooden frame. Another wire (rope), stretched perpendicular to the others, divided the suanpan into two unequal parts. In the larger compartment, called "earth", five beads were strung on each wire; in the smaller compartment, called "sky", there were two. Each of the wires corresponded to a decimal place.

The traditional soroban abacus became popular in Japan in the 16th century, having arrived there from China. Around the same time, the counting frame (abacus) appeared in Russia.

In the 17th century, building on the logarithms discovered by the Scottish mathematician John Napier, the Englishman Edmund Gunter invented the slide rule. This device was constantly improved and has survived to this day. It allows you to multiply and divide numbers, raise them to powers, and determine logarithms and trigonometric functions.

The slide rule became a device that completed the development of computer technology at the manual (pre-mechanical) stage.

The first mechanical calculating devices

In 1623, the German scientist Wilhelm Schickard created the first mechanical "calculator", which he called a counting clock. The mechanism of this device resembled an ordinary clock, consisting of gears and sprockets. However, this invention became known only in the middle of the last century.

A quantum leap in the field of computing technology was the invention of the Pascalina adding machine in 1642. Its creator, French mathematician Blaise Pascal, began work on this device when he was not even 20 years old. "Pascalina" was a mechanical device in the form of a box with a large number of interconnected gears. The numbers that needed to be added were entered into the machine by turning special wheels.

In 1673, the German mathematician and philosopher Gottfried Wilhelm von Leibniz invented a machine that performed the four basic arithmetic operations and could extract square roots. Leibniz also developed the binary number system, which later became the foundation of digital computers, although his machine itself calculated in decimal.

In 1818, the Frenchman Charles Xavier Thomas de Colmar, taking Leibniz's ideas as a basis, invented an adding machine that could multiply and divide. Two years later, the Englishman Charles Babbage began constructing a machine capable of performing calculations with an accuracy of up to 20 decimal places. That project remained unfinished, but in the 1830s its author developed another - the Analytical Engine for performing accurate scientific and technical calculations. The machine was to be controlled by a program, and perforated cards with holes in different positions were to be used for the input and output of information. Babbage's project anticipated the development of electronic computing technology and the problems that could be solved with its help.

It is noteworthy that the title of the world's first programmer belongs to a woman - Lady Ada Lovelace (née Byron). It was she who created the first programs for Babbage's machine. One of the computer languages was subsequently named after her.

Development of the first computer analogues

In 1887, the history of the development of computer technology entered a new stage. The American engineer Herman Hollerith managed to design the first electromechanical calculating machine - the tabulator. Its mechanism included a relay, as well as counters and a special sorting box. The device read and sorted statistical records made on punched cards. Subsequently, the company founded by Hollerith became the backbone of the world-famous computer giant IBM.

In 1930, the American Vannevar Bush created a differential analyzer. It was powered by electricity, and vacuum tubes were used to store data. This machine was capable of quickly finding solutions to complex mathematical problems.

Six years later, the English scientist Alan Turing developed the concept of an abstract machine, which became the theoretical basis for today's computers. It had all the main properties of modern computer technology: it could carry out, step by step, operations programmed in its internal memory.

A year after this, George Stibitz, a scientist from the United States, invented the country's first electromechanical device capable of performing binary addition. Its operation was based on Boolean algebra - the mathematical logic created in the mid-19th century by George Boole, which uses the logical operators AND, OR and NOT. The binary adder would later become an integral part of the digital computer.
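As an illustration of this idea (a minimal sketch in Python, not Stibitz's actual relay circuit; the function names are hypothetical), a one-bit binary adder can be written directly in terms of the AND, OR and NOT operations mentioned above:

# A minimal sketch (not Stibitz's relay circuit): a one-bit full adder
# built only from the Boolean operations AND, OR and NOT.

def xor(a: bool, b: bool) -> bool:
    # XOR expressed through AND, OR and NOT: (a OR b) AND NOT (a AND b)
    return (a or b) and not (a and b)

def full_adder(a: bool, b: bool, carry_in: bool) -> tuple[bool, bool]:
    """Add two bits plus an incoming carry; return (sum_bit, carry_out)."""
    sum_bit = xor(xor(a, b), carry_in)
    carry_out = (a and b) or (carry_in and xor(a, b))
    return sum_bit, carry_out

# 1 + 1 with no incoming carry: sum bit 0, carry 1 (binary 10, i.e. 2).
print(full_adder(True, True, False))  # (False, True)

Chaining such one-bit adders, carry to carry, is how multi-digit binary addition is built up in a digital computer.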

In 1938, Claude Shannon, then a graduate student at the Massachusetts Institute of Technology, outlined the principles of the logical design of a computer that uses electrical circuits to solve Boolean algebra problems.

The beginning of the computer era

The governments of the countries involved in World War II were aware of the strategic role of computing in the conduct of military operations. This was the impetus for the development and parallel emergence of the first generation of computers in these countries.

A pioneer in the field of computer engineering was Konrad Zuse, a German engineer. In 1941, he created the first computer controlled by a program. The machine, called the Z3, was built on telephone relays, and programs for it were encoded on perforated tape. This device was able to work in the binary system, as well as operate with floating point numbers.

The next model of Zuse's machine, the Z4, is officially recognized as the first truly working programmable computer. Zuse also went down in history as the creator of the first high-level programming language, called Plankalkül.

In 1942, the American researchers John Atanasoff and Clifford Berry created a computing device that ran on vacuum tubes. The machine also used binary code and could perform a number of logical operations.

In 1943, in an English government laboratory, in an atmosphere of secrecy, a computer called "Colossus" was built. Instead of electromechanical relays, it used about 2 thousand vacuum tubes for storing and processing information. It was intended to break the codes of secret messages produced by German cipher machines, which were widely used by the Wehrmacht. The existence of this device was kept in the strictest confidence for a long time. After the end of the war, the order for its destruction was signed personally by Winston Churchill.

Architecture development

In 1945, the Hungarian-born American mathematician John von Neumann (János Lajos Neumann) described the prototype of the architecture of modern computers. He proposed writing the program in the form of code directly into the machine's memory, implying joint storage of programs and data in the computer's memory.

Von Neumann's architecture was worked out in the course of building ENIAC, the first general-purpose electronic computer, which was being completed at that time in the United States, and it formed the basis of the machines that followed. This giant weighed about 30 tons and occupied 170 square meters of floor space. The machine used 18 thousand vacuum tubes and could perform 300 multiplications or 5 thousand additions per second.

The first universal programmable computer in continental Europe was created in 1950 in the Soviet Union (in Kyiv, Ukraine). A group of Kyiv scientists, led by Sergei Alekseevich Lebedev, designed the Small Electronic Calculating Machine (MESM). Its speed was 50 operations per second, and it contained about 6 thousand vacuum tubes.

In 1952, Soviet computer technology was augmented by the BESM, a large electronic calculating machine, also developed under Lebedev's leadership. This computer, which performed up to 10 thousand operations per second, was at that time the fastest in Europe. Information was entered into the machine's memory from punched paper tape, and data was output by a photographic printing device.

During the same period, a series of large computers under the general name "Strela" was produced in the USSR (the author of the development was Yuri Yakovlevich Bazilevsky). In 1954, serial production of the universal computer "Ural" began in Penza under the leadership of Bashir Rameev. The later models were hardware- and software-compatible with one another, and a wide selection of peripheral devices made it possible to assemble machines in various configurations.

Transistors. Release of the first serial computers

Vacuum tubes, however, failed very quickly, making it very difficult to work with these machines. The transistor, invented in 1947, solved this problem. Using the electrical properties of semiconductors, it performed the same tasks as vacuum tubes, but occupied much less space and did not consume as much energy. Along with the advent of ferrite cores for organizing computer memory, the use of transistors made it possible to significantly reduce the size of machines and make them even more reliable and faster.

In 1954, the American company Texas Instruments began mass-producing transistors, and two years later the first second-generation computer built on transistors, the TX-0, appeared in Massachusetts.

In the middle of the last century, a significant part of government organizations and large companies used computers for scientific, financial, engineering calculations, and working with large amounts of data. Gradually, computers acquired features familiar to us today. During this period, plotters, printers, and storage media on magnetic disks and tape appeared.

The active use of computer technology led to an expansion of its areas of application and required the creation of new software technologies. High-level programming languages appeared (Fortran, Cobol and others) that made it possible to transfer programs from one machine to another and simplified the process of writing code. Special translator programs appeared that converted code written in these languages into commands that the machine could execute directly.

The emergence of integrated circuits

In 1958-1960, thanks to the American engineers Robert Noyce and Jack Kilby, the world learned about the existence of integrated circuits. Miniature transistors and other components, sometimes up to hundreds or thousands of them, were mounted on a silicon or germanium crystal base. The chips, just over a centimeter in size, were much faster than discrete transistor circuits and consumed much less power. The history of the development of computer technology connects their appearance with the emergence of the third generation of computers.

In 1964, IBM released the first computer of the System/360 family, which was based on integrated circuits. The mass production of computers can be counted from this time. In total, more than 20 thousand copies of this computer were produced.

In 1972, the USSR developed the ES (Unified System) series of computers. These were standardized complexes for the operation of computer centers that had a common command system. The American IBM System/360 was taken as the basis.

Somewhat earlier, in 1965, DEC had released the PDP-8 minicomputer, the first commercially successful project in this area. The relatively low cost of minicomputers made it possible for small organizations to use them.

During the same period, software was constantly improved. Operating systems were developed that aimed to support the maximum number of external devices, and new programs appeared. In 1964, BASIC was developed, a language designed specifically for teaching novice programmers. Five years later, Pascal appeared, which turned out to be very convenient for solving many applied problems.

Personal computers

After 1970, production of the fourth generation of computers began. The development of computer technology at this time is characterized by the introduction of large-scale integrated circuits into computer production. Such machines could now perform billions of computational operations per second, and their RAM capacity grew to 500 million bits. A significant reduction in the cost of microcomputers meant that the opportunity to buy them gradually became available to the average person.

Apple was one of the first manufacturers of personal computers. Its creators, Steve Jobs and Steve Wozniak, designed the first PC model in 1976, giving it the name Apple I. It cost only $500. A year later, the next model of this company was presented - Apple II.

The computer of this time for the first time resembled a household appliance: in addition to its compact size, it had an elegant design and a user-friendly interface. The proliferation of personal computers at the end of the 1970s caused demand for mainframe computers to fall markedly. This seriously worried their leading manufacturer, IBM, which in 1979 set about creating a personal computer of its own.

Two years later, in 1981, the company's first microcomputer with an open architecture appeared, based on Intel's 16-bit 8088 microprocessor. The computer was equipped with a monochrome display, two drives for 5.25-inch floppy disks, and 64 kilobytes of RAM. At IBM's request, Microsoft developed an operating system specially for this machine. Numerous IBM PC clones appeared on the market, which stimulated the growth of industrial production of personal computers.

In 1984, Apple developed and released a new computer - the Macintosh. Its operating system was extremely user-friendly: it presented commands in the form of graphic images and allowed them to be entered using a mouse. This made the computer even more accessible, since now no special skills were required from the user.

Some sources date the fifth generation of computing technology to 1992-2013. Briefly, its main concept is formulated as follows: these are computers built on highly complex microprocessors with a parallel-vector structure, which makes it possible to execute dozens of program instructions simultaneously. Machines with several hundred processors working in parallel make it possible to process data even more accurately and quickly, and to create efficient networks.

The development of modern computer technology already allows us to talk about sixth generation computers. These are electronic and optoelectronic computers running on tens of thousands of microprocessors, characterized by massive parallelism and modeling the architecture of neural biological systems, which allows them to successfully recognize complex images.

Looking back over all the stages of the development of computer technology, an interesting fact stands out: inventions that proved themselves well at each stage have survived to this day and continue to be used successfully.

Classes of computers

There are various options for classifying computers.

By purpose, computers are divided into:

  • universal - those that are capable of solving a wide variety of mathematical, economic, engineering, technical, scientific and other problems;
  • problem-oriented - solving problems of a narrower range, associated, as a rule, with the management of particular processes (data recording, accumulation and processing of small amounts of information, performing calculations according to simple algorithms). They have more limited software and hardware resources than the first group of computers;
  • specialized - computers that usually solve strictly defined tasks. They have a highly specialized structure and, with relatively low complexity of design and control, are quite reliable and productive in their field. These are, for example, controllers or adapters that manage a number of devices, as well as programmable microprocessors.

Based on size and computing power, modern electronic computing equipment is divided into:

  • ultra-large (supercomputers);
  • large computers;
  • small computers;
  • ultra-small (microcomputers).

Thus, we have seen that devices, first invented by man to keep track of resources and valuables, and later to carry out complex calculations and computational operations quickly and accurately, have constantly developed and improved.

Municipal educational institution secondary school No. 3 of Karasuk district

Subject: History of the development of computer technology.

Compiled by:

Student MOUSOSH No. 3

Kochetov Egor Pavlovich

Supervisor and consultant:

Serdyukov Valentin Ivanovich,

computer science teacher MOUSOSH No. 3

Karasuk 2008

Relevance

Introduction

First steps in the development of counting devices

17th century calculating devices

18th century calculating devices

19th century counting devices

Development of computing technology at the beginning of the 20th century

The emergence and development of computer technology in the 40s of the 20th century

Development of computer technology in the 50s of the 20th century

Development of computer technology in the 60s of the 20th century

Development of computer technology in the 70s of the 20th century

Development of computer technology in the 80s of the 20th century

Development of computer technology in the 90s of the 20th century

The role of computer technology in human life

My research

Conclusion

Bibliography

Relevance

Mathematics and computer science are used in all areas of the modern information society. Modern production, the computerization of society, and the introduction of modern information technologies require mathematical and informational literacy and competence. However, school courses in computer science and ICT today often offer a one-sided educational approach that does not allow the level of knowledge to be properly raised, because they lack the mathematical logic needed for complete mastery of the material. In addition, the failure to stimulate students' creative potential has a negative impact on motivation to learn and, as a result, on the final level of skills, knowledge and abilities. And how can you study a subject without knowing its history? This material can be used in history, mathematics and computer science lessons.

Nowadays it is difficult to imagine doing without computers. But not so long ago, until the early 70s, computers were available to a very limited circle of specialists, and their use, as a rule, remained shrouded in secrecy and little known to the general public. However, in 1971, an event occurred that radically changed the situation and, with fantastic speed, turned the computer into an everyday working tool for tens of millions of people.

Introduction

People learned to count using their own fingers. When this was no longer enough, the simplest counting devices appeared. A special place among them was occupied by the abacus, which became widespread in the ancient world. Then, after years of human development, the first electronic computers appeared. They not only accelerated computational work, but also gave people an impetus to create new technologies. The word "computer" means "calculator", that is, a computing device. The need to automate data processing, including calculations, arose a long time ago.

Nowadays it is difficult to imagine doing without computers. But not so long ago, until the early 70s, computers were available to a very limited circle of specialists, and their use, as a rule, remained shrouded in secrecy and little known to the general public. However, in 1971, an event occurred that radically changed the situation and, with fantastic speed, turned the computer into an everyday work tool for tens of millions of people. In that undoubtedly significant year, the then almost unknown company Intel, from a small American town with the beautiful name of Santa Clara (California), released the first microprocessor. It is to the microprocessor that we owe the emergence of a new class of computing systems - personal computers, which are now used by essentially everyone, from primary school students and accountants to scientists and engineers.

At the end of the 20th century it is impossible to imagine life without a personal computer. The computer has firmly entered our lives, becoming man's main assistant. Today there are many computers in the world from different companies, of different complexity, purpose and generation. In this essay we will look at the history of the development of computer technology, give a brief overview of the possibilities of using modern computing systems, and consider further trends in the development of personal computers.

First steps in the development of counting devices

The history of counting devices goes back many centuries. The oldest calculating instrument that nature itself placed at man’s disposal was his own hand. To make counting easier, people began to use the fingers of first one hand, then both, and in some tribes, their toes. In the 16th century, finger counting techniques were described in textbooks.

The next step in the development of counting was the use of pebbles or other objects, and, for memorizing numbers, notches on animal bones and knots on ropes. The so-called "Vestonice bone" with notches, discovered in excavations, allows historians to assume that even then, 30 thousand years BC, our ancestors were familiar with the rudiments of counting.


The early development of written counting was hampered by the complexity of performing arithmetic operations with the ways of writing numbers that existed at that time. In addition, few people knew how to write, and there was no suitable material to write on - parchment began to be produced only around the 2nd century BC, papyrus was too expensive, and clay tablets were inconvenient to use.

These circumstances explain the appearance of a special calculating device - the abacus. By the 5th century BC, the abacus had become widespread in Egypt, Greece, and Rome. It was a board with grooves in which, according to the positional principle, small objects - pebbles or bones - were placed.


An abacus-like instrument was known to virtually all peoples. The ancient Greek abacus (a board, or the "Salamis board", named after the island of Salamis in the Aegean Sea) was a plank sprinkled with sea sand. There were grooves in the sand, in which numbers were marked with pebbles. One groove corresponded to units, the next to tens, and so on. If more than 10 pebbles accumulated in any groove during counting, they were removed and one pebble was added to the groove of the next digit.

The Romans improved the abacus, moving from wooden boards, sand and pebbles to marble boards with chiseled grooves and marble balls. Later, around 500 AD, the abacus evolved into the counting frame, a device consisting of a set of beads strung on rods. The Chinese abacus, the suan-pan, consisted of a wooden frame divided into upper and lower sections. The rods correspond to the columns (decimal places), and the beads to numbers. For the Chinese, counting was based not on ten, but on five.


It is divided into two parts: in the lower part there are 5 beads on each rod, in the upper part there are two. Thus, in order to set the number 6 on such an abacus, one first moved the bead corresponding to five, and then added one in the units place.


The Japanese called the same counting device the soroban.


In Rus', for a long time, people counted with small bones laid out in piles. Around the 15th century, the "plank abacus" became widespread; it differed little from the familiar counting frame and consisted of a frame with horizontal cords on which drilled plum or cherry pits were strung.


Around the 6th century AD, very advanced ways of writing numbers and rules for performing arithmetic operations, now called the decimal number system, took shape in India. When writing a number that lacked some digit (for example, 101 or 1204), the Indians said the word "empty" instead of the name of the digit. When writing it down, a dot was placed in place of the "empty" digit, and later a circle was drawn. Such a circle was called "sunya", which in Hindi meant "empty place". Arab mathematicians translated this word into their own language: they said "sifr". The modern word "zero" was born relatively recently - later than "digit". It comes from the Latin word "nihil", meaning "nothing". Around 850 AD, the Arab mathematician Muhammad ibn Musa al-Khwarizmi (from Khorezm, on the Amu Darya River) wrote a book about the general rules for solving arithmetic problems using equations. It was called "Kitab al-Jabr", and this book gave its name to the science of algebra. Another book by al-Khwarizmi, in which he described Indian arithmetic in detail, played a very important role. Three hundred years later (in 1120) this book was translated into Latin, and it became the first textbook of "Indian" (that is, our modern) arithmetic throughout Europe.


We owe the appearance of the term "algorithm" to Muhammad ibn Musa al-Khwarizmi.

At the end of the 15th century, Leonardo da Vinci (1452-1519) created a sketch of a 13-digit adding device with ten-toothed rings. But da Vinci's manuscripts were discovered only in 1967, so the history of mechanical calculating devices is usually counted from Pascal's adding machine. Based on his drawings, an American computer manufacturing company has in our time built a working machine for advertising purposes.

17th century calculating devices


In 1614, the Scottish mathematician John Napier (1550-1617) invented logarithm tables. Their principle is that each number corresponds to a special number - its logarithm - the exponent to which a fixed base must be raised to obtain the given number. Any number can be expressed this way. Logarithms make division and multiplication very simple: to multiply two numbers, it is enough to add their logarithms. Thanks to this property, the complex operation of multiplication is reduced to the simple operation of addition. To simplify the work, tables of logarithms were compiled, which were later built into a device that could significantly speed up the calculation process - the slide rule.
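A small numerical sketch of this principle (in Python, using the standard math module rather than printed tables; the numbers 37 and 52 are chosen arbitrarily for illustration):

import math

# Multiplying via logarithms: log(a*b) = log(a) + log(b),
# so the product is recovered by taking the antilogarithm of the sum.
a, b = 37.0, 52.0
log_sum = math.log10(a) + math.log10(b)   # the step a table (or slide rule) performs
product = 10 ** log_sum                   # "reading back" the antilogarithm

print(product)   # approximately 1924.0 (up to floating-point rounding)
print(a * b)     # 1924.0, for comparison

A slide rule performs the same addition physically: the lengths of its scales are proportional to logarithms, so sliding one scale along the other adds the logarithms, and the product is read off directly.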


In 1617, Napier proposed another (non-logarithmic) method of multiplying numbers. The instrument, known as Napier's bones (or rods), consisted of thin plates, or blocks. Each side of a block carries numbers that form an arithmetic progression - the multiples of a single digit.


Manipulating the blocks makes it possible to multiply and divide large numbers and even to extract square and cube roots.


Wilhelm Schickard

In 1623, Wilhelm Schickard, an orientalist and mathematician, professor at the University of Tübingen, described in letters to his friend Johannes Kepler the design of a "counting clock" - a calculating machine with a device for setting numbers and rollers with a slider and a window for reading the result. This machine could only add and subtract (some sources say that it could also multiply and divide). It was the first mechanical calculating machine. In our time, a model of it has been built from his description.

Blaise Pascal


In 1642, the French mathematician Blaise Pascal (1623-1662) designed a calculating device to make the work of his father, a tax inspector, easier. This device made it possible to add decimal numbers. Externally, it looked like a box with numerous gears.


The basis of the adding machine was the counter-recorder, or counting gear. It had ten protrusions, each of which had numbers written on it. To transmit tens, there was one elongated tooth on the gear, which engaged and turned the intermediate gear, which transmitted rotation to the tens gear. An additional gear was needed to ensure that both counting gears - ones and tens - rotated in the same direction. The counting gear was connected to the lever using a ratchet mechanism (transmitting forward movement and not transmitting reverse movement). Deflection of the lever to one angle or another made it possible to enter single-digit numbers into the counter and sum them up. In Pascal's machine, a ratchet drive was attached to all the counting gears, which made it possible to add multi-digit numbers.
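The carry mechanism described above can be imitated in a few lines of code (a rough, assumption-laden sketch in Python, not a model of the actual gearing): each "wheel" holds a digit from 0 to 9 and, on rolling over past 9, passes a carry to the next, higher-order wheel.

# A rough software analogue of the Pascaline's carry mechanism: each "wheel"
# holds a digit 0-9 and, when it rolls over past 9, passes a carry to the
# next, higher-order wheel (the role of the elongated tooth in the text).

def add_on_wheels(wheels: list[int], digit: int, position: int = 0) -> None:
    """Advance the wheel at `position` by `digit` steps, propagating carries."""
    wheels[position] += digit
    while position < len(wheels) and wheels[position] > 9:
        wheels[position] -= 10
        if position + 1 < len(wheels):
            wheels[position + 1] += 1
        position += 1

# A machine with four wheels, least significant digit first: add 7, then 8.
wheels = [0, 0, 0, 0]
add_on_wheels(wheels, 7)
add_on_wheels(wheels, 8)
print(wheels)  # [5, 1, 0, 0] -> reads as 0015

Adding multi-digit numbers, as the text notes, amounts to repeating this entry digit by digit on the corresponding wheels.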

In 1642, the Englishman Robert Bissaker, and in 1657, independently, S. Partridge, developed the rectangular slide rule, whose design has largely survived to this day.


In 1673, the German philosopher, mathematician and physicist Gottfried Wilhelm Leibniz (1646-1716) created the "stepped reckoner" - a calculating machine that could add, subtract, multiply, divide and extract square roots.

It was a more advanced device that used a moving part (a prototype of a carriage) and a handle with which the operator rotated the wheel. Leibniz's product suffered the sad fate of its predecessors: if anyone used it, it was only Leibniz's family and friends of his family, since the time of mass demand for such mechanisms had not yet come.

The machine was the prototype of the arithmometer, which was in use from 1820 until the 1960s.

18th century calculating devices


In 1700, Charles Perrault published "A Collection of a Large Number of Machines of Claude Perrault's Own Invention", in which, among the inventions of Claude Perrault (Charles Perrault's brother), there is an adding machine in which gear racks are used instead of gears. The machine was called the "rhabdological abacus". It got this name because the ancients used the word abacus for a small board on which numbers are written, and rhabdology for the art of performing arithmetic operations using small numbered sticks.


In 1703, Gottfried Wilhelm Leibniz wrote the treatise "Explication de l'Arithmétique Binaire" on the use of the binary number system in calculating machines. His first works on binary arithmetic date back to 1679.

A member of the Royal Society of London, the German mathematician, physicist and astronomer Christian Ludwig Gersten invented an arithmetic machine in 1723 and built it two years later. The Gersten machine is remarkable in that it was the first to use a device for calculating the quotient and the number of successive addition operations required when multiplying numbers; it also made it possible to check that the second addend had been entered (set) correctly, which reduced the likelihood of subjective error caused by the operator's fatigue.

In 1727, Jacob Leupold created a calculating machine that used the Leibniz machine principle.

In the report of the commission of the Paris Academy of Sciences, published in 1751 in the Journal of Scientists, there are remarkable lines: “The results of Mr. Pereira’s method that we have seen are quite enough to once again confirm the opinion ... that this method of teaching the deaf-mutes is extremely practical and that the person who used it with such success is worthy of praise and encouragement... In speaking of the progress which Mr. Pereira's pupil made in a very short time in the knowledge of numbers, we must add that Mr. Pereira used the Arithmetic Engine, which he himself invented." This arithmetic machine is described in the "Journal of Scientists", but, unfortunately, the journal does not contain drawings. This calculating machine used some ideas borrowed from Pascal and Perrault, but overall it was a completely original design. It differed from known machines in that its counting wheels were not located on parallel axes, but on a single axis passing through the entire machine. This innovation, which made the design more compact, was subsequently widely used by other inventors - Felt and Odner.

In the second half of the 18th century (no later than 1770), a summing machine was created in the city of Nesvizh. The inscription on this machine states that it was "invented and manufactured by the Jew Evna Jacobson, a watchmaker and mechanic in the city of Nesvizh in Lithuania", "Minsk Voivodeship". This machine is currently in the collection of scientific instruments of the M.V. Lomonosov Museum (St. Petersburg). An interesting feature of the Jacobson machine was a special device that made it possible to automatically count the number of subtractions performed, in other words, to determine the quotient. The presence of this device, an ingeniously solved problem of entering numbers, and the ability to record intermediate results all allow us to consider the "watchmaker from Nesvizh" an outstanding designer of calculating equipment.


In 1774, the rural pastor Philipp Matthäus Hahn developed one of the first reliably working calculating machines. He managed to build and, most incredibly, sell a small number of them.

In 1775, in England, the Earl of Stanhope created a calculating device that implemented no new mechanical principles but was more reliable in operation.


19th century calculating devices

In 1804, the French inventor Joseph-Marie Jacquard (1752-1834) came up with a way to control the threads automatically when working on a loom. The method consisted of using special cards with holes punched in the right places (depending on the pattern that was to be woven into the fabric). Thus, he designed a loom whose operation could be programmed using special cards. The operation of the loom was programmed using a whole deck of punched cards, each of which controlled one stroke of the shuttle. When moving on to a new pattern, the operator simply replaced one deck of punched cards with another. The creation of a loom controlled by cards with holes punched in them, joined together in the form of a tape, is one of the key discoveries that determined the further development of computer technology.

Charles Xavier Thomas

Charles Xavier Thomas (1785-1870) in 1820 created the first mechanical calculator that could not only add and multiply, but also subtract and divide. The rapid development of mechanical calculators led to the addition of a number of useful functions by 1890: storing intermediate results and using them in subsequent operations, printing the result, etc. The creation of inexpensive, reliable machines made it possible to use these machines for commercial purposes and scientific calculations.

Charles Babbage

In 1822, the English mathematician Charles Babbage (1792-1871) put forward the idea of creating a program-controlled calculating machine with an arithmetic unit, a control unit, and input and printing devices.

The first machine Babbage designed, the Difference Engine, was to be powered by a steam engine. It calculated tables of logarithms by the method of constant differences and recorded the results on a metal plate. The working model he created in 1822 was a six-digit calculator capable of performing calculations and printing numerical tables.

Ada Lovelace

Lady Ada Lovelace (Ada Byron, Countess of Lovelace, 1815-1852) worked with Babbage at the same time. She developed the first programs for the machine, laid down many ideas, and introduced a number of concepts and terms that have survived to this day.

Babbage's Difference Engine No. 2 was eventually built by enthusiasts at the London Science Museum. It consists of four thousand iron, bronze and steel parts and weighs three tons. True, it is very difficult to use: with each calculation you have to turn the machine's handle several hundred (or even thousands of) times.

The numbers are written (set) on disks arranged in vertical columns and positioned from 0 to 9. In Babbage's design for the Analytical Engine, the machine was to be driven by a sequence of punched cards containing instructions (the program).

First telegraph

The first electric telegraph was created in 1837 by the English inventors William Cooke (1806-1879) and Charles Wheatstone (1802-1875). An electric current was sent through wires to a receiver. The signals deflected needles on the receiver, which pointed to different letters and thus conveyed messages.

The American artist Samuel Morse (1791-1872) invented a new telegraph code that replaced the Cooke and Wheatstone code, developing a combination of dots and dashes for each letter. He staged a demonstration of his code by laying a telegraph line from Baltimore to Washington and transmitting news of a presidential election over it.

Later (in 1858), Charles Wheatstone created a system in which an operator, using Morse code, typed messages onto a long paper tape that was fed into a telegraph machine. At the other end of the line, a recorder printed the received message onto another paper tape. The productivity of telegraph operators increased tenfold: messages were now sent at a speed of one hundred words per minute.

In 1846, the Kummer calculator appeared; it was mass-produced for more than 100 years, until the seventies of the twentieth century. Calculators have now become an integral attribute of modern life, but when there were no calculators, the Kummer calculator was in use; at the whim of later designers it turned into the "Addiator", "Products", "Arithmetic Ruler" or "Progress". This wonderful device, created in the mid-19th century, could, according to its creator, be made the size of a playing card and therefore easily fit in a pocket. The device of Kummer, a St. Petersburg music teacher, stood out among earlier inventions for its portability, which became its most important advantage. Kummer's invention looked like a rectangular board with shaped slats. Addition and subtraction were carried out by simply moving the slats. It is interesting that Kummer's calculator, presented in 1846 to the St. Petersburg Academy of Sciences, was intended for monetary calculations.

In Russia, in addition to the Slonimsky device and modifications of the Kummer numerator, the so-called counting bars, invented in 1881 by the scientist Ioffe, were quite popular.

George Boole

In 1847, the English mathematician George Boole (1815-1864) published the work "The Mathematical Analysis of Logic". This is how a new branch of mathematics appeared. It was called Boolean algebra. Each value in it can take only one of two values: true or false, 1 or 0. This algebra proved very useful to the creators of modern computers, since the computer understands only two symbols: 0 and 1. Boole is considered the founder of modern mathematical logic.
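A short illustration (a Python sketch added here for clarity, not part of Boole's own notation): the three basic operations on the values 0 and 1 can be tabulated exhaustively.

# Truth tables for AND, OR and NOT over the two values 0 and 1.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={a & b}  OR={a | b}  NOT a={1 - a}")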

1855 The brothers Georg and Edvard Scheutz from Stockholm built the first mechanical difference engine, based on the work of Charles Babbage.

In 1867, Bunyakovsky invented self-calculators, which were based on the principle of connected digital wheels (Pascal's gear).

In 1878, the English scientist Joseph Swan (1828-1914) invented the electric light bulb. It was a glass bulb with a carbon filament inside. To prevent the filament from burning out, Swan removed the air from the bulb.

The following year, American inventor Thomas Edison (1847-1931) also invented the light bulb. In 1880, Edison began producing safety light bulbs, selling them for $2.50. Subsequently, Edison and Swan created a joint company, Edison and Swan United Electric Light Company.

In 1883, while experimenting with a lamp, Edison inserted a platinum electrode into a vacuum cylinder, applied voltage and, to his surprise, discovered that current flowed between the electrode and the carbon filament. Since at that moment Edison’s main goal was to extend the life of the incandescent lamp, this result interested him little, but the enterprising American still received a patent. The phenomenon known to us as thermionic emission was then called the “Edison effect” and was forgotten for some time.

Vilgodt Teofilovich Odner

In 1880 Vilgodt Teofilovich Odner, a Swede by nationality, who lived in St. Petersburg, designed an adding machine. It must be admitted that before Odner there were also adding machines - the systems of K. Thomas. However, they were unreliable, large in size and inconvenient to operate.

He began working on the adding machine in 1874, and in 1890 he began mass production of them. Their modification "Felix" was produced until the 50s. The main feature of Odhner's brainchild is the use of gear wheels with a variable number of teeth (this wheel bears Odhner's name) instead of Leibniz's stepped rollers. It is structurally simpler than a roller and has smaller dimensions.

Herman Hollerith

In 1884, the American engineer Herman Hollerith (1860-1929) took out a patent "for a census machine" (a statistical tabulator). The invention included a punched card and a sorting machine. Hollerith's punched card turned out to be so successful that it remained in use for decades with hardly any changes.

The idea of ​​putting data on punched cards and then reading and processing them automatically belonged to John Billings, and its technical solution belonged to Herman Hollerith.

The tabulator accepted cards the size of a dollar bill. There were 240 positions on the cards (12 rows of 20 positions). When reading information from punched cards, 240 needles pierced these cards. Where the needle entered the hole, it closed an electrical contact, as a result of which the value in the corresponding counter increased by one.
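The counting idea can be sketched in a few lines (purely illustrative Python; the card fields shown are invented for the example, not Hollerith's actual census categories): each card is represented by the set of positions punched in it, and every punched position advances the corresponding counter by one.

# A purely illustrative sketch of the tabulator's counting principle: a punched
# position closes a "contact", which increments the counter for that position.

from collections import Counter

# Each card is represented by the set of positions that are punched in it.
cards = [
    {"male", "age_20_29", "literate"},
    {"female", "age_30_39"},
    {"male", "age_20_29"},
]

counters = Counter()
for card in cards:
    for position in card:        # a needle passing through a hole...
        counters[position] += 1  # ...advances the corresponding counter

print(counters["male"])       # 2
print(counters["age_20_29"])  # 2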

Development of computer technology at the beginning of the 20th century

1904 The famous Russian mathematician, shipbuilder, academician A.N. Krylov proposed the design of a machine for integrating ordinary differential equations, which was built in 1912.

English physicist John Ambrose Fleming (1849-1945), studying the "Edison effect", creates a diode. Diodes are used to convert radio waves into electrical signals that can be transmitted over long distances.

Two years later, through the efforts of the American inventor Lee de Forest, the triode appeared.

1907 The American engineer James Powers designed an automatic card punch.

The St. Petersburg scientist Boris Rosing applied for a patent for a cathode ray tube as a data receiver.

1918 The Russian scientist M.A. Bonch-Bruevich and the English scientists W. Eccles and F. Jordan (1919) independently created an electronic device that the British called a trigger (the flip-flop), which played a major role in the development of computer technology.

In 1930, Vannevar Bush (1890-1974) designs a differential analyzer. In fact, this is the first successful attempt to create a computer capable of performing cumbersome scientific calculations. Bush's role in the history of computer technology is very large, but his name most often appears in connection with the prophetic article "As We May Think" (1945), in which he describes the concept of hypertext.

Konrad Zuse created the Z1 computer, which had a keyboard for entering problem conditions. Upon completion of the calculations, the result was displayed on a panel with many small lights. The total area occupied by the machine was 4 sq.m.

Konrad Zuse patented a method for automatic calculations.

For the next model Z2, K. Zuse came up with a very ingenious and cheap input device: Zuse began encoding instructions for the machine by punching holes in used 35 mm photographic film.

In 1938, the American mathematician and engineer Claude Shannon, and in 1941 the Russian scientist V.I. Shestakov, showed that the apparatus of mathematical logic could be used for the synthesis and analysis of relay contact switching circuits.

In 1938, the telephone company Bell Laboratories created the first binary adder (an electrical circuit that performed binary addition) - one of the main components of any computer. The author of the idea was George Stibitz, who experimented with Boolean algebra and various parts - old relays, batteries, light bulbs and wires. By 1940, a machine had been born that could perform the four arithmetic operations on complex numbers.

The emergence and development of computer technology in the 40s of the 20th century

In 1941, IBM engineer B. Phelps began work on creating decimal electronic counters for tabulators, and in 1942 he created an experimental model of an electronic multiplying device. In 1941, Konrad Zuse built the world's first operational program-controlled relay binary computer, the Z3.

Simultaneously with the construction of ENIAC, also in secrecy, a computer was being created in Great Britain. Secrecy was necessary because the device was designed to decipher the codes used by the German armed forces during the Second World War. The mathematical decryption method was developed by a group of mathematicians that included Alan Turing. During 1943, the Colossus machine, using 1,500 vacuum tubes, was built in London. The developers of the machine were M. Newman and T. Flowers.

Although both ENIAC and Colossus ran on vacuum tubes, they essentially copied electromechanical machines: new content (electronics) was squeezed into an old form (the structure of pre-electronic machines).

In 1937, Harvard mathematician Howard Aiken proposed a project to create a large calculating machine. The work was sponsored by IBM President Thomas Watson, who invested $500 thousand in it. Design of the Mark-1 began in 1939; the computer was built by the New York company IBM. The computer contained about 750 thousand parts, 3304 relays and more than 800 km of wires.

In 1944, the finished machine was officially transferred to Harvard University.

In 1944, American engineer John Presper Eckert first put forward the concept of a program stored in computer memory.

Aiken, who had the intellectual resources of Harvard and a capable Mark-1 machine, received several orders from the military. So the next model, the Mark-2, was ordered by the US Navy Weapons Directorate. Design began in 1945, and construction ended in 1947. The Mark-2 was the first multitasking machine—multiple buses made it possible to simultaneously transmit multiple numbers from one part of the computer to another.

In 1948, Sergei Aleksandrovich Lebedev (1902-1974) and B.I. Rameev proposed the first project for a domestic digital electronic computer. Under the leadership of academicians S.A. Lebedev and V.M. Glushkov, domestic computers were developed: first the MESM, the small electronic calculating machine (1951, Kyiv), then the BESM, the high-speed electronic calculating machine (1952, Moscow). In parallel with them, the Strela, Ural, Minsk, Hrazdan and Nairi machines were created.

In 1949, the English stored-program machine EDSAC (Electronic Delay Storage Automatic Computer), designed by Maurice Wilkes of the University of Cambridge, was put into operation. The EDSAC computer contained 3,000 vacuum tubes and was six times more productive than its predecessors. Maurice Wilkes also introduced a system of mnemonics for machine instructions, called assembly language.

In 1949 John Mauchly created the first programming language interpreter called "Short Order Code".

Development of computer technology in the 50s of the 20th century

In 1951, work was completed on the creation of UNIVAC (Universal Automatic Computer). The first example of the UNIVAC-1 machine was built for the US Census Bureau. The UNIVAC-1 synchronous, sequential computer was created on the basis of the ENIAC and EDVAC computers. It operated at a clock frequency of 2.25 MHz and contained about 5000 vacuum tubes. The internal storage, with a capacity of 1000 twelve-digit decimal numbers, was implemented on 100 mercury delay lines.

This computer is interesting because it was aimed at relatively mass production without changing the architecture and special attention was paid to the peripheral part (input-output facilities).

Jay Forrester patented magnetic core memory. Such memory was first used in the Whirlwind-1 machine. It consisted of two cubes of 32x32x17 cores, which provided storage of 2048 words of 16-bit binary numbers with one parity bit.

This machine was the first to use a universal, non-specialized bus (the relationships between the various computer devices became flexible), and two devices were used as input-output systems: a Williams cathode ray tube and a typewriter with punched paper tape (a Flexowriter).

"Tradis", released in 1955. - the first transistor computer from Bell Telephone Laboratories - contained 800 transistors, each of which was enclosed in a separate housing.

In 1957, disk memory (magnetized aluminum disks with a diameter of 61 cm) appeared for the first time, in the IBM 350 RAMAC.

H. Simon, A. Newell and J. Shaw created GPS - the General Problem Solver.

In 1958, Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor independently invented the integrated circuit.

1955-1959 Russian scientists A.A. Lyapunov, S.S. Kamynin, E.Z. Lyubimsky, A.P. Ershov, L.N. Korolev, V.M. Kurochkin, M.R. Shura-Bura and others created “programming programs” - prototypes of translators. V.V. Martynyuk created a symbolic coding system - a means of accelerating the development and debugging of programs.

1955-1959 The foundations were laid for programming theory (A.A. Lyapunov, Yu.I. Yanov, A.A. Markov, L.A. Kaluzhin) and numerical methods (V.M. Glushkov, A.A. Samarsky, A.N. Tikhonov). Schemes of the mechanisms of thinking and of genetic processes, as well as algorithms for diagnosing medical conditions, were modeled (A.A. Lyapunov, B.V. Gnedenko, N.M. Amosov, A.G. Ivakhnenko, V.A. Kovalevsky and others).

1959 Under the leadership of S.A. Lebedev, the BESM-2 machine was created, with a performance of 10 thousand operations per second. It was used in calculations for the launches of space rockets and the world's first artificial Earth satellites.

1959 The M-20 machine was created (chief designer S.A. Lebedev). For its time it was one of the fastest in the world (20 thousand operations per second). This machine was used to solve most of the theoretical and applied problems connected with the development of the most advanced fields of science and technology of that time. On the basis of the M-20, the unique multiprocessor M-40 was created - the fastest computer in the world at that time (40 thousand operations per second). The M-20 was succeeded by the semiconductor-based BESM-4 and M-220 (200 thousand operations per second).

Development of computer technology in the 60s of the 20th century

In 1960, in a short time, the CODASYL (Conference on Data Systems Languages) group, with the participation of Joseph Wegstein and the support of IBM, developed the standardized business programming language COBOL (Common Business Oriented Language). This language is focused on solving economic problems, or more precisely, on data processing.

In the same year, J. Schwartz and colleagues at the System Development Corporation developed the JOVIAL programming language. The name stands for Jules' Own Version of the International Algorithmic Language. It is a procedural language, a version of Algol-58, used mainly for military applications by the US Air Force.

IBM developed a powerful computing system called Stretch (the IBM 7030).

1961 IBM Deutschland implemented the connection of a computer to a telephone line using a modem.

Also, the American professor John McCarthy developed the LISP (List Processing) language.

J. Gordon, head of the development of simulation systems at IBM, created the GPSS (General Purpose Simulation System) language.

Employees of the University of Manchester, under the leadership of T. Kilburn, created the Atlas computer, which for the first time implemented the concept of virtual memory. The first minicomputer, the PDP-1, also appeared at about this time, a decade before the creation of the first microprocessor (the Intel 4004, 1971).

In 1962, R. Griswold developed the programming language SNOBOL, focused on string processing.

Steve Russell developed one of the first computer games, Spacewar!.

E.V. Evreinov and Yu. Kosarev proposed a model of a team of computers and substantiated the possibility of building supercomputers on the principles of parallel execution of operations, variable logical structure and structural homogeneity.

IBM released the first external memory devices with removable disks.

Kenneth E. Iverson (IBM) published a book called "A Programming Language" (APL). Initially, this language served as a notation for writing algorithms. The first implementation, APL/360, was made in 1966 by Adin Falkoff (Harvard, IBM). There are versions of interpreters for the PC. Because APL programs are so difficult to read, the language is sometimes jokingly called "Chinese BASIC". It is actually a procedural, very compact, ultra-high-level language. It requires a special keyboard. Its further development is APL2.

1963 The American standard code for information interchange was approved - ASCII (American Standard Code for Information Interchange).

General Electric created the first commercial DBMS (database management system).

1964 O.-J. Dahl and K. Nygaard created the SIMULA-1 modeling language.

In 1967, under the leadership of S.A. Lebedev and V.M. Melnikov, the high-speed computing machine BESM-6 was created at ITMiVT (the Institute of Precision Mechanics and Computer Engineering).

It was followed by "Elbrus" - a new type of computer with a productivity of 10 million operations/s.

Development of computer technology in the 70s of the 20th century

In 1970, Charles Moore, an employee of the National Radio Astronomy Observatory, created the Forth programming language.

Dennis Ritchie and Ken Thompson released the first version of Unix.

Dr. E.F. Codd published the first paper on the relational data model.

In 1971 Intel (USA) created the first microprocessor (MP) - a programmable logical device made using VLSI technology.

The 4004 processor was 4-bit and could perform 60 thousand operations per second.

1974 Intel developed the first universal eight-bit microprocessor, the 8080, with 4500 transistors. Edward Roberts from MITS built the first personal computer, Altair, on a new chip from Intel, the 8080. Altair turned out to be the first mass-produced PC, essentially marking the beginning of an entire industry. The kit included a processor, a 256-byte memory module, a system bus and some other little things.

Young programmer Paul Allen and Harvard University student Bill Gates implemented the BASIC language for Altair. They subsequently founded Microsoft, which is today the largest software manufacturer.

Development of computer technology in the 1980s

1981 The Epson HX-20, often regarded as the first laptop computer, was announced (Compaq's first portable computer followed in 1983).

Niklaus Wirth developed the MODULA-2 programming language.

The first portable computer, the Osborne 1, weighing about 12 kg, was created. Despite a fairly successful start, the company went bankrupt two years later.

1981 IBM released the first personal computer, the IBM PC, based on the 8088 microprocessor.

1982 Intel released the 80286 microprocessor.

The American computer manufacturer IBM, which had previously held the leading position in the production of large computers, began producing the IBM PC professional personal computer with the MS-DOS operating system.

Sun began producing its first workstations.

Lotus Development Corp. released the Lotus 1-2-3 spreadsheet.

The English company Inmos created the OCCAM language, drawing on Oxford professor Tony Hoare's ideas about “communicating sequential processes” and David May's design of an experimental programming language.

1985 Intel released a 32-bit microprocessor 80386, consisting of 250 thousand transistors.

Seymour Cray created the CRAY-2 supercomputer with a capacity of 1 billion operations per second.

Microsoft released the first version of the Windows graphical operating environment.

The emergence of a new programming language, C++.

Development of computer technology in the 1990s

1990 Microsoft released Windows 3.0.

Tim Berners-Lee developed the HTML language (Hypertext Markup Language; the main format of Web documents) and the prototype of the World Wide Web.

Cray released the Cray Y-MP C90 supercomputer with 16 processors and a speed of 16 Gflops.

1991 Microsoft released Windows 3.1.

JPEG graphic format developed

Philip Zimmermann created PGP, a public-key message encryption system.

1992 The first free operating system with extensive capabilities appeared: Linux. The Finnish student Linus Torvalds (the author of the system) decided to experiment with the instructions of the Intel 386 processor and posted the result on the Internet. Hundreds of programmers from around the world began adding to and reworking the program, and it evolved into a fully functional operating system. History is silent about who decided to call it Linux, but how the name came about is clear enough: “Linu” or “Lin” from the creator's name, and “x” or “ux” from UNIX, because the new OS was very similar to it, except that it ran on computers with the x86 architecture.

DEC introduced the first 64-bit RISC Alpha processor.

1993 Intel released the Pentium microprocessor, which had a 64-bit data bus, consisted of 3.1 million transistors, and could perform up to 112 million operations per second.

The MPEG video compression format has appeared.

1994 Apple Computer began shipping the Power Macintosh (Power Mac) series, based on the PowerPC processor.

1995 DEC announced the release of five new models of Celebris XL personal computers.

NEC announced the completion of development of the world's first memory chip with a capacity of 1 Gbit.

The Windows 95 operating system appeared.

SUN introduced the Java programming language.

The RealAudio format has appeared - an alternative to MPEG.

1996 Microsoft released Internet Explorer 3.0, a fairly serious competitor to Netscape Navigator.

1997 Apple released the Macintosh OS 8 operating system.

Conclusion

The personal computer has quickly entered our lives. Just a few years ago it was rare to see a personal computer: they existed, but they were very expensive, and not even every company could have a computer in its office. Now every third home has a computer, which has become deeply embedded in human life.

Modern computers represent one of the most significant achievements of human thought, the influence of which on the development of scientific and technological progress can hardly be overestimated. The scope of computer applications is enormous and is constantly expanding.

My research

Number of computers owned by students at the school in 2007 (table: number of students, how many have computers, percentage of the total).

Number of computers owned by students at the school in 2008 (table: number of students, how many have computers, percentage of the total).

Increase in the number of computers among students (chart: the rise of computers in the school).

Conclusion

Unfortunately, it is impossible to cover the entire history of computers within the framework of an abstract. One could talk at length about how, in the small town of Palo Alto (California), at the Xerox PARC research center, the cream of the programmers of the time gathered to develop revolutionary concepts that radically changed the image of computing machines and paved the way for the computers of the end of the 20th century. About how the talented schoolboy Bill Gates and his friend Paul Allen met Ed Roberts and created the remarkable BASIC for the Altair computer, which made it possible to develop application programs for it. About how the appearance of the personal computer gradually changed: a monitor and keyboard appeared, then a floppy disk drive with its so-called floppy disks, and then a hard drive, while a printer and a mouse became standard accessories. One could talk about the invisible war in the computer market for the right to set standards between the huge IBM corporation and the young Apple, which dared to compete with it and forced the whole world to decide which is better, the Macintosh or the PC. And about many other interesting things that happened quite recently but have already become history.

For many, a world without a computer is a distant history, about as distant as the discovery of America or the October Revolution. But every time you turn on the computer, it is impossible to stop being amazed at the human genius that created this miracle.

Modern personal IBM PC-compatible computers are the most widely used type of computer. Their power is constantly growing, and their scope of application is expanding. These computers can be networked together, allowing tens or hundreds of users to exchange information easily and access databases simultaneously. Electronic mail allows computer users to send text and fax messages to other cities and countries using the regular telephone network and to retrieve information from large data banks. The global electronic communication system, the Internet, provides, at extremely low cost, the opportunity to receive information quickly from all corners of the globe, offers voice and fax communication, and facilitates the creation of intracorporate information networks for companies with branches in different cities and countries. However, the information-processing capabilities of IBM PC-compatible personal computers are still limited, and their use is not justified in all situations.

The review of the history of computer technology in this abstract has at least two limitations: first, everything related to automatic computation before the creation of the ENIAC computer is treated as prehistory; second, the development of computer technology is described only in terms of hardware and microprocessor circuitry.


PC BASICS

People have always felt the need to count. To do this, they used their fingers, pebbles, which they put in piles or placed in a row. The number of objects was recorded using lines that were drawn along the ground, using notches on sticks and knots that were tied on a rope.

With the increase in the number of objects to be counted and the development of the sciences and crafts, the need arose to carry out simple calculations. The most ancient counting instrument, known in various countries, is the abacus (in Ancient Rome the counting pebbles were called calculi). It allowed simple calculations to be performed with large numbers, and it turned out to be such a successful tool that it survived from ancient times almost to the present day.

No one can name the exact time and place of the appearance of the abacus. Historians agree that it is several thousand years old, and that its homeland may be Ancient China, Ancient Egypt, or Ancient Greece.

1.1. A BRIEF HISTORY OF THE DEVELOPMENT OF COMPUTING TECHNOLOGY

With the development of the exact sciences, an urgent need arose to carry out a large number of precise calculations. In 1642 the French mathematician Blaise Pascal constructed the first mechanical adding machine, known as Pascal's adding machine (Fig. 1.1). The machine was a combination of interlocking wheels and gears. The wheels were marked with the digits from 0 to 9. When the first wheel (units) made a full revolution, the second wheel (tens) was automatically advanced by one; when the tens wheel in turn passed 9, the third wheel began to move, and so on. Pascal's machine could only add and subtract.
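The carry mechanism described above is easy to model. Below is a minimal Python sketch (illustrative only, not a reconstruction of Pascal's actual gearing) of a row of decimal wheels in which passing 9 on one wheel advances the next wheel by one; the function name add_on_wheels is mine.

    # A toy model of a chain of decimal counting wheels with automatic carry,
    # in the spirit of Pascal's adding machine (illustrative only).

    def add_on_wheels(wheels, digit, position):
        """Advance the wheel at `position` by `digit` steps (0-9);
        every full revolution (past 9) advances the next wheel by one."""
        carry, wheels[position] = divmod(wheels[position] + digit, 10)
        while carry and position + 1 < len(wheels):
            position += 1
            carry, wheels[position] = divmod(wheels[position] + carry, 10)
        return wheels

    # wheels[0] is the units wheel, wheels[1] the tens wheel, and so on.
    wheels = [0, 0, 0, 0]
    add_on_wheels(wheels, 7, 0)   # add 7 to the units wheel -> [7, 0, 0, 0]
    add_on_wheels(wheels, 5, 0)   # 7 + 5 = 12: units wheel shows 2, tens wheel advances
    print(wheels)                 # [2, 1, 0, 0], i.e. the number 12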

In 1694, the German mathematician Gottfried Wilhelm von Leibniz designed a more advanced calculating machine (Fig. 1.2). He was convinced that his invention would find wide application not only in science, but also in everyday life. Unlike Pascal's machine, Leibniz used cylinders rather than wheels and drives. The cylinders were marked with numbers. Each cylinder had nine rows of projections or teeth. In this case, the first row contained 1 protrusion, the second - 2, and so on until the ninth row, which contained 9 protrusions. The cylinders were movable and were brought into a certain position by the operator. The design of Leibniz's machine was more advanced: it was capable of performing not only addition and subtraction, but also multiplication, division and even square root extraction.
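Machines of the Leibniz type carried out multiplication essentially as repeated addition with shifts of the carriage. A short Python sketch of that idea (the helper long_multiply is hypothetical, introduced only for illustration; the real machines did this with gears and a sliding carriage):

    # Multiplication as repeated addition with digit shifts, the way
    # stepped-drum calculators and later arithmometers carried it out.

    def long_multiply(a, b):
        """Multiply a by b the 'arithmometer way': for each digit of b,
        add a that many times, shifted to the digit's position."""
        total = 0
        for position, digit in enumerate(reversed(str(b))):
            for _ in range(int(digit)):            # turn the crank `digit` times
                total += a * 10 ** position        # carriage shifted by `position` places
        return total

    print(long_multiply(127, 46))   # 5842, the same as 127 * 46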

Interestingly, the descendants of Leibniz's design survived until the 1970s in the form of mechanical calculators (the Felix-type adding machine) and were widely used for various calculations (Fig. 1.3). However, already at the end of the 19th century, with the invention of the electromagnetic relay, the first electromechanical counting devices appeared. In 1887, Herman Hollerith (USA) invented an electromechanical tabulator into which numbers were entered using punched cards. The idea of using punched cards was suggested to him by the way railway tickets were punched with a conductor's punch. The 80-column punched card he developed did not undergo significant changes and was used as an information carrier in the first three generations of computers. Hollerith tabulators were used during the first population census in Russia in 1897, and the inventor himself made a special visit to St. Petersburg for the occasion. From that time on, electromechanical tabulators and other similar devices became widely used in accounting.
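To see how a punched card can carry numbers, here is a deliberately simplified Python model in which each card column has rows 0-9 and a decimal digit is recorded as a hole in the corresponding row. Real Hollerith and IBM card codes also used extra zone rows for letters and signs; the helper names below are invented for this sketch.

    # Simplified model of digit encoding on a punched card: each column has
    # rows 0-9, and a decimal digit is recorded as a hole punched in that row.
    # (Real card codes also used extra "zone" rows for letters and symbols.)

    def punch_number(number, columns=10):
        """Return a list of columns; each column is a set of punched rows."""
        digits = str(number).rjust(columns, " ")
        return [{int(ch)} if ch.isdigit() else set() for ch in digits]

    def read_card(card):
        """Recover the number by reading which row is punched in each column."""
        return int("".join(str(min(col)) if col else "" for col in card))

    card = punch_number(1897)
    print(read_card(card))   # 1897, the year of the census mentioned above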

At the beginning of the 19th century, Charles Babbage formulated the basic principles that should underlie the design of a fundamentally new type of computing machine.

In such a machine, in his opinion, there should be a “warehouse” for storing digital information and a special device for carrying out operations on numbers taken from the “warehouse”; Babbage called this device a “mill.” Another device would control the sequence of operations and the transfer of numbers between the “warehouse” and the “mill”; finally, the machine had to have a device for inputting initial data and outputting the results of calculations. The machine was never built, and only models of it existed (Fig. 1.4), but the principles underlying it were later implemented in digital computers.
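The division of labour Babbage described, a “warehouse” (store) for numbers, a “mill” for arithmetic, and a control device that moves values between them, can be illustrated with a toy Python model. The names and the instruction format here are mine, not Babbage's notation.

    # A toy illustration of Babbage's separation of concerns:
    # a "store" for numbers, a "mill" that operates on them, and a control
    # sequence that moves values between the two. Purely illustrative.

    store = {"v1": 12.0, "v2": 3.0, "v3": 0.0}   # the "warehouse" of variables

    def mill(op, a, b):
        """The 'mill' performs one arithmetic operation on two numbers."""
        return {"add": a + b, "sub": a - b, "mul": a * b, "div": a / b}[op]

    # The control device: a fixed sequence of operations, each naming its
    # source and destination cells in the store.
    program = [
        ("mul", "v1", "v2", "v3"),   # v3 := v1 * v2
        ("add", "v3", "v1", "v3"),   # v3 := v3 + v1
    ]

    for op, src1, src2, dst in program:
        store[dst] = mill(op, store[src1], store[src2])

    print(store["v3"])   # 48.0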

Babbage's scientific ideas captivated the daughter of the famous English poet Lord Byron, Countess Ada Augusta Lovelace. She laid down the first fundamental ideas about the interaction of various blocks of a computer and the sequence of solving problems on it. Therefore, Ada Lovelace is rightfully considered the world's first programmer. Many of the concepts introduced by Ada Lovelace in the descriptions of the world's first programs are widely used by modern programmers.

Fig. 1.1. Pascal's adding machine

Fig. 1.2. Leibniz's calculating machine

Fig. 1.3. The Felix adding machine

Fig. 1.4. Babbage's machine

A new era in the development of computing technology based on electromechanical relays began in 1934, when the American company IBM (International Business Machines) began producing alphanumeric tabulators capable of performing multiplication. In the mid-1930s a prototype of the first local computer network was created on the basis of tabulators: in Pittsburgh (USA), a department store installed a system of 250 terminals connected by telephone lines to 20 tabulators and 15 typewriters for processing customer payments. In 1934-1936 the German engineer Konrad Zuse came up with the idea of creating a universal computer with program control and storage of information in a memory device. He designed the Z-3 machine, the first program-controlled computer and the prototype of modern computers (Fig. 1.5).


Fig. 1.5. Zuse's computer

It was a relay machine using the binary number system, with a memory for 64 floating-point numbers. The arithmetic unit used parallel arithmetic. An instruction consisted of an operation part and an address part. Data were entered on a decimal keyboard; digital output was provided, as well as automatic conversion of decimal numbers to binary and back. The machine performed about three addition operations per second.
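The automatic decimal-to-binary conversion mentioned here can be sketched in a few lines of Python. This is a simplification (non-negative integers only, whereas the Z-3 actually worked with binary floating-point numbers), and the function names are mine.

    # Decimal <-> binary conversion of the kind a binary machine must perform
    # on input and output (integer-only sketch).

    def to_binary(n):
        """Repeatedly divide by 2, collecting remainders (least significant first)."""
        bits = []
        while n:
            n, r = divmod(n, 2)
            bits.append(r)
        return list(reversed(bits)) or [0]

    def to_decimal(bits):
        """Interpret a list of bits, most significant first."""
        value = 0
        for b in bits:
            value = value * 2 + b
        return value

    print(to_binary(64))             # [1, 0, 0, 0, 0, 0, 0]
    print(to_decimal([1, 0, 1, 1]))  # 11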

In the early 1940s, IBM's laboratories, together with scientists from Harvard University, began developing one of the most powerful electromechanical computers. It was called the MARK-1, contained 760 thousand components, and weighed 5 tons (Fig. 1.6).

Fig. 1.6. The MARK-1 calculating machine

The last major project in the field of relay computing technology was the RVM-1, built in 1957 in the USSR, which for a number of tasks was quite competitive with the computers of that time. However, with the advent of the vacuum tube, the days of electromechanical devices were numbered. Electronic components had a great superiority in speed and reliability, which determined the future fate of electromechanical computers. The era of electronic computers had arrived.

The transition to the next stage in the development of computing and programming technology would have been impossible without fundamental scientific research in the field of information transmission and processing. The development of information theory is associated primarily with the name of Claude Shannon. Norbert Wiener is rightfully considered the father of cybernetics, and John von Neumann the creator of the theory of automata.

The concept of cybernetics was born from the synthesis of many scientific directions: first, as a general approach to describing and analyzing the actions of living organisms and of computers or other automata; second, from the analogies between the behavior of communities of living organisms and of human society, and the possibility of describing them with a general theory of control; and finally, from the synthesis of information transfer theory and statistical physics, which led to the important discovery linking the amount of information in a system to its negative entropy. The term “cybernetics” itself comes from the Greek word for “helmsman”; it was first used in its modern sense by N. Wiener in 1947. N. Wiener's book, in which he formulated the basic principles of cybernetics, is called “Cybernetics, or Control and Communication in the Animal and the Machine.”

Claude Shannon is an American engineer and mathematician, the man who is called the father of modern information theory. He proved that the operation of switches and relays in electrical circuits can be described using the algebra invented in the mid-19th century by the English mathematician George Boole. Since then, Boolean algebra has become the basis for analyzing the logical structure of systems of any level of complexity.
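Shannon's observation can be illustrated directly: switches wired in series behave like logical AND, switches wired in parallel like logical OR. A small Python sketch (the helper names are mine):

    # Switching circuits map onto Boolean algebra:
    # two switches in series = AND, two switches in parallel = OR.

    def series(switch_a, switch_b):
        """Current flows only if both switches are closed: logical AND."""
        return switch_a and switch_b

    def parallel(switch_a, switch_b):
        """Current flows if at least one switch is closed: logical OR."""
        return switch_a or switch_b

    # Truth table for a small circuit: (A in series with B), in parallel with C.
    for a in (False, True):
        for b in (False, True):
            for c in (False, True):
                print(a, b, c, "->", parallel(series(a, b), c))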

Shannon proved that every noisy communication channel has a limiting rate of information transmission, called the Shannon limit. At transmission rates above this limit, errors in the transmitted information are inevitable; at any rate below it, however, appropriate coding methods can make the error probability arbitrarily small. His research formed the basis for the development of systems for transmitting information over communication lines.
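The limit referred to here is usually written in the Shannon-Hartley form (the formula itself is not given in the text), where B is the channel bandwidth in hertz and S/N is the signal-to-noise power ratio:

    C = B \log_2\left(1 + \frac{S}{N}\right) \quad \text{bits per second}

Rates above C cannot be made reliable, while any rate below C can be achieved with arbitrarily small error probability by suitable coding.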

In 1946, the brilliant American mathematician of Hungarian origin John von Neumann formulated the basic concept of storing a computer's instructions in its own internal memory, which gave a huge impetus to the development of electronic computing technology.

During World War II, he served as a consultant at the Los Alamos Atomic Center, where he worked on calculations for the explosive detonation of a nuclear bomb and participated in the development of the hydrogen bomb.

Von Neumann's works relate to the logical organization of computers, the functioning of computer memory, self-reproducing systems, and more. He took part in the creation of the first electronic computer, ENIAC; the computer architecture he proposed became the basis for all subsequent models and is still called the “von Neumann architecture.”

I generation of computers. In 1946, work was completed in the USA to create ENIAC, the first computer using electronic components (Fig. 1.7).

Fig. 1.7. The first computer, ENIAC

The new machine had impressive parameters: it used 18 thousand vacuum tubes, occupied a room with an area of 300 m², had a mass of 30 tons, and consumed 150 kW of power. The machine operated at a clock frequency of 100 kHz and performed an addition in 0.2 ms and a multiplication in 2.8 ms, three orders of magnitude faster than relay machines could manage. The shortcomings of the new machine were quickly revealed. In its structure, ENIAC resembled mechanical computers: it used the decimal system; the program was set manually on 40 plugboards; and it took weeks to reconfigure the switching fields. During trial operation it turned out that the reliability of the machine was very low: troubleshooting could take several days. Punched tapes and punched cards, magnetic tapes, and printing devices were used for data input and output. First-generation computers implemented the concept of the stored program and were used for weather forecasting, solving energy and military problems, and other important tasks.
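A quick check of the “three orders of magnitude” claim, using only the figures quoted in this text (0.2 ms per addition for ENIAC, about three additions per second for the relay Z-3 described earlier):

    \frac{1}{0.2\ \text{ms}} = 5000\ \text{additions per second}, \qquad \frac{5000}{3} \approx 1.7 \times 10^{3}

That is, roughly a factor of a thousand, in line with the claim above.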

II generation of computers. One of the most important advances that led to a revolution in computer design and, ultimately, to the creation of personal computers was the invention of the transistor in 1948. The transistor, a solid-state electronic switching element (gate), takes up far less space and consumes far less power while doing the same job as a vacuum tube. Computing systems built on transistors were much more compact, more economical, and far more efficient than tube-based ones. The transition to transistors marked the beginning of miniaturization, which made possible the emergence of modern personal computers (as well as other radio devices: radios, tape recorders, televisions, and so on). For second-generation machines the task of automating programming arose, as the gap between the time needed to develop programs and the computation time itself grew. The second stage in the development of computing technology, in the late 1950s and early 1960s, was characterized by the creation of developed programming languages (Algol, Fortran, Cobol) and by mastering the automation of job-flow management by the computer itself, i.e. the development of operating systems.

The computer created by Mauchly and Eckert worked a thousand times faster than the Mark-1. But it turned out that most of the time this computer stood idle, because to set the method of calculation (the program) it was necessary to connect the wires in the required way, which took hours or even days, while the calculation itself might then take only minutes or even seconds.

To simplify and speed up the process of setting programs, Mauchly and Eckert began to design a new computer that could store the program in its memory. In 1945, the famous mathematician John von Neumann was brought in to work and prepared a report on this computer. The report was sent to many scientists and became widely known because in it von Neumann clearly and simply formulated the general principles of the functioning of computers, that is, universal computing devices. And to this day, the vast majority of computers are made in accordance with the principles that John von Neumann outlined in his report in 1945. The first computer to embody von Neumann's principles was built in 1949 by the English researcher Maurice Wilkes.

The development of UNIVAC (Universal Automatic Computer), the first electronic production machine, was begun around 1947 by Eckert and Mauchly, who founded the Eckert-Mauchly Computer Corporation in December of that year. The first model of the machine (UNIVAC-1) was built for the US Census Bureau and put into operation in the spring of 1951. The synchronous, sequential UNIVAC-1 was created on the basis of the ENIAC and EDVAC computers. It operated at a clock frequency of 2.25 MHz and contained about 5,000 vacuum tubes. The internal storage device, with a capacity of 1,000 12-digit decimal numbers, was implemented on 100 mercury delay lines.

Soon after the UNIVAC-1 machine was put into operation, its developers came up with the idea of ​​automatic programming. It boiled down to ensuring that the machine itself could prepare the sequence of commands needed to solve a given problem.

A strong limiting factor in the work of computer designers in the early 1950s was the lack of high-speed memory. According to one of the pioneers of computing, J. P. Eckert, “the architecture of a machine is determined by its memory.” Researchers focused their efforts on the memory properties of ferrite rings strung on wire matrices.

In 1951, J. Forrester published an article on the use of magnetic cores for storing digital information. The Whirlwind-1 machine was the first to use magnetic-core memory. It consisted of two banks of 32 x 32 x 17 cores, providing storage of 2,048 words of 16-bit binary numbers with one parity bit.
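The quoted core counts add up under the usual interpretation that each 32 x 32 plane stores one bit of every word (an assumption, since the text does not spell this out):

    32 \times 32 = 1024\ \text{words per bank}, \qquad 2 \times 1024 = 2048\ \text{words}, \qquad 17 = 16\ \text{data bits} + 1\ \text{parity bit}

This matches the 2,048 words of 16-bit numbers with one parity bit quoted above.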

Soon, IBM became involved in the development of electronic computers. In 1952, it released its first industrial electronic computer, the IBM 701, which was a synchronous parallel computer containing 4,000 vacuum tubes and 12,000 germanium diodes. An improved version of the IBM 704 machine was distinguished by its high speed, it used index registers and represented data in floating point form.

IBM 704
After the IBM 704 computer, the IBM 709 was released, which, in architectural terms, was close to the machines of the second and third generations. In this machine, indirect addressing was used for the first time and I/O channels appeared for the first time.

In 1956, IBM developed floating magnetic heads on an air cushion. Their invention made it possible to create a new type of memory, disk storage devices, whose importance was fully appreciated in the following decades of the development of computing technology. The first disk storage devices appeared in the IBM 305 and RAMAC machines. The latter had a stack of 50 magnetically coated metal disks that rotated at a speed of 1,200 rpm. Each disk surface contained 100 tracks for recording data, each holding 10,000 characters.

Following the first production computer UNIVAC-1, Remington-Rand in 1952 released the UNIVAC-1103 computer, which worked 50 times faster. Later, software interrupts were used for the first time in the UNIVAC-1103 computer.

Remington-Rand employees used an algebraic form of writing algorithms called “Short Code” (the first interpreter, created in 1949 by John Mauchly). It is also necessary to mention Grace Hopper, a US Navy officer and leader of a programming team (who later rose to the rank of rear admiral), who developed the first compiler program; the term “compiler” itself was first introduced by G. Hopper in 1951. This compiling program translated into machine language an entire program written in an algebraic form convenient for processing. G. Hopper is also the author of the term “bug” as applied to computers. Once, a beetle (in English, a bug) flew into the laboratory through an open window and, landing on the contacts, short-circuited them, causing a serious malfunction in the operation of the machine. The burnt beetle was glued into the logbook where various malfunctions were recorded. This is how the first bug in computers was documented.

IBM took the first steps in automating programming by creating the “Fast Coding System” for the IBM 701 machine in 1953. In the USSR, A. A. Lyapunov proposed one of the first programming languages. In 1957, a group led by J. Backus completed work on FORTRAN, the first high-level programming language to become popular. The language, first implemented on the IBM 704 computer, helped expand the range of computer applications.

Alexey Andreevich Lyapunov
In Great Britain in July 1951, at a conference at the University of Manchester, M. Wilkes presented a report “The Best Method for Designing an Automatic Machine,” which became a pioneering work on the fundamentals of microprogramming. The method he proposed for designing control devices has found wide application.

M. Wilkes realized his idea of microprogramming in 1957 when creating the EDSAC-2 machine. In 1951, M. Wilkes, together with D. Wheeler and S. Gill, had written the first programming textbook, “The Preparation of Programs for an Electronic Digital Computer.”

In 1956, Ferranti released the Pegasus computer, which for the first time implemented the concept of general-purpose registers (GPRs). With the advent of GPRs, the distinction between index registers and accumulators was eliminated, and the programmer had at his disposal not one but several accumulator registers.

The advent of personal computers

Microprocessors were first used in various specialized devices, such as calculators. But in 1974 several companies announced the creation of a personal computer based on the Intel-8008 microprocessor, that is, a device performing the same functions as a large computer but intended for a single user. At the beginning of 1975 the first commercially distributed personal computer, the Altair-8800, based on the Intel-8080 microprocessor, appeared. The computer sold for about $500. And although its capabilities were very limited (the RAM was only 256 bytes, and there was no keyboard or screen), its appearance was greeted with great enthusiasm: several thousand kits were sold in the first months. Buyers supplied the computer with additional devices: a monitor for displaying information, a keyboard, memory expansion units, and so on. Soon these devices began to be produced by other companies. At the end of 1975, Paul Allen and Bill Gates (the future founders of Microsoft) created a BASIC interpreter for the Altair, which allowed users to communicate with the computer easily and to write programs for it without difficulty. This also contributed to the rise in popularity of personal computers.

The success of Altair-8800 forced many companies to also start producing personal computers. Personal computers began to be sold fully equipped, with a keyboard and monitor; the demand for them amounted to tens and then hundreds of thousands of units per year. Several magazines dedicated to personal computers appeared. The growth in sales was greatly facilitated by numerous useful programs of practical importance. Commercially distributed programs also appeared, for example the text editing program WordStar and the spreadsheet processor VisiCalc (1978 and 1979, respectively). These and many other programs made the purchase of personal computers very profitable for business: with their help, it became possible to perform accounting calculations, draw up documents, etc. Using large computers for these purposes was too expensive.

In the late 1970s, the spread of personal computers even led to a slight decline in demand for large computers and minicomputers. This became a matter of serious concern for IBM, the leading manufacturer of large computers, and in 1979 IBM decided to try its hand in the personal computer market. However, the company's management underestimated the future importance of this market and viewed the creation of a personal computer as just a minor experiment, something like one of dozens of projects the company ran to create new equipment. In order not to spend too much money on this experiment, the management gave the unit responsible for the project a degree of freedom unprecedented in the company. In particular, it was allowed not to design the personal computer from scratch but to use components made by other companies. And the unit took full advantage of this chance.

The then-latest 16-bit microprocessor, the Intel-8088, was chosen as the main microprocessor of the computer. Its use made it possible to significantly increase the potential capabilities of the computer, since the new microprocessor could work with 1 megabyte of memory, while all the computers available at that time were limited to 64 kilobytes.
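The memory figures follow from address width: the 8088 has a 20-bit address bus, while the earlier 8-bit machines typically used 16-bit addresses (the bus widths are a standard fact, not stated in the text):

    2^{20} = 1\,048\,576\ \text{bytes} = 1\ \text{MB}, \qquad 2^{16} = 65\,536\ \text{bytes} = 64\ \text{KB}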

In August 1981, a new computer called the IBM PC was officially introduced to the public, and soon after it gained great popularity among users. A couple of years later, the IBM PC took a leading position in the market, displacing 8-bit computer models.

IBM PC
The secret of the IBM PC's popularity is that IBM did not make its computer a single one-piece device and did not protect its design with patents. Instead, it assembled the computer from independently manufactured parts and did not keep the specifications of those parts, or the way they were connected, secret. On the contrary, the design principles of the IBM PC were available to everyone. This approach, called the open architecture principle, made the IBM PC a stunning success, although it prevented IBM from keeping the benefits of that success to itself. Here is how the openness of the IBM PC architecture influenced the development of personal computers.

The promise and popularity of the IBM PC made the production of various components and additional devices for the IBM PC very attractive. Competition between manufacturers has led to cheaper components and devices. Very soon, many companies ceased to be content with the role of manufacturers of components for the IBM PC and began to assemble their own computers compatible with the IBM PC. Since these companies did not need to bear IBM's huge costs for research and maintaining the structure of a huge company, they were able to sell their computers much cheaper (sometimes 2-3 times) than similar IBM computers.

Computers compatible with the IBM PC were initially contemptuously called “clones,” but this nickname did not catch on, as many manufacturers of IBM PC-compatible computers began to implement technical advances faster than IBM itself. Users were able to independently upgrade their computers and equip them with additional devices from hundreds of different manufacturers.

Personal computers of the future

The basis of computers of the future will not be silicon transistors, where information is transmitted by electrons, but optical systems. The information carrier will be photons, since they are lighter and faster than electrons. As a result, the computer will become cheaper and more compact. But the most important thing is that optoelectronic computing is much faster than what is used today, so the computer will be much more powerful.

The PC will be small in size and have the power of modern supercomputers. The PC will become a repository of information covering all aspects of our daily lives, it will not be tied to electrical networks. This PC will be protected from thieves thanks to a biometric scanner that will recognize its owner by fingerprint.

The main way to communicate with the computer will be voice. The desktop computer will turn into a “candy bar”, or rather, into a giant computer screen - an interactive photonic display. There is no need for a keyboard, since all actions can be performed with the touch of a finger. But for those who prefer a keyboard, a virtual keyboard can be created on the screen at any time and removed when it is no longer needed.

The computer will become the operating system of the house, and the house will begin to respond to the owner’s needs, will know his preferences (make coffee at 7 o’clock, play his favorite music, record the desired TV show, adjust temperature and humidity, etc.)

Screen size will not play any role in the computers of the future: the screen can be as big as your desktop, or small. Larger computer screens will be based on photonically excited liquid crystals, which will consume far less power than today's LCD monitors. Colors will be vibrant and images accurate (plasma displays are also possible). In fact, today's concept of “resolution” will largely lose its meaning.

History of the development of computer technology

The development of computing technology can be broken down into the following periods:

• Manual (VI century BC - XVII century AD)

• Mechanical (XVII century - mid-XX century)

• Electronic (mid-XX century - present)

Although Prometheus in Aeschylus’s tragedy states: “Think what I did to mortals: I invented the number for them and taught them how to connect letters,” the concept of number arose long before the advent of writing. People have been learning to count for many centuries, passing on and enriching their experience from generation to generation.

Counting, or more broadly, calculation, can be carried out in various forms: there is oral, written, and instrumental counting. Instrumental counting aids have had different capabilities at different times and have been called by different names.

Manual stage (VI century BC - XVII century AD)

The emergence of counting in ancient times - “This was the beginning of beginnings...”

The estimated age of humankind is 3-4 million years. It was that long ago that man stood upright and took into his hands a tool he had made himself. However, the ability to count (that is, the ability to break the concepts of “more” and “less” down into a specific number of units) developed in humans much later, about 40-50 thousand years ago (the Late Paleolithic). This stage corresponds to the appearance of modern man (Cro-Magnon). Thus, one of the main (if not the main) characteristics distinguishing Cro-Magnon man from the more ancient stages of human development is the ability to count.

It is not difficult to guess that man's first counting device was his fingers.

Fingers turned out to be a great computing device. With their help it was possible to count up to 5, and if you take both hands, up to 10. And in countries where people walked barefoot, it was easy to count to 20 on fingers and toes. For most people's needs at the time this was practically enough.

Fingers turned out to be so closely connected with counting that in ancient Greek the concept of “counting” was expressed by a word that literally meant “to count in fives.” And the Russian word for “five,” pyat, resembles pyast, the metacarpus, a part of the hand (the word pyast is rarely used now, but its derivative zapyaste, “wrist,” is still common). The hand, or metacarpus, is a synonym for, and in fact the basis of, the numeral “five” among many peoples. For example, the Malay “lima” means both “hand” and “five.”

However, there are peoples whose unit of counting was not the fingers but their joints.

Having learned to count on their fingers up to ten, people took the next step forward and began to count in tens. And while some Papuan tribes could count only to six, others could count up to several tens; for this, however, they had to call in many counters at once.

In many languages the words “two” and “ten” sound alike. Perhaps this is explained by the fact that the word “ten” once meant “two hands.” Even now there are tribes that say “two hands” instead of “ten” and “hands and feet” instead of “twenty.” And in English the first ten numbers are called by a common name, “digits,” that is, fingers; this suggests that the English once counted on their fingers.

Finger counting has survived in some places to this day. For example, the historian of mathematics L. Karpinski reports in his book “The History of Arithmetic” that at the world's largest grain exchange, in Chicago, offers and requests, as well as prices, were announced by brokers on their fingers without a single word being spoken.

Then counting with moving stones appeared, counting with the help of rosaries... This was a significant breakthrough in human counting abilities - the beginning of abstracting numbers.