What is a Central Processing Unit?

You are reading these lines on your smartphone, tablet or computer. All of these devices are built around a microprocessor, the heart of any computing device. There are many types of microprocessors, but they all perform the same basic tasks. Today we will talk about how the processor works and what it does. At first glance all this seems obvious, but many users would be interested in deepening their knowledge of the most important component of a computer. We will see how technology based on simple digital logic allows a computer not only to solve math problems but also to serve as an entertainment center. How are just two digits, one and zero, transformed into colorful games and movies? Many people have asked this question and will be glad to get an answer. Indeed, even at the heart of the recent AMD Jaguar processor, on which the latest game consoles are based, lies that same old logic.

In English-language literature a microprocessor is usually called a CPU (central processing unit). The name reflects the fact that a modern processor is a single chip. The first microprocessor in history was created by Intel back in 1971.

Intel's role in the history of the microprocessor industry


We are talking about the Intel 4004. It was not powerful: it could only add and subtract, and it processed just four bits of information at a time (that is, it was 4-bit). But for its time its appearance was a major event, because the entire processor fit on one chip. Before the Intel 4004, computers were built from whole sets of chips or from discrete components (transistors). The 4004 microprocessor became the basis of one of the first portable calculators.

The first microprocessor for home computers was the Intel 8080, introduced in 1974. All the computing power of an 8-bit computer was contained in a single chip. But the really important milestone was the Intel 8088 processor. It appeared in 1979, and from 1981 it was used in the first mass-market personal computer, the IBM PC.

Processors then grew steadily more powerful. Anyone even slightly familiar with the history of the industry will remember that the 8088 was succeeded by the 80286, then came the 80386, followed by the 80486. After that came several generations of Pentiums: the original Pentium, Pentium II, Pentium III and Pentium 4. All of these were Intel processors based on the 8088's basic design, and they remained backward compatible: the Pentium 4 could process any piece of code written for the 8088, but at a speed roughly five thousand times higher. Not so many years have passed since then, yet several more generations of microprocessors have come and gone.


Since 2004 Intel has been offering multi-core processors, and the number of transistors in them has grown by millions. Yet even now the processor obeys the same general rules that governed the early chips. The table traces the history of Intel microprocessors up to and including 2004; here is what its columns mean:
  • Name. The processor model
  • Date. The year the processor was first introduced. Many processors were re-introduced several times, each time at a higher clock frequency, so a later modification of a chip could be re-announced years after the first version reached the market
  • Transistors. The number of transistors in the chip. You can see that this figure has grown steadily
  • Microns. One micron is one millionth of a meter. This value gives the width of the thinnest wire in the chip. For comparison, a human hair is about 100 microns thick
  • Clock speed. The maximum clock speed of the processor
  • Data Width. The width ("bitness") of the processor's arithmetic logic unit (ALU). An 8-bit ALU can add, subtract, multiply and so on with two 8-bit numbers, while a 32-bit ALU works with 32-bit numbers. To add two 32-bit numbers, an 8-bit ALU has to execute four instructions; a 32-bit ALU handles the task in a single instruction. In many (but not all) cases the width of the external data bus matches the width of the ALU. The 8088 had a 16-bit ALU but an 8-bit bus, while late Pentiums typically had a 64-bit bus and a still 32-bit ALU
  • MIPS (millions of instructions per second). A rough estimate of processor performance. Modern microprocessors perform so many different tasks that this indicator has lost its original meaning and is useful mainly for comparing the raw processing power of several processors (as in this table)

There is a direct relationship between clock speed, transistor count and the number of operations a processor performs per second. For example, the 8088 had a clock speed of 5 MHz but a performance of only 0.33 million operations per second: it took about 15 clock cycles to execute one instruction. By 2004, processors could already execute two instructions per clock cycle. This improvement came from increasing the number of transistors in the chip.

A chip is also called an integrated circuit (or simply a microcircuit). Most often it is a small, thin silicon die onto which the transistors are etched. A chip two and a half centimeters on a side can contain tens of millions of transistors, while the simplest processors can be squares only a few millimeters on a side, which is still enough room for several thousand transistors.

Microprocessor logic


To understand how a microprocessor works, one should study the logic on which it is based, as well as become familiar with assembly language. This is the native language of the microprocessor.

A microprocessor can execute a specific set of machine instructions. Using this instruction set, the processor performs three main tasks:

  • Using its arithmetic logic unit, the processor performs mathematical operations: addition, subtraction, multiplication and division. Modern microprocessors also fully support floating-point operations (using a dedicated floating-point unit)
  • The microprocessor can move data from one memory location to another
  • The microprocessor can make a decision and, based on that decision, "jump", that is, switch to executing a different set of instructions

The microprocessor contains:

  • An address bus (8, 16 or 32 bits wide) that sends an address to memory
  • A data bus (8, 16, 32 or 64 bits wide) that sends data to memory or receives data from it. When people talk about the "bitness" of a processor, they usually mean the width of the data bus
  • RD (read) and WR (write) lines that tell memory whether it is being read or written
  • A clock line that feeds the processor its clock pulses
  • A reset line that zeroes the program counter and restarts execution

To keep things simple, we will assume that both buses, address and data, are only 8 bits wide. Let's take a quick look at the components of this relatively simple microprocessor:

  • Registers A, B and C are latches used for intermediate data storage
  • The address latch is similar to registers A, B and C
  • The program counter is a latch that can increment its value by one in a single step (when told to) and reset its value to zero (when told to)
  • The ALU (arithmetic logic unit) can add, subtract, multiply and divide 8-bit numbers, or it may be as simple as a plain 8-bit adder
  • The test register is a special latch that stores the results of comparisons performed by the ALU. The ALU typically compares two numbers and determines whether they are equal or whether one is greater than the other. The test register also stores the carry bit from the adder's last operation. It holds these values in flip-flops, and the instruction decoder can later use them to make decisions
  • The six boxes labeled "3-State" in the diagram are tri-state buffers. Many output sources can be wired to the same line, but a tri-state buffer lets only one of them at a time drive a "0" or "1" onto that line; the others are disconnected from it
  • The instruction register and the instruction decoder keep all of the above components under control

This diagram does not show the control lines of the instruction decoder, which can be expressed as "orders" like the following:

  • "Have register A latch the value currently on the data bus"
  • "Have register B latch the value currently on the data bus"
  • "Have register C latch the value currently coming from the arithmetic logic unit"
  • "Have the program counter latch the value currently on the data bus"
  • "Have the address latch take the value currently on the data bus"
  • "Have the instruction register latch the value currently on the data bus"
  • "Increment the program counter [by one]"
  • "Reset the program counter to zero"
  • "Activate one of the six tri-state buffers" (six separate control lines)
  • "Tell the arithmetic logic unit which operation to perform"
  • "Have the test register latch the test bits from the ALU"
  • "Activate RD (the read line)"
  • "Activate WR (the write line)"

The instruction decoder receives bits from the test register, the clock line and the instruction register. Simplifying its job as much as possible: it is this module that "tells" the rest of the processor what needs to be done at any given moment.

Microprocessor memory


Familiarity with computer memory and its hierarchy will help you better understand the contents of this section.

Above we described the address and data buses, as well as the read (RD) and write (WR) lines. These buses and lines connect to memory: random access memory (RAM) and read-only memory (ROM). In our example the microprocessor's buses are 8 bits wide, which means it can address 256 bytes (two to the eighth power) and can read or write 8 bits of memory at a time. Let's assume this simple microprocessor has 128 bytes of ROM (starting at address 0) and 128 bytes of RAM (starting at address 128).

A read-only memory chip contains a predefined, fixed set of bytes. The address bus tells the ROM which byte to place on the data bus, and when the read (RD) line changes state, the ROM puts the requested byte there. In other words, only reading is possible.

The processor can not only read information from the RAM, it can also write data to it. Depending on whether it is reading or writing, the signal arrives either through the read channel (RD) or through the write channel (WR). Unfortunately, RAM is volatile. When the power is turned off, it loses all data stored in it. For this reason, the computer needs non-volatile read-only memory.

Moreover, a computer can theoretically do without RAM at all: many microcontrollers place the required bytes of data directly on the processor chip. But it cannot do without ROM. In personal computers the ROM holds the Basic Input/Output System (BIOS). At startup, the microprocessor begins its work by executing the instructions it finds in the BIOS.

The BIOS commands test the computer's hardware, then access the hard drive and select the boot sector, a separate small program that the BIOS first reads from disk and places in RAM. The microprocessor then starts executing the boot sector's commands from RAM. The boot sector program tells the microprocessor which further data (intended for subsequent execution) should be moved from the hard disk into RAM. This is how the processor loads the operating system.

Microprocessor instructions


Even the simplest microprocessor can handle a fairly large set of instructions. Each instruction loaded into the instruction register has its own meaning. Since sequences of bits are hard for people to memorize, each instruction is given a short mnemonic word, and these words make up the processor's assembly language. An assembler translates the words into the binary language the processor understands.

Here is a list of assembly words for a conditional simple processor, which we are considering as an example for our story:

  • LOADA mem - Load register A from memory address mem
  • LOADB mem - Load register B from memory address mem
  • CONB con - Load the constant value con into register B
  • SAVEB mem - Save the value of register B to memory address mem
  • SAVEC mem - Save the value of register C to memory address mem
  • ADD - Add the values of registers A and B and store the result in register C
  • SUB - Subtract the value of register B from the value of register A and store the result in register C
  • MUL - Multiply the values of registers A and B and store the result in register C
  • DIV - Divide the value of register A by the value of register B and store the result in register C
  • COM - Compare the values of registers A and B and pass the result to the test register
  • JUMP addr - Jump to address addr
  • JEQ addr - If the two compared values were equal, jump to addr
  • JNEQ addr - If the two compared values were not equal, jump to addr
  • JG addr - If the first value was greater, jump to addr
  • JGE addr - If the first value was greater than or equal, jump to addr
  • JL addr - If the first value was less, jump to addr
  • JLE addr - If the first value was less than or equal, jump to addr
  • STOP - Stop execution

The mnemonics are short English words for a reason: assembly language, like many other programming languages, is based on English, the everyday language of the people who created digital technology.

Microprocessor operation on the example of calculating the factorial


Let's look at how a microprocessor works through a specific example: executing a simple program that calculates the factorial of the number 5. First, let's solve the problem "in a notebook":

factorial of 5 = 5! = 5 * 4 * 3 * 2 * 1 = 120

In the C programming language, the piece of code performing this calculation looks like this:

  a = 1;
  f = 1;
  while (a <= 5)
  {
      f = f * a;
      a = a + 1;
  }

When this program finishes its work, the variable f contains the factorial of five, that is, 120.

The C compiler translates this code into a set of assembly language instructions. In the processor we are considering, RAM starts at address 128 and ROM (which will hold the program) starts at address 0. So in this processor's assembly language the program looks like this:

// Assume a is at address 128
// Assume f is at address 129

 0  CONB 1      // a = 1;
 1  SAVEB 128
 2  CONB 1      // f = 1;
 3  SAVEB 129
 4  LOADA 128   // if a > 5 then jump to 17
 5  CONB 5
 6  COM
 7  JG 17
 8  LOADA 129   // f = f * a;
 9  LOADB 128
10  MUL
11  SAVEC 129
12  LOADA 128   // a = a + 1;
13  CONB 1
14  ADD
15  SAVEC 128
16  JUMP 4      // loop back to the if
17  STOP

Now the next question arises: what do all these instructions look like in ROM? Each instruction must be represented as a binary number. To make the material easier to follow, let's assign each assembly instruction of our processor a unique number:

  • LOADA mem - 1
  • LOADB mem - 2
  • CONB con - 3
  • SAVEB mem - 4
  • SAVEC mem - 5
  • ADD - 6
  • SUB - 7
  • MUL - 8
  • DIV - 9
  • COM - 10
  • JUMP addr - 11
  • JEQ addr - 12
  • JNEQ addr - 13
  • JG addr - 14
  • JGE addr - 15
  • JL addr - 16
  • JLE addr - 17
  • STOP - 18

// Assume a is at address 128
// Assume f is at address 129

Addr  Machine code / value
  0   3     // CONB 1
  1   1
  2   4     // SAVEB 128
  3   128
  4   3     // CONB 1
  5   1
  6   4     // SAVEB 129
  7   129
  8   1     // LOADA 128
  9   128
 10   3     // CONB 5
 11   5
 12   10    // COM
 13   14    // JG 17 (instruction 17 lives at byte address 31)
 14   31
 15   1     // LOADA 129
 16   129
 17   2     // LOADB 128
 18   128
 19   8     // MUL
 20   5     // SAVEC 129
 21   129
 22   1     // LOADA 128
 23   128
 24   3     // CONB 1
 25   1
 26   6     // ADD
 27   5     // SAVEC 128
 28   128
 29   11    // JUMP 4 (instruction 4 lives at byte address 8)
 30   8
 31   18    // STOP

As you can see, seven lines of C code became 18 assembly instructions, which occupy 32 bytes of ROM.

Decoding


We have to start the conversation about decoding with a note on terminology. Not every term has a single accepted name: the key component of the microprocessor's logic is called both an "instruction decoder" and a "command decoder", and neither variant is any more or less "correct" than the other. We will use the terms interchangeably.

An instruction decoder translates each machine code into the set of signals that drive the various components of the microprocessor. Put simply, it is the module that coordinates the "software" with the "hardware".

Let's walk through what the instruction decoder does for an ADD instruction:

  • During the first clock cycle the instruction is fetched. The decoder must: activate the tri-state buffer for the program counter; activate the RD (read) line; and activate the data-in tri-state buffer so the value passes into the instruction register
  • During the second clock cycle the ADD instruction is decoded. The arithmetic logic unit performs the addition and transfers the value to register C
  • During the third clock cycle the program counter is incremented (in theory this can overlap with what happens in the second cycle)

Each command can be represented as a set of sequentially performed operations that manipulate the components of the microprocessor in a specific order. That is, program instructions lead to quite physical changes: for example, a change in the position of the latch. Some instructions may require two or three processor clock cycles to complete. Others may even need five or six cycles.

Microprocessors: performance and trends


The number of transistors in a processor strongly affects its performance. As noted earlier, the 8088 needed about 15 clock cycles to execute a typical instruction, and, because of the design of its multiplier, roughly 80 cycles to perform one 16-bit multiplication. The more transistors a chip has, and the more capable its multiplier, the more the processor gets done in a single cycle.

Many transistors also go into pipelining. In a pipelined architecture, the execution of instructions partially overlaps. An instruction may still take the same five cycles, but if the processor is working on five instructions at once (each at a different stage of completion), then on average one instruction completes every clock cycle.

Many modern processors have more than one instruction decoder, each with its own pipeline, which allows more than one instruction to complete per processor cycle. This design requires an enormous number of transistors.

64-bit processors


Although 64-bit processors became widespread only a few years ago, they have existed for a relatively long time: since 1992. Both Intel and AMD currently offer such processors. A 64-bit processor is one with a 64-bit arithmetic logic unit (ALU), 64-bit registers and 64-bit buses.

The main reason processors need to be 64-bit is that the architecture expands the address space. A 32-bit processor can access at most two to four gigabytes of RAM. Those numbers once seemed gigantic, but the years have passed and today such memory surprises no one. A few years ago a typical computer had 256 or 512 megabytes of memory, and in those days the 4 GB limit troubled only servers and machines running large databases.

But it quickly turned out that even ordinary users sometimes run short of two or even four gigabytes of RAM. That annoying limitation does not apply to 64-bit processors. The address space available to them seems practically infinite these days: two to the sixty-fourth power bytes, roughly 16 billion gigabytes. Such a gigantic amount of RAM is not expected in the foreseeable future.

The 64-bit address bus, as well as the wide and high-speed data buses of the respective motherboards, allow 64-bit computers to increase the speed of input and output data when interacting with devices such as hard disk and video card. These new capabilities significantly increase the performance of modern computing machines.

But not all users will notice the advantages of the 64-bit architecture. It matters most to those who edit video and photos or work with very large images, and 64-bit computers are highly valued by serious gamers. Those who simply use a computer to chat on social networks, browse the web and edit text files will most likely not feel any advantage from these processors.

Based on materials from computer.howstuffworks.com

Nowadays processors play a special role in advertising: manufacturers, Intel above all, try hard to convince us that the processor is the decisive component of a computer. That raises the question: what is a modern processor, and what is a processor in general?

For a long time, until the 1990s to be precise, the processor really did determine a computer's performance. The processor decided everything; today that is no longer quite true.

Not everything is decided by the central processor, and Intel's processors are not always preferable to AMD's. The role of other computer components has grown noticeably, and on a home machine the processor is rarely the bottleneck; still, like every other component, it deserves a closer look, because no computer can exist without one. And processors long ago stopped being confined to a few kinds of computers, since the variety of computing devices has grown enormously.

A processor (central processing unit) is a very complex microcircuit that executes machine code; it is responsible for performing various operations and for controlling the computer's peripherals.

The common short designation for the central processor is the abbreviation CPU, which stands for Central Processing Unit.

Using microprocessors

A processor is built into almost any electronic equipment: not just TVs and video players but even toys, while smartphones are essentially computers in their own right, just of a different design.

Several CPU cores can work on completely different tasks independently of each other. Even when the computer runs only one task, its execution can be accelerated by parallelizing typical operations across the cores.

Internal frequency multiplier factor

Signals inside the processor die can run at a very high frequency, much higher than the frequency at which the processor can talk to the other components of the computer. As a result, the frequency at which the motherboard operates is one thing, while the processor's own frequency is another, higher one.

The frequency the processor receives from the motherboard is called the reference (base) frequency; the processor multiplies it by an internal coefficient, the internal multiplier, to obtain its internal clock frequency.

The internal frequency multiplier is very often used by overclockers to unlock a processor's overclocking potential.

CPU cache

The processor receives the data it works on from RAM, but signals inside the processor's circuits are handled at a very high frequency, while accesses to the RAM modules themselves are many times slower.

A high internal multiplier pays off most when all the needed information is already inside the processor rather than outside it, for example in RAM.

The processor itself has only a few cells for holding data, called registers, and they can store very little; so, to speed up the processor and with it the whole system, caching technology was introduced.

The cache is a small set of memory cells that acts as a buffer. Whenever data is read from main memory, a copy lands in the CPU cache. If the same data is needed again, it is right at hand in the buffer, which improves performance.

The cache memory in current processors is built as a pyramid of levels:

  1. Level 1 cache is the smallest but also the fastest, and it is part of the processor die. It is made with the same technology as the processor's registers, which makes it very expensive, but its speed and reliability are worth it. Although it is measured in mere hundreds of kilobytes, it plays a huge role in performance.
  2. Level 2 cache, like level 1, sits on the processor die and runs at the core frequency. In modern processors it ranges from hundreds of kilobytes to several megabytes.
  3. Level 3 cache is slower than the previous levels, but still much faster than RAM, which is what matters; it is measured in tens of megabytes.

L1 and L2 cache sizes affect both the performance and the price of a processor. The third cache level is a kind of bonus, yet none of the microprocessor manufacturers is in a hurry to drop it. A level 4 cache also exists and justifies itself in multiprocessor systems, which is why you will not find it in an ordinary computer.

Processor installation socket (Socket)

Since technology is not yet so advanced that a processor can receive information at a distance, it has to be fastened to the motherboard, installed in it and made to interact with it. This mount is called a socket, and each socket fits only a specific type or family of processors; sockets also differ between manufacturers.

What is a processor: architecture and technological process

A processor's architecture is its internal organization; the arrangement of its elements determines its characteristics. An architecture is shared by a whole family of processors, while the incremental changes made to improve it or fix errors are called steppings.

The process technology determines the size of the processor's components and is measured in nanometers (nm). Smaller transistors make for a smaller processor, which is exactly where the development of future CPUs is headed.

Power consumption and heat dissipation

Power consumption depends directly on the process technology used to manufacture the processor: a finer process lowers consumption, while higher frequencies raise both power consumption and heat dissipation.

To reduce power consumption and heat dissipation, processors have automatic power-saving systems that scale the processor back whenever full performance is not needed. High-performance computers, accordingly, need a good processor cooling system.

Summing up the material of the article - the answer to the question of what a processor is:

Today's processors can work with RAM over multiple channels, and new instructions keep appearing that raise their functional level. The ability to process graphics on the processor itself lowers costs, both of the processors themselves and of the office and home computers built around them. Virtual cores appear for a more practical distribution of performance; technologies develop, and with them the computer and its central processor.

A modern processor takes the shape of a small rectangle: a silicon die protected by a special plastic or ceramic package. Under that protection sit all the basic circuits that make the CPU work. If the appearance is extremely simple, what about the circuitry itself and how the processor works? Let's take a closer look.

The CPU contains a number of different elements, each performing its own role, with data and control signals passing between them. Ordinary users are used to distinguishing processors by clock speed, cache size and core count, but that is far from everything that ensures reliable and fast operation. Each component deserves special attention.

Architecture

The internal design of CPUs differs from family to family; each family has its own set of properties and functions, which is called its architecture. You can see an example of a processor design in the image below.

But many people mean something slightly different by processor architecture. From a programming point of view, an architecture is defined by the set of code a processor can execute. If you buy a modern CPU, it most likely belongs to the x86 architecture.

Cores

The main part of the CPU is called the core; it contains all the essential blocks and performs the logical and arithmetic work. The figure below shows what each functional block in the core does:

  1. Instruction fetch module. Here, instructions are recognized by the address indicated in the command counter. The number of simultaneous reading of commands directly depends on the number of installed decryption blocks, which helps to load each cycle of work with the largest number of instructions.
  2. Transition predictor is responsible for the optimal operation of the instruction fetch unit. It determines the sequence of executable commands, loading the kernel pipeline.
  3. Decoding module. This part of the kernel is responsible for defining some of the processes to perform tasks. The decoding task itself is very difficult due to the variable instruction size. In the newest processors, there are several such blocks in one core.
  4. Data fetch modules. They take information from RAM or cache memory. They carry out exactly the data selection, which is necessary at this moment for the execution of the instruction.
  5. Control block. The name itself speaks already of the importance of this component. In the core, it is the most important element, since it distributes energy between all the blocks, helping to carry out each action on time.
  6. Results storage module. It is intended for writing after the end of instruction processing in RAM. The save address is specified in the running task.
  7. An element of work with interrupts. The CPU is capable of performing several tasks at once thanks to the interrupt function, this allows it to stop the progress of one program by switching to another instruction.
  8. Registers. This is where the temporary results of instructions are stored, this component can be called a small fast RAM. Often its size does not exceed several hundred bytes.
  9. Command counter. It stores the address of the command that will be used on the next processor cycle.
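Taken together, the blocks above form the classic fetch-decode-execute cycle. The toy Python model below is only an illustration under invented assumptions (a tuple-based instruction set and a list standing in for RAM; real cores decode binary opcodes):

```python
# Toy model of the fetch-decode-execute cycle described above.
# The instruction set, encoding and "RAM" layout are invented for
# illustration; real hardware works on binary opcodes.

def run(program, ram):
    registers = {"r0": 0, "r1": 0}   # tiny register file (block 8)
    pc = 0                           # command counter (block 9)
    while pc < len(program):
        instr = program[pc]          # instruction fetch (block 1)
        op, *args = instr            # "decoding" (block 3)
        pc += 1
        if op == "load":             # data fetch from RAM (block 4)
            reg, addr = args
            registers[reg] = ram[addr]
        elif op == "add":
            dst, src = args
            registers[dst] += registers[src]
        elif op == "store":          # store the result (block 6)
            reg, addr = args
            ram[addr] = registers[reg]
        elif op == "halt":
            break
    return registers, ram

ram = [7, 5, 0]
program = [
    ("load", "r0", 0),
    ("load", "r1", 1),
    ("add", "r0", "r1"),
    ("store", "r0", 2),
    ("halt",),
]
registers, ram = run(program, ram)
print(ram[2])  # 12
```

Here the `pc` variable plays the role of the command counter, and the dictionary of registers stands in for the register file.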

System bus

The system bus connects the CPU with the other devices in the PC. Only the CPU is connected to it directly; the remaining elements attach through various controllers. The bus itself contains many signal lines over which information is transmitted, each with its own protocol that allows the controllers to communicate with the rest of the connected components. The bus has its own frequency; the higher it is, the faster information is exchanged between the connected elements of the system.

Cache memory

The speed of the CPU depends on its ability to fetch instructions and data from memory as quickly as possible. The cache shortens operations by acting as a temporary buffer between the CPU and RAM that speeds up the transfer of data in both directions.

The main characteristic of cache memory is its division into levels. The higher the level, the slower and larger the memory; the fastest and smallest is level one. The principle of operation is simple: the CPU reads data from RAM and stores it in a cache level, evicting the information that has not been accessed for a long time. If the processor needs that information again, it will receive it faster from the temporary buffer.
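The eviction principle described above can be sketched in a few lines of Python. This is a software model of a least-recently-used buffer, not of any real cache hardware; the capacity and access pattern are arbitrary:

```python
# Sketch of the principle above: a small, fast buffer in front of a
# slow backing store that evicts the least recently used entry.
from collections import OrderedDict

class Cache:
    def __init__(self, capacity, ram):
        self.capacity = capacity
        self.ram = ram              # the slow backing store ("RAM")
        self.lines = OrderedDict()  # address -> value, oldest first
        self.hits = self.misses = 0

    def read(self, addr):
        if addr in self.lines:                  # cache hit
            self.lines.move_to_end(addr)        # mark as recently used
            self.hits += 1
        else:                                   # miss: go to "RAM"
            self.misses += 1
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict the oldest entry
            self.lines[addr] = self.ram[addr]
        return self.lines[addr]

ram = {a: a * 10 for a in range(100)}
cache = Cache(capacity=2, ram=ram)
for addr in [1, 2, 1, 3, 1, 2]:
    cache.read(addr)
print(cache.hits, cache.misses)  # 2 4
```

The same access repeated soon after is served from the buffer (a hit); anything pushed out by newer data has to be fetched from "RAM" again.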

Socket (connector)

Since the processor has its own socket, you can easily replace it in the event of a failure or when upgrading your computer. Without a socket, the CPU would simply be soldered to the motherboard, making later repair or replacement difficult. Note that each socket is designed for installing only certain processors.

Users often inadvertently buy an incompatible processor and motherboard, which causes extra problems.

The microprocessor in a personal computer, as in other devices, be they phones, tablets, laptops or other gadgets, is the main central device that performs almost all calculations and is responsible for data processing. You could even say that the CPU is the "brain" of any modern computer or high-tech device. It is also one of the most expensive components in modern computers.

1. The history of the appearance of the processor

The first computer processors, based on mechanical relays, appeared in the fifties of the last century. Some time later came models with vacuum tubes, which were in turn replaced by transistors. The computers themselves were rather large and expensive machines.

The subsequent development of processors came down to placing all of their components in a single microcircuit. The emergence of integrated semiconductor circuits made this idea feasible.

In 1969, Busicom ordered twelve microcircuits from Intel for use in its own design, a desktop calculator. Even then, Intel's developers had the idea of replacing several microcircuits with one. The idea was approved by the corporation's management, since the technology significantly cut the cost of manufacturing microcircuits while letting specialists make the processor universal enough for use in other computing devices.

Some systems allow you to raise the processor's operating frequency above its stock value; this procedure is called "overclocking". Setting a higher processor frequency increases its performance.

7. Comparison of manufacturers Intel and AMD

The American company Intel was founded in 1968, while its main competitor, AMD, appeared a year later.

The fact that AMD came to light a year later than Intel had a significant impact on their rivalry. The first AMD processors were copies of Intel's, but that did not stop AMD from going on to develop the first 16-core processor. Back in 2005, ordinary users were offered AMD's first dual-core processor, the AMD Athlon 64 X2.

Intel's dual-core Core 2 Duo processors hit the market a year later, and AMD processors remain noticeably cheaper than Intel's to this day.

Which processor should you prefer? If you need a computer for complex professional software, it is better to buy a PC with an Intel processor.

AMD processors are a great option for gaming PCs and in situations that do not require high performance hardware.

8. Processor cache

The cache is nothing more than the processor's own memory, serving much the same purpose as RAM. The processor uses the cache to store data: the most frequently used information is buffered there, which significantly cuts the time of subsequent accesses to it.

The RAM of computers sold today starts at about 1 GB, while the processor cache does not exceed 8 MB. As these figures show, the difference between the two types of memory is considerable. Even so, this volume is enough to ensure normal performance of the whole system. Processors with two cache levels, L1 and L2, are of great interest today. L1 is smaller than L2 and is used to store instructions; the second level, being larger, is used for data storage. Many current processors have a shared L2 cache.
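The level hierarchy can be illustrated with a lookup that tries L1 first, then L2, then RAM. The latencies below are invented cycle counts for illustration, not real hardware figures:

```python
# Illustration of the cache levels described above: check the small
# fast L1 first, then the larger L2, and only then RAM. The latency
# numbers are made-up cycle counts, not real hardware data.

L1 = {0x10: "a"}                     # small and fast
L2 = {0x10: "a", 0x20: "b"}          # bigger and slower
RAM = {0x10: "a", 0x20: "b", 0x30: "c"}

def read(addr):
    if addr in L1:
        return L1[addr], 4           # ~4 cycles: L1 hit
    if addr in L2:
        L1[addr] = L2[addr]          # promote the line into L1
        return L2[addr], 12          # ~12 cycles: L2 hit
    value = RAM[addr]
    L2[addr] = value                 # fill both levels on the way back
    L1[addr] = value
    return value, 200                # ~200 cycles: all the way to RAM

print(read(0x10))  # L1 hit
print(read(0x30))  # miss everywhere, fetched from RAM
print(read(0x30))  # now an L1 hit
```

The point of the hierarchy is visible in the numbers: once data has been fetched from RAM, repeated accesses cost a few cycles instead of hundreds.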

9. Processor functions and technologies: MMX, SSE, 3DNow!, Hyper-Threading

Modern processors are equipped with characteristic additional functions and technologies that expand their capabilities:

3DNow!, MMX, SSE, SSE2 and SSE3 are technologies that optimize work with large data sets and multimedia files;

AMD processors are designed with NX-bit (No Execute) technology, while Intel processors have the similar XD (Execute Disable Bit);

Cool'n'Quiet (in AMD) and TM1/TM2, C1E and EIST (in Intel) reduce electricity consumption;

AMD64 (or EM64T in Intel processors) technology provides 64-bit instructions;

Simultaneous execution of multiple instruction threads in some Intel processors relies on HT (Hyper-Threading Technology).
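Hyper-Threading itself happens inside the core, but its basic idea, two instruction streams sharing one set of execution resources, can be loosely imitated with ordinary OS threads. A minimal Python sketch (the stream names and step counts are arbitrary):

```python
# Loose software analogy for the idea behind Hyper-Threading: two
# instruction streams share one execution resource and interleave.
# This is OS-level threading, not real SMT inside a core.
import threading

log = []
lock = threading.Lock()

def stream(name, steps):
    for i in range(steps):
        with lock:                 # the shared "execution unit"
            log.append((name, i))

t1 = threading.Thread(target=stream, args=("A", 3))
t2 = threading.Thread(target=stream, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(log))  # 6 entries; the interleaving order depends on scheduling
```

Both streams finish all their steps, but the exact interleaving differs from run to run, which is exactly why such sharing keeps otherwise idle resources busy.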

10. Multi-core processors

At the heart of modern central microprocessors are the cores. A core is a silicon die with an area of about one square centimeter. Despite its small size, microscopic logic elements make it possible to lay out the processor's circuitry, the so-called chip architecture, on its surface.

A multi-core processor is a central microprocessor with two or more computational cores on the surface of one processor chip, or enclosed together in one package.

Advantages of a multi-core processor:

The work of applications can be distributed across several cores;

Computationally intensive processes run significantly faster;

Application response speed increases;

Electrical energy consumption is reduced;

Resource-intensive multimedia programs make more productive use of the hardware;

Working at the PC becomes more comfortable for the user.

11. Processor manufacturing

Microprocessor manufacturing involves at least two important stages. In the first, the wafers are produced and given conductive properties. In the second, the produced wafers are tested, after which the processor is assembled and packaged.

Today, leading processor manufacturers such as AMD and Intel try to organize production so as to cover the widest possible market segments with the smallest possible range of dies. The Intel Core 2 Duo processors are an excellent confirmation of this. The line includes three processors with different code names: Merom for mobile devices, Conroe for desktops and Woodcrest for servers. All three share the same technological basis, which lets the manufacturer make the final decision at the last stage of production. If the market needs more mobile processors, the company focuses on releasing the Socket 479 model; if demand for desktop models grows, Intel packages dies for Socket 775; and if demand for server processors rises, the same applies to Socket 771.

12. Marking and code names of processors

Products manufactured at the factories of large enterprises are designated by code names, which is far more convenient in official conversations and correspondence than using long official designations. Sometimes the wider public learns these internal code names, but they are rarely used in everyday life.

With processor code names the situation is the opposite: lately they have come into conversational use and appear in official documentation as processor markings.

At the same time, you only need to remember a few code names, for example when upgrading a PC, since beyond sounding attractive and serving advertising ambitions, such names carry no useful information for the consumer.

13. Sockets (socket) for processors

Translated from English, "socket" means "connector" or "nest". Applied to a computer, the socket is the place where the central processor is installed. Each processor model has its own version of the connector, because as processor manufacturing technology improved, the architecture, the number of transistors, the pin layout and so on were modernized as well.

The CPU socket is designed as a slot to make installing the CPU easy. Connectors greatly simplify replacing the processor during later repair or upgrade of the PC.

14. CPU cooling

A fan, or cooler as it is also called, is a device whose task is to cool the processor. There are different models of coolers, but most often they are installed on top of the processor itself.

Coolers are either active or passive. Passive coolers are ordinary heatsinks: quite cheap, consuming no electricity and practically silent. An active cooler is a heatsink with a fan attached to it.

The most popular today are active air coolers, consisting of a metal heatsink with a fan mounted on it.

Being a mechanical device, the cooler's moving parts need timely lubrication with machine oil; using vegetable oils for this purpose is strictly forbidden.

You can tell the device needs lubrication by the characteristic, gradually increasing noise from the cooler.

15. Malfunctions and errors in processors

If the processor malfunctions, the PC may start shutting down and rebooting on its own, the operating system may freeze, and the hard drive may simply not be detected. All of this is usually accompanied by strong heating of the processor. A faulty processor often causes persistent errors in the operation of the operating system and related software.

Under no circumstances should a faulty processor be tested on a working motherboard, since doing so may well cause the motherboard to fail.

Most often, processors are damaged by overheating or by incorrect assembly of the computer, which can bend the processor pins and cause a short circuit. In such cases the only solution is to replace the processor.

It is very difficult to surprise the modern consumer of electronics. We are already accustomed to the fact that our pocket is legally occupied by a smartphone, a laptop is in the bag, a smart watch is dutifully counting steps on our hand, and headphones with an active noise cancellation system caress our ears.

A funny thing: we are used to carrying not one but two, three or more computers at once, for that is what you can call any device that has a CPU. It does not matter at all what the particular device looks like. A miniature chip is responsible for its work, and that chip has come through a stormy and rapid path of development.

Why did we bring up the topic of processors? It's simple. Over the past ten years, there has been a real revolution in the world of mobile devices.

Only 10 years separate these devices. But the Nokia N95 seemed to us then a device from space, while today we look at ARKit with a certain distrust.

Yet everything could have turned out differently, and a shabby Pentium 4 could have remained the ultimate dream of the ordinary buyer.

We have tried to avoid complex technical terms, to explain how the processor works, and to figure out which architecture is the future.

1. How it all began

The first processors looked absolutely nothing like what you see when you open the lid of your PC's system unit.

Instead of microcircuits, the 1940s used electromechanical relays supplemented with vacuum tubes. The tubes played the role of a diode whose state could be regulated by lowering or raising the voltage in the circuit.

A single gigantic computer needed hundreds, sometimes thousands, of such switching elements to run. Yet on a computer like that you could not run even a simple editor like NotePad or TextEdit from the standard Windows or macOS set: it simply would not have enough power.

2. The emergence of transistors

The first field-effect transistors were patented back in 1928. But the world changed only after the appearance of the bipolar transistor, invented in 1947.

In the late 1940s, experimental physicist Walter Brattain and theorist John Bardeen developed the first point-contact transistor. In 1950 it was followed by the first junction transistor, and in 1954 the well-known manufacturer Texas Instruments announced a silicon transistor.

But the real revolution came in 1959, when the scientist Jean Hoerni developed the first silicon planar (flat) transistor, which became the basis for monolithic integrated circuits.

Yes, this is all a little involved, so let's dig a bit deeper and deal with the theory.

3. How the transistor works

So, the task of an electrical component such as the transistor is to control current. Simply put, this slightly tricky switch controls the flow of electricity.

The main advantage of a transistor over a conventional switch is that it does not require a human presence. That is, such an element can control the current on its own, and it does so much faster than you could ever switch an electrical circuit on or off yourself.

From a school computer science course you probably remember that a computer "understands" human language through combinations of just two states: "on" and "off". To the machine, these states are "0" and "1".

The task of the computer is to represent the electric current in the form of numbers.

And where the task of switching states was once performed by clumsy, cumbersome and inefficient electrical relays, the transistor has taken over this routine work.

Since the beginning of the 60s, transistors have been made of silicon, which made it possible not only to make processors more compact, but also to significantly increase their reliability.

But first, let's deal with the diode

Silicon (Si, "silicium" in the periodic table) belongs to the semiconductors: on the one hand it conducts current better than a dielectric, on the other it does so worse than a metal.

Like it or not, to understand how processors work and how they developed further, we will have to dive into the structure of a single silicon atom. Don't be afraid; we will keep it short and very clear.

The task of the transistor is to amplify a weak signal through an additional power source.

The silicon atom has four valence electrons, through which it forms bonds (covalent bonds, to be precise) with four neighboring atoms, creating a crystal lattice. While most of the electrons are held in these bonds, a small fraction of them can move through the crystal lattice. It is because of this partial movement of electrons that silicon is classified as a semiconductor.

But such weak electron movement would not allow the transistor to be used in practice, so scientists decided to boost transistor performance by doping: put simply, by adding atoms of elements with a suitable arrangement of electrons to the silicon crystal lattice.

Thus a 5-valent phosphorus impurity came into use, yielding n-type silicon. The extra electron sped up charge movement, increasing the current that passes through.

In p-type doping, boron, which has three valence electrons, became the additive. Because one electron is missing, holes appear in the crystal lattice (they play the role of a positive charge); but since electrons can fill these holes, the conductivity of silicon increases significantly.

Suppose we take a silicon wafer and dope one part of it with a p-type dopant and the other with an n-type dopant. This is how we get a diode, the basic element of the transistor.

Now the electrons in the n-part tend to move toward the holes in the p-part. This leaves the n-side near the junction with a slight positive charge and the p-side with a negative one. The electric field formed by this "gravitation", a potential barrier, prevents further movement of electrons.

If a power supply is connected to the diode so that "-" touches the p-side of the wafer and "+" the n-side, no current can flow: the holes are attracted to the negative contact of the supply and the electrons to the positive one, and the carriers are pulled away from the junction as the depletion layer widens.

But if you connect a supply with sufficient voltage the other way around, i.e. "+" from the source to the p-side and "-" to the n-side, the electrons on the n-side are repelled by the negative pole and pushed toward the p-side, occupying holes in the p-region.

But now the electrons are attracted to the positive pole of the supply, and they keep moving through the p-holes. This mode of operation was named the forward-biased diode.

Diode + diode = transistor

A transistor by itself can be represented as two diodes joined back to back. Their p-region (the one where the holes are) becomes shared and is called the "base".

An N-P-N transistor has two n-regions with extra electrons, known as the "emitter" and the "collector", and one weak region with holes, the p-region, called the "base".

If we connect a power supply (call it V1) to the transistor's n-regions (with either polarity), one of the diodes will be reverse biased and the transistor will be closed.

But as soon as we connect another power source (call it V2), with the "+" contact on the "central" p-region (the base) and the "-" contact on an n-region (the emitter), part of the electrons will flow through the newly formed circuit (V2), and part will be drawn in by the positive n-region. As a result, electrons flow into the collector region, and the weak electric current is amplified.

Exhale!

4. So how does a computer work?

And now the most important thing.

Depending on the applied voltage, the transistor can be either open or closed. If the voltage is insufficient to overcome the potential barrier (the same one at the junction of the p and n regions), the transistor stays closed: it is in the "off" state or, in the language of the binary system, "0".

With enough voltage, the transistor opens, and we get the "on" state, or "1" in the binary system.

This state, 0 or 1, is called a "bit" in the computer industry.

That is, we get the main property of the very switch that opened humanity's way to computers!

The first electronic digital computer, ENIAC, or more simply the first computer, used about 18 thousand triode tubes. It was roughly the size of a tennis court and weighed 30 tons.

There are two more key points to grasp in order to understand how a processor works.

Point 1. So, we have decided what a bit is. But by itself it gives only two answers about something: "yes" or "no". So that the computer could understand us better, a combination of 8 bits (each 0 or 1) was invented, nicknamed the byte.

A byte can encode a number from zero to 255, which gives 256 combinations of zeros and ones, and with such combinations you can encode anything.
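This is easy to verify in any language. A short Python check of what one byte can hold (the mapping of 65 to "A" comes from the standard ASCII table):

```python
# One byte is 8 bits and can hold 256 values (0 through 255); text is
# just an agreed mapping from those values to characters, e.g. ASCII.
n = 0b01000001             # a byte written out bit by bit
print(n)                   # 65
print(chr(n))              # A -- ASCII maps 65 to the letter "A"
print(format(200, "08b"))  # 11001000, the bit pattern of 200
print(2 ** 8)              # 256 possible values per byte
```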

Point 2. Numbers and letters without any logic would give us nothing. That is why the concept of logical operators appeared.

By connecting just two transistors in a certain way, you can perform several logical operations at once: "and", "or". The combination of the voltage on each transistor and the way they are connected yields different combinations of zeros and ones.
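The same idea can be modeled in code, treating a transistor as a voltage-controlled switch: two switches in series give "and", two in parallel give "or". This is only a sketch of the principle; real gates are built from complementary transistor pairs, not Python functions:

```python
# The "and"/"or" behavior described above, with a transistor modeled
# as a switch that conducts (1) only when its base voltage is high.

def transistor(base):
    return 1 if base else 0   # conducts only with voltage on the base

def AND(a, b):
    return transistor(a) and transistor(b)   # two switches in series

def OR(a, b):
    return transistor(a) or transistor(b)    # two switches in parallel

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
```

The printed truth table shows how different input voltages on just two transistors yield the different combinations of zeros and ones the text describes.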

Through the efforts of programmers, the zeros and ones of the binary system are converted into decimal so that we can understand what exactly the computer is "saying". And to enter commands, our usual actions, such as typing letters on the keyboard, are represented as a binary chain of commands.

Simply put, imagine a correspondence table, say ASCII, in which each letter corresponds to a combination of 0s and 1s. You press a key on the keyboard, and at that moment the transistors in the processor, driven by the program, switch in such a way that the very letter printed on the key appears on the screen.

This is a rather primitive explanation of how a processor and a computer work, but it is this understanding that allows us to move on.

5. And the transistor race began

After the British radio engineer Geoffrey Dummer proposed in 1952 placing the simplest electronic components in a monolithic semiconductor crystal, the computer industry took a leap forward.

From the integrated circuits that Dummer proposed, engineers quickly moved to microchips built on transistors. In turn, several such chips together formed the CPU itself.

Of course, those processors bore little resemblance to modern ones. In addition, until 1964 all processors shared one problem: they demanded an individual approach, a separate programming language for each processor.

  • 1964. IBM System/360, a computer compatible with a universal program code: the instruction set of one processor model could be used on another.
  • The 1970s. The first microprocessors appear, starting with Intel's single-chip Intel 4004: 10-micron process, 2,300 transistors, 740 kHz.
  • 1973. Intel 4040 and Intel 8008: 3,000 transistors at 740 kHz for the Intel 4040 and 3,500 transistors at 500 kHz for the Intel 8008.
  • 1974. Intel 8080: 6-micron process and 6,000 transistors, with a clock frequency of about 2 MHz. This processor was used in the Altair-8800 computer. The Soviet copy of the Intel 8080, the KR580VM80A, was developed by the Kiev Research Institute of Microdevices. 8-bit.
  • 1976. Intel 8085: 3-micron process and 6,500 transistors, with a 6 MHz clock frequency. 8-bit.
  • 1976. Zilog Z80: 3-micron process and 8,500 transistors, with a clock frequency of up to 8 MHz. 8-bit.
  • 1978. Intel 8086: 3-micron process and 29,000 transistors, with a clock frequency of 5 to 10 MHz. The x86 instruction set introduced here is still in use today. 16-bit.
  • 1982. Intel 80186: 3-micron process and about 55,000 transistors, with a clock frequency of up to 25 MHz. 16-bit.
  • 1982. Intel 80286: 1.5-micron process and 134,000 transistors, with a frequency of up to 12.5 MHz. 16-bit.
  • 1979. Motorola 68000: 3-micron process and roughly 68,000 transistors. This processor was later used in the Apple Lisa computer.
  • 1985. Intel 80386: 1.5-micron process and 275,000 transistors, with frequencies up to 33 MHz in the 386SX version.

It would seem that the list could be continued indefinitely, but here Intel engineers faced a serious problem.

6. Moore's Law or how chipmakers live on

This was the end of the 1980s. Back in 1965, Gordon Moore, who would go on to co-found Intel, formulated the so-called "Moore's Law". It sounds like this:

Every 24 months, the number of transistors placed on an integrated circuit chip doubles.

It is difficult to call this a law; it would more accurately be christened an empirical observation. Comparing the pace of technological development, Moore concluded that such a trend might take shape.
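The observation is easy to check with back-of-the-envelope arithmetic. Starting from the Intel 4004's 2,300 transistors in 1971 and doubling every two years (a simplifying assumption; the real cadence varied) lands remarkably close to the i486's roughly 1.2 million transistors by 1989:

```python
# Project Moore's observation forward from the Intel 4004 (1971).
transistors = 2300
year = 1971
while year < 1989:
    transistors *= 2   # one doubling every 24 months
    year += 2
print(year, transistors)  # 1989 1177600
```

Nine doublings turn 2,300 into about 1.18 million, the same order of magnitude the industry actually reached with the i486.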

But already while developing the fourth generation of Intel i486 processors, engineers found that they had reached a performance ceiling and could no longer fit more transistors into the same area. The technology of the time did not allow it.

As a solution, a number of additional elements were brought in:

  • cache memory;
  • pipeline;
  • built-in coprocessor;
  • clock multiplier.

Part of the computational load fell on the shoulders of these four units. As a result, the appearance of cache memory, on the one hand, complicated the design of the processor; on the other, it made it much more powerful.

The Intel i486 processor already consisted of 1.2 million transistors, and its maximum operating frequency reached 50 MHz.

In 1995, AMD joined the race and released the fastest i486-compatible processor of the time, the Am5x86, on a 32-bit architecture. It was manufactured on a 350-nanometer process, and the number of transistors reached 1.6 million. The clock frequency rose to 133 MHz.

But chip makers did not dare to keep chasing ever higher transistor counts within the increasingly unwieldy CISC (Complex Instruction Set Computing) architecture. Instead, the American engineer David Patterson proposed optimizing processor operation by keeping only the most necessary computational instructions.

So processor manufacturers switched to the RISC (Reduced Instruction Set Computing) platform, but even that was not enough.

In 1991 the 64-bit R4000 processor was released, operating at 100 MHz. Three years later the R8000 appeared, and two years after that the R10000, with clock frequencies of up to 195 MHz. In parallel, the market for SPARC processors developed, an architecture whose distinctive feature was the absence of multiply and divide instructions.
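As an aside on that last point: a processor without multiply instructions still multiplies, just in software, using only shifts and adds. A minimal Python illustration of the shift-and-add idea (not actual SPARC code):

```python
# Multiplication built from shifts and adds only, the way early RISC
# designs without a hardware multiplier did it in software.

def multiply(a, b):
    result = 0
    while b:
        if b & 1:        # lowest bit of b set: add the shifted a
            result += a
        a <<= 1          # shift left = multiply a by 2
        b >>= 1          # shift right = move to the next bit of b
    return result

print(multiply(13, 11))  # 143
```

Each loop iteration handles one bit of the multiplier, so the whole product costs only a handful of simple single-cycle operations.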

Instead of fighting over transistor counts, chip makers began to rethink their architecture. Dropping "unnecessary" instructions, executing instructions in a single clock cycle, general-purpose registers and pipelining made it possible to raise the clock frequency and power of processors quickly without inflating the number of transistors.

Here are just a few of the architectures that emerged from 1980 to 1995:

  • SPARC;
  • ARM;
  • PowerPC;
  • Intel P5;
  • AMD K5;
  • Intel P6.

They were based on the RISC platform, in some cases partially combined with the CISC platform. But advancing technology again pushed chip makers to keep scaling processors up.

In August 1999, the AMD K7 Athlon entered the market, manufactured on a 250-nanometer process and containing 22 million transistors. The bar was later raised to 38 million transistors, then to 250 million.

Process technologies improved and clock frequencies grew. But, as physics tells us, everything has its limit.

7. The end of the transistor competition is near

In 2007, Gordon Moore made a very blunt statement:

Moore's Law will soon cease to hold. Installing an ever greater number of transistors cannot go on indefinitely: the reason is the atomic nature of matter.

It is noticeable to the naked eye that the two leading chip manufacturers, AMD and Intel, have clearly slowed the pace of processor development in recent years. The precision of the manufacturing process has reached just a few nanometers, and squeezing in more transistors is becoming impossible.

And while semiconductor manufacturers promise multilayer transistors, drawing a parallel with 3D NAND memory, the x86 architecture faces a serious rival that appeared 30 years ago.

8. What awaits "regular" processors

Moore's Law has been considered void since 2016; this was officially acknowledged by the largest processor manufacturer, Intel. Chipmakers are no longer able to double computing power every two years.

So processor manufacturers are left with a few unpromising options.

The first option is quantum computers. There have already been attempts to build a computer that uses particles to represent information. Several such quantum devices exist in the world, but they can cope only with algorithms of low complexity.

Moreover, mass production of such devices in the coming decades is out of the question. Expensive, inefficient and ... slow!

Yes, quantum computers promise to consume far less energy than their modern counterparts, but they will also run slower until developers and component manufacturers fully move to the new technology.

The second option is processors with layers of transistors. Intel and AMD have thought seriously about this technology. Instead of one layer of transistors, they plan to use several. It seems that in the coming years processors may well appear in which not only the number of cores and the clock frequency matter, but also the number of transistor layers.

The solution has a right to exist, and the monopolists would thus be able to milk the consumer for another couple of decades, but in the end the technology will again hit a ceiling.

Today, recognizing the rapid development of the ARM architecture, Intel has quietly announced its Ice Lake chips. The processors will be manufactured on a 10-nanometer process and will become the basis for smartphones, tablets and mobile devices. But that will happen in 2019.

9. ARM is the future

So, the x86 architecture appeared in 1978 and belongs to the CISC type of platform. That is, it carries instructions for every occasion. Versatility is the main strength of x86.

But at the same time, that versatility played a cruel joke on these processors. x86 has several key disadvantages:

  • complex and frankly convoluted instructions;
  • high energy consumption and heat generation.

High performance came at the cost of energy efficiency. Moreover, only two companies currently work on the x86 architecture, and they can safely be classified as monopolists: Intel and AMD. Only they can produce x86 processors, which means that only they steer the development of the technology.

At the same time, several companies are engaged in the development of ARM (Arcon Risk Machine). Back in 1985, the developers chose the RISC platform as the basis for further architecture development.

Unlike CISC, RISC designs a processor around the minimum necessary set of instructions, each heavily optimized. RISC processors are much smaller than their CISC counterparts, more energy efficient and simpler.
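To make the contrast concrete, here is a toy sketch (a hypothetical illustration, not real machine code of any actual ISA): a CISC-style instruction may read memory, compute and write memory in a single operation, while a RISC machine expresses the same work as separate load, add and store instructions that only touch registers in between.

```python
# Toy model of one CISC-style memory-to-memory add vs. the equivalent
# RISC-style load/add/store sequence. All names here are illustrative.

memory = {0x10: 5}
regs = {"r1": 7, "r2": 0}

# CISC style: one complex instruction touches memory directly.
def cisc_add_mem(addr, reg):
    memory[addr] = memory[addr] + regs[reg]   # read-modify-write in one op

# RISC style: only loads and stores access memory; arithmetic uses registers.
def risc_load(reg, addr):
    regs[reg] = memory[addr]

def risc_add(dst, a, b):
    regs[dst] = regs[a] + regs[b]

def risc_store(reg, addr):
    memory[addr] = regs[reg]

cisc_add_mem(0x10, "r1")       # one instruction: memory[0x10] = 5 + 7 = 12

risc_load("r2", 0x10)          # three simple instructions do the same kind
risc_add("r2", "r2", "r1")     # of work: r2 = 12 + 7 = 19
risc_store("r2", 0x10)         # memory[0x10] = 19
```

The RISC version issues more instructions, but each one is simple enough to decode and execute quickly with little hardware, which is exactly where the size and power savings come from.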

Moreover, ARM was conceived from the start as a competitor to x86: its developers set out to build a more efficient architecture.

Since the 1940s, engineers have understood that shrinking computers, and above all their processors, is a priority. But almost 80 years ago hardly anyone could have imagined a full-fledged computer smaller than a matchbox.

Apple backed the ARM architecture early on, building its Newton handhelds around processors from the ARM6 family.

Desktop computer sales are plummeting, while mobile devices now sell in the billions annually. Beyond raw performance, buyers of an electronic gadget usually care about two more criteria:

  • mobility;
  • autonomy.

The x86 architecture is strong on performance, but take away active cooling and a powerful x86 processor looks feeble next to an ARM design.

10. Why ARM is the undisputed leader

It will hardly surprise you that your smartphone, whether a basic Android handset or Apple's 2016 flagship, is dozens of times more powerful than a full-fledged computer of the late 1990s.

But how much more powerful is the same iPhone?

Comparing two different architectures directly is tricky; any measurement here is only approximate. Still, the numbers convey the enormous advantage that ARM-based smartphone processors have gained.

A convenient yardstick here is the synthetic Geekbench benchmark, which runs on desktop computers as well as on Android and iOS.

Mid-range and entry-level laptops clearly trail the iPhone 7 in this test. The high-end segment is a closer contest, but in 2017 Apple released the iPhone X on the new A11 Bionic chip.

The A11 is, as you already know, an ARM design, yet its Geekbench scores nearly doubled. Even the top tier of laptops had reason to worry.

But only one year has passed.

ARM is advancing by leaps and bounds. While Intel and AMD deliver 5-10% performance gains year after year, smartphone makers manage to increase the power of their processors two to two and a half times over the same period.

For skeptics scanning the top lines of the Geekbench charts, one reminder: in mobile technology, size matters above all.

Put a powerful 18-core all-in-one PC that "rips the ARM architecture to shreds" on your desk, then place an iPhone next to it. Feel the difference?

11. In place of a conclusion

It is impossible to cover 80 years of computing history in a single article. But having read this one, you should understand how the central component of any computer, the processor, works, and what to expect from the market in the coming years.

Of course, Intel and AMD will keep working to increase the number of transistors on a single die and to promote the idea of multilayer designs.

But do you need that kind of power as a customer?

You are unlikely to be dissatisfied with the performance of an iPad Pro or a flagship iPhone X, and I doubt you complain about the multicooker in your kitchen or the picture quality of a 65-inch 4K TV. Yet all of these devices run on ARM-based processors.

Microsoft has officially announced that it is looking at ARM with interest. Windows has included support for the architecture since Windows 8.1, and the company is now actively partnering with the leading ARM chip maker Qualcomm.

Google has also turned toward ARM: its Chrome OS supports the architecture, and several Linux distributions compatible with it have appeared as well. And this is just the beginning.

Just imagine for a moment how pleasant it would be to pair an energy-efficient ARM processor with a graphene battery. This is the architecture that can deliver the ergonomic mobile gadgets that will define the future.
