Artificial neurons a million times faster than ours


Producing artificial neurons that are more efficient than human neurons is not a new idea, but researchers at MIT have just taken it to a whole new level: they claim to have built an artificial neural network capable of working a million times faster than its biological counterpart. This feat was achieved using an "analog" neural network.

But what is the point of creating artificial neurons? To understand this, we must return to the notion of a "neural network". Following the Federation for Brain Research's definition, neurons can be seen as the "basic working unit" of the brain. They are specialized cells that transmit information to other nerve cells, depending on their area of specialization. A neuron is generally made up of:

- a dendrite, which receives a nerve signal;
- a soma, the cell body, which decodes it;
- an axon, which transmits it.

These neurons are connected to one another by synapses, which link an axon to a dendrite. Neurons communicate using electrical signals called "action potentials", which trigger the release of neurotransmitters. The latter are "chemical messengers" whose job is to cross the synapses and transmit information. Together, this forms a natural neural network.

An artificial neural network belongs to the field known as "artificial intelligence". It is a system that is "fed" a large amount of data in order to "learn" and extract logical connections toward a certain goal. These learning methods are inspired by the functioning of biological neurons, which is why we speak of an "artificial neural network".

A learning system inspired by biological neurons

In practice, the data circulates in an artificial "grid" of neurons, usually virtual: points on the network connected by computer code (the synapses, in a way). The network thus receives input information (the training data) and produces output information.
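The artificial "neuron" just described can be sketched in a few lines of Python: it sums its weighted inputs (the role of the synapses) and passes the result through an activation function (the cell body's "decision"). The weights and inputs below are hypothetical values, purely for illustration.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs (the synapses),
    passed through a sigmoid activation (the cell body's decision)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # output between 0 and 1

# Hypothetical inputs and weights, for illustration only.
print(artificial_neuron([0.5, 0.2], [0.8, -0.4], 0.1))
```

Stacking many such units in layers, with the output of one layer feeding the next, gives the "grid" of points described above.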
In both cases, we find a phenomenon of "learning" that involves data processing. In our biological brain, the connections between neurons (the synapses) are strengthened or weakened by experience and learning. In an artificial neural network, the principle is somewhat similar: the links between the network's points are weighted according to the processing of a large amount of data. That is why we speak of deep learning.

The novelty the scientists present here is a neural network that performs these calculations very quickly and with low energy requirements. To achieve this, they explain, they did not rely on a digital neural network, but on an analog one. Hence a detour through the difference between analog and digital.

Digital and analog

Analog and digital are two different processes, both of which allow data (an audio recording, an image, a video...) to be transported and stored. The analog system dates back to the beginnings of electricity, whereas digital appeared with the computer. In an analog system, the basic principle is to reproduce the signal to be recorded in a similar form. Analog television, for example, worked on this principle: the image to be retransmitted was converted into electrical signals, called "video signals", characterized by their frequency, that is, the number of oscillations per second. These electrical signals were relayed by an electromagnetic wave made to follow the same amplitudes as the original signal. The transmitted signal is therefore a kind of "reproduction" of the original.

In digital, the signal to be recorded is converted into a sequence of 0s and 1s. The amplitudes are no longer reproduced, but encoded, then decoded on arrival. This is what changed with the switch to digital TV. In digital, we therefore obtain a signal with two amplitudes, instead of the infinity of amplitudes in analog. Until now, artificial neural networks have worked mainly on the digital principle.
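The "two amplitudes instead of an infinity" idea can be shown with a toy sketch: take a continuously varying analog signal (here a sampled sine wave, a made-up example) and map every amplitude onto just two levels, 0 and 1.

```python
import math

def digitize(samples):
    """Digital encoding in its simplest form: map each analog amplitude
    onto one of two discrete levels (a stream of 0s and 1s)."""
    return [1 if s >= 0 else 0 for s in samples]

# A toy "analog" signal: one period of a sine wave, sampled 8 times.
analog = [math.sin(2 * math.pi * i / 8) for i in range(8)]
digital = digitize(analog)
print(digital)  # only two amplitudes remain, instead of a continuum
```

Real digital encoding quantizes into many more levels (and adds error correction), but the principle, replacing a continuous amplitude with discrete symbols, is the same.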
The weights of such a network are programmed using learning algorithms, and the calculations are carried out as sequences of 0s and 1s. It is by applying an analog system instead that the MIT scientists have managed to create, according to them, a neural network much faster and more efficient than the human brain. A million times faster, to be exact.

In an analog deep learning system, it is not the transmission of data in the form of 0s and 1s that comes into play, but "the increase and decrease of the electrical conductance of protonic resistors" that enables machine learning, we can read in the MIT press release. Conductance is defined as the ability to let current flow (the inverse of resistance). "Conductance is controlled by the movement of protons. To increase the conductance, more protons are pushed into a channel in the resistor, while to decrease the conductance, protons are removed. This is accomplished using an electrolyte (similar to that of a battery) that conducts protons but blocks electrons."

Electrical resistance is a physical property of a material that limits the flow of electrical current in a circuit; a component with this property is therefore used to limit the passage of electrons. In the present case, it is a key element, since it is what regulates the movement of protons.

Strong resistance to electrical impulses

Why does this process allow the neural network to work faster? "First, the computation is performed in memory, so large loads of data are not transferred from memory to a processor," the scientists explain. "Analog processors also perform operations in parallel. If the size of the matrix increases, an analog processor does not need more time to perform new operations, because all calculations occur simultaneously." The speeds achieved are thus measured in nanoseconds.
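The "computation in memory, all at once" point can be made concrete with a sketch of a resistor crossbar, the usual layout for this kind of analog processor. Store each weight as a conductance G; apply the input vector as voltages V across the rows; by Ohm's law and Kirchhoff's current law, each column's output current is the sum of G·V products, so an entire matrix-vector multiply happens in one physical step. The code below only simulates those sums, with made-up conductance and voltage values.

```python
def crossbar_multiply(conductances, voltages):
    """Simulated analog in-memory compute: each output current is
    sum(G * V) over a column of the crossbar (Ohm's law + Kirchhoff's
    current law). In hardware all these multiply-accumulates happen
    simultaneously in the physics; here we loop explicitly."""
    return [sum(g * v for g, v in zip(column, voltages))
            for column in conductances]

# Hypothetical conductances (the network's weights) and input voltages.
G = [[0.1, 0.3], [0.2, 0.4]]   # two columns, two resistors each
V = [1.0, 0.5]
print(crossbar_multiply(G, V))  # one output current per column
```

This is why matrix size barely matters for the analog processor: adding rows or columns adds resistors working in parallel, not extra sequential steps.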
This was possible in part because the scientists used a specific material: inorganic phosphosilicate glass (PSG), similar to the material found in desiccant bags. PSG is a very good proton conductor, because its many nanometer-scale pores allow protons to pass through, and at the same time it can withstand strong, pulsed electric voltages. This robustness was essential, according to the scientists, since it is what allows them to apply higher voltages and therefore reach such high speeds.

"The action potential in biological cells rises and falls on a timescale of milliseconds, because the voltage difference of about 0.1 volt is limited by the stability of water," explains senior author Ju Li, Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of Materials Science and Engineering. "Here we apply up to 10 volts across a special nanometer-thick solid glass film that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices."

The scientists now hope to redesign this system to make it suitable for high-volume manufacturing. They have high hopes for this advance: "Once you have an analog processor, you will no longer be training the networks everyone else works on, but networks of unprecedented complexity that no one else can afford, surpassing everything that was previously possible. In other words, it's not a faster car, it's a spaceship," adds Murat Onen, lead author and postdoctoral fellow at MIT.

Source: Science

