By J. Doe

Booster for AI calculations

Automated and autonomous driving functions are impossible to implement without AI. The required computing capacity is provided by chips specialized in parallel computing. Researchers are also working on new, biologically inspired solutions as well as on quantum computers that promise even greater computing capacity.

Because the required performance cannot be achieved with conventional chips, graphics processors, tensor processing units (TPUs), and other hardware designed specifically for the calculations of neural networks are coming into their own.

The reason lies in the calculations that typically occur when training, and running inference with, neural networks. “The matrix multiplications in neural networks are very elaborate,” explains Dr. Markus Götz of the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology (KIT).
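
To make that concrete, here is a minimal sketch in Python with NumPy of a single dense layer; the layer and batch sizes are invented for illustration, not taken from any particular network:

```python
import numpy as np

# One dense layer of a neural network: a single matrix multiplication.
batch = np.random.rand(32, 784).astype(np.float32)     # 32 samples, 784 features each
weights = np.random.rand(784, 256).astype(np.float32)  # layer mapping 784 -> 256 units
bias = np.zeros(256, dtype=np.float32)

# (32 x 784) @ (784 x 256) needs about 32 * 784 * 256 * 2, roughly 12.8 million
# floating-point operations -- and this repeats for every layer and every batch,
# which is exactly the workload parallel AI hardware is built for.
activations = np.maximum(batch @ weights + bias, 0.0)  # matmul, bias, ReLU
print(activations.shape)  # (32, 256)
```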

Google chip for AI applications

Google is another newcomer in the chip business: since 2015, the technology company has been using self-developed TPUs in its data centers. Fittingly, Google’s widely used software library for artificial intelligence is called TensorFlow, and the chips are optimized for it. In 2018, Google presented the third generation of its TPUs, which contain four “matrix multiplication units” and are said to be capable of 90 TFLOPS (tera floating-point operations per second). The Google subsidiary Waymo uses TPUs to train neural networks for autonomous driving.
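
As a small illustration using TensorFlow’s public API, the snippet below performs the kind of tensor multiplication those matrix multiplication units accelerate; the tensor shapes are arbitrary:

```python
import tensorflow as tf

# Two arbitrary tensors; on TPU hardware this multiplication is dispatched
# to the matrix multiplication units mentioned above.
a = tf.random.normal([128, 256])
b = tf.random.normal([256, 64])

c = tf.matmul(a, b)   # the core tensor operation TPUs are built to accelerate
print(c.shape)        # (128, 64)
```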

Application-specific chips like Tesla’s FSD or Google’s TPUs only become economical in large production volumes. Field-programmable gate arrays (FPGAs), by contrast, can easily be adapted to the specific requirements of an AI application (for instance, particular data types), which yields benefits in performance and energy consumption.
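
One common example of such a specified data type is reduced precision. The sketch below shows a generic symmetric int8 quantization in Python with NumPy; it is a textbook illustration, not the scheme of any particular FPGA design:

```python
import numpy as np

# Hypothetical float32 weight matrix of a network layer.
w = np.random.randn(784, 256).astype(np.float32)

# Simple symmetric quantization: map the value range onto signed 8-bit ints.
scale = np.abs(w).max() / 127.0
w_int8 = np.round(w / scale).astype(np.int8)

print(w.nbytes)       # 802816 bytes as float32
print(w_int8.nbytes)  # 200704 bytes as int8 -- a quarter of the memory traffic
w_restored = w_int8.astype(np.float32) * scale
print(np.abs(w - w_restored).max())  # small quantization error
```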

Energy-efficient neuro-chips

Neuromorphic chips offer high computing capacity: approximately a quadrillion computing operations per second (1,000 TOPS) per module with 200,000 neurons. “In digital circuits, for example, there are some 10,000 transistors used for each operation,” explains Johannes Schemmel of Heidelberg University. “We get by with substantially fewer, which enables us to achieve roughly 100 TOPS per watt.” At that efficiency, a full 1,000-TOPS module would draw on the order of just ten watts. The researchers have just developed the second generation of their circuits and are talking to industry partners about possible collaborations.

Quantum power from the cloud

In the future, even quantum computers could be used in the field of AI. However, quantum computers are difficult to build because qubits are represented by sensitive physical systems such as electrons, photons, and ions. The interior of a quantum computer must be meticulously shielded against vibrations, electrical fields, and temperature fluctuations.
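
What makes a qubit special can be sketched in a few lines of Python with NumPy: its state is a pair of complex amplitudes that a gate can place into superposition. This simulates only the mathematics; the fragile physical systems described above are what real machines must protect:

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)        # the basis state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ zero           # equal superposition of |0> and |1>
print(np.abs(state) ** 2)  # measurement probabilities: [0.5 0.5]
```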

Nerve cells and artificial neurons

Nerve cells receive their signals from other neurons via synapses, which are located either on the dendrites or directly on the cell body. All inputs are totaled at the axon hillock, and if a threshold is exceeded, the nerve cell fires a signal roughly a millisecond long that propagates along the axon and reaches other neurons.
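
This mechanism can be sketched with a leaky integrate-and-fire model, a standard simplification (and not the circuit of any particular neuromorphic chip); inputs accumulate on a membrane potential until the threshold is crossed:

```python
import numpy as np

threshold = 1.0   # firing threshold at the axon hillock
leak = 0.95       # the membrane potential decays between inputs
potential = 0.0

inputs = np.random.rand(50) * 0.2   # synaptic input per time step

for t, x in enumerate(inputs):
    potential = potential * leak + x    # total the incoming signals
    if potential >= threshold:
        print(f"spike at step {t}")     # the ~1 ms signal along the axon
        potential = 0.0                 # reset after firing
```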

The input of an artificial neuron consists of the outputs of the neurons in the preceding layer, multiplied by weighting factors w_i, in which the learning experience of the neural network is stored. These weighting factors correspond to the synapses and can likewise be excitatory or inhibitory. A configurable threshold value determines, as in a nerve cell, when the artificial neuron fires.
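
A minimal sketch of such an artificial neuron in Python with NumPy, with the inputs, weights, and threshold invented for illustration:

```python
import numpy as np

x = np.array([0.8, 0.1, 0.9])   # outputs of the preceding layer
w = np.array([0.5, -0.7, 0.6])  # weights: the stored learning experience
threshold = 0.5                 # configurable firing threshold

activation = np.dot(x, w)        # weighted sum of the inputs
fires = activation >= threshold  # the neuron "fires" above the threshold
print(activation, fires)         # 0.87 True
```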

Learning and inference with neural networks

Natural and artificial neural networks learn through changes in the strength of synaptic connections and in the weighting factors, respectively. When training a deep neural network, data is fed to the inputs and the output is compared with a desired result. Using mathematical methods, the weighting factors w_ij are continually readjusted until the neural network can, for example, reliably place images in specified categories. During inference, data is fed to the input and the output is used, for example, to make decisions.
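
As a hedged sketch of this readjustment, the following Python snippet trains a single linear neuron by gradient descent on invented data; deep networks repeat the same idea across many layers via backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 3))              # 100 training samples, 3 inputs each
true_w = np.array([0.5, -0.2, 0.8])   # the relationship to be learned
y = X @ true_w                        # desired outputs

w = np.zeros(3)                       # weighting factors to be learned
for _ in range(500):
    pred = X @ w                      # feed data to the input
    grad = X.T @ (pred - y) / len(y)  # gradient of the mean squared error
    w -= 0.5 * grad                   # readjust the weighting factors

print(w)  # approaches [0.5, -0.2, 0.8]
```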

In both training and inference with deep neural networks (networks with multiple layers of artificial neurons), the same mathematical operations occur over and over again.

In brief

Conventional computer chips reach their limits when it comes to the calculations for neural networks. Graphics processors and special AI hardware developed by companies such as Nvidia and Google are much more powerful. Neuromorphic chips closely resemble real neurons and work very efficiently.

Original article published Feb 13, 2020 at 10:07.
