Conventional von Neumann architectures struggle to handle data-intensive artificial intelligence tasks efficiently because of the huge amount of data movement between physically separated computing and storage units. Computing-in-memory (CIM) architectures perform data processing and storage in the same place and can therefore be far more energy-efficient than state-of-the-art von Neumann designs. Compared with counterparts built on conventional memory technologies, resistive random-access memory (RRAM)-based CIM systems can process the same amount of data with much lower power consumption and a smaller silicon footprint, which makes RRAM very attractive for both in-memory and neuromorphic computing applications.

In machine learning, convolutional neural networks (CNNs) are now widely used for artificial intelligence applications thanks to their strong performance. Nevertheless, for many tasks they require large amounts of data, can be computationally expensive and time-consuming to train, and suffer from well-known issues such as overfitting, exploding gradients, and class imbalance.

Among alternative brain-inspired computing paradigms, high-dimensional computing (HDC), based on random distributed representations, offers a promising approach to learning tasks. Unlike conventional computing, HDC computes with (pseudo-)random D-dimensional hypervectors. This brings significant advantages: a simple algorithm built on a well-defined set of arithmetic operations, and fast, single-pass learning that can exploit a memory-centric architecture, which is highly energy-efficient and fast thanks to its high degree of parallelism.
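To make the "well-defined set of arithmetic operations" concrete, here is a minimal sketch of the core HDC primitives the paragraph alludes to. It is not from this posting: it assumes a bipolar {-1, +1} representation and NumPy, and the dimensionality D, seed, and function names are all illustrative choices.

```python
import numpy as np

D = 10_000                      # hypervector dimensionality (illustrative choice)
rng = np.random.default_rng(42)  # fixed seed for reproducibility

def random_hv():
    """Draw a pseudo-random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: element-wise multiplication; the result is dissimilar to both inputs."""
    return a * b

def bundle(hvs):
    """Bundling: element-wise majority (sign of the sum); the result stays similar to every input."""
    return np.sign(np.sum(hvs, axis=0)).astype(int)

def similarity(a, b):
    """Normalized dot product (cosine similarity for bipolar hypervectors)."""
    return np.dot(a, b) / D

# Single-pass "learning": a class prototype is simply the bundle of its training examples.
item_a, item_b, item_c = random_hv(), random_hv(), random_hv()
prototype = bundle([item_a, item_b, item_c])

print(similarity(prototype, item_a))       # high (~0.5): item_a is a member of the bundle
print(similarity(prototype, random_hv()))  # near 0: an unrelated random hypervector
```

Because a prototype is just an accumulated bundle, training touches each example once, and every operation is an element-wise, embarrassingly parallel computation over the D components. That structure is what lets HDC map naturally onto a memory-centric, highly parallel substrate such as an RRAM-based CIM array.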