Memorism Processors

Glossary

This glossary explains in detail the terminology used in our descriptions of Memorism Processors.


Please note that these explanations are given in the context of Memorism Processors and do not necessarily convey the precise general meaning of each term.


von Neumann-type computers

The computers that we use in our daily lives. This type of computer was invented out of military needs, such as computing artillery ballistics more quickly and accurately. They are called “von Neumann-type computers” in homage to the Hungarian-American mathematician John von Neumann, who designed the prototype of modern computers based on the computational model demonstrated by the English mathematician Alan Turing (the Turing machine).


CPU (Central Processing Unit)

The unit (device) that forms the core of information processing and is used in all information processing, not only arithmetic operations. Due to the so-called Neumann bus bottleneck, explained later, there are many kinds of processing that CPUs struggle with. Current CPUs rely on multiple cores to speed up information processing and on cache memory to optimize it. However, with miniaturization technology approaching its limit, it is expected to become increasingly difficult to improve CPU performance.
It should be noted that it is wasteful to use the CPU, which is built from expensive arithmetic elements, or the GPU (explained below) for information detection tasks such as the search and collation of data (information).


GPU (Graphics Processing Unit)

Originally, the GPU was developed to perform image processing at high speed, but in recent years it has been widely used as a device that performs arithmetic operations and other essential AI-related processing. Arithmetic operations are sped up by installing thousands of arithmetic units on a single chip, but the GPU's high power consumption is an issue, and many researchers are looking for ways to make it more energy efficient.


Memory

Semiconductor devices that store data (information). There are two types of stored data: instruction data given to the CPU, and user data such as databases, images, and voice. The three main types of memory used today are DRAM, SRAM, and flash memory, but there is strong demand for faster, non-volatile memories, and various new types of memory are currently being researched. Memorism Processors can be implemented in DRAM, SRAM, and flash memory, and they can also be embedded in new types of memory, as long as it is semiconductor memory.


Neumann bus bottleneck

A generic term for the limited transfer of data between the CPU and memory, caused by the physical separation between the two, which prevents the CPU from operating at its full capacity.


Data or Information detection problem

The processing most affected by the Neumann bus bottleneck is data or information detection processing.
As explained before, the only way to quickly perform processing such as “search, collation, recognition, authentication, classification, and sorting” is to create an index in advance or to process the information using complex algorithms. However, these indexes and algorithms require pre-processing and updates, which not only prevents real-time operation but also degrades the performance and power efficiency of the whole system. Building and maintaining them is also highly specialized work, so they are difficult to regard as a permanent solution. The bus bottleneck described above has been recognized as a problem since shortly after the development of today's computers, yet it has been neglected, with no effective solution to date. This is one sign that computers today are used in a way that evades the problem rather than solving it.
Because this problem is so significant and occurs so frequently, our company calls it the “data detection problem” or “information detection problem” of von Neumann-type computers.


Complex algorithms for general data detection

To detect specific data at high speed, it is necessary to use complex algorithms such as B-trees and inverted indexes. Search performance improves drastically with these indexes, but they must be updated whenever the underlying data changes. This update is often a very demanding task that lowers the performance of the system.
Data search processing that uses indexes also requires tuning to perform well, which is highly specialized work.
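As a rough illustration of why index maintenance is costly, here is a minimal inverted-index sketch in Python (the class and method names are illustrative, not taken from any particular product): lookups become fast once the index exists, but every insertion or deletion of base data forces the index to be updated.

```python
from collections import defaultdict

# Minimal inverted index: maps each word to the set of document IDs
# that contain it. Illustrative only; real systems add tokenization,
# ranking, compression, and concurrent update handling.
class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id, text):
        # Pre-processing: every insert must also update the index.
        for word in text.lower().split():
            self.postings[word].add(doc_id)

    def remove(self, doc_id, text):
        # Deletes are just as costly: each word's posting set is touched.
        for word in text.lower().split():
            self.postings[word].discard(doc_id)

    def search(self, word):
        # Lookup is fast only because the pre-processing was done.
        return self.postings.get(word.lower(), set())

index = InvertedIndex()
index.add(1, "memory bus bottleneck")
index.add(2, "memory chip design")
print(index.search("memory"))  # {1, 2}
```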


Complex algorithms for image data detection

A typical algorithm used to detect faces in images is the Haar (Haar-like feature) algorithm. For general image recognition, algorithms such as HOG and SIFT, which focus on local features, are used. All these algorithms are complex and highly specialized.
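For a concrete sense of what Haar-based face detection looks like in practice, here is a minimal sketch using OpenCV's pre-trained Haar cascade (the image file name is a placeholder assumed for this example):

```python
import cv2

# Load OpenCV's pre-trained Haar cascade for frontal faces.
# cv2.data.haarcascades points at the cascade files shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")  # placeholder input path for this sketch
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides Haar feature windows over the image at many
# positions and scales; this exhaustive scanning is exactly the kind of
# detection workload that burdens a conventional CPU.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"{len(faces)} face(s) detected")
```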


Machine Learning

Normal information processing is executed by a program giving instructions to the CPU, but for highly complex information processing, such as recognition tasks for which no explicit program can be written, the recognition function is acquired by having the system learn from large amounts of data. There are various methods depending on the type and purpose of the objects to be recognized, and optimizing those methods for practical use involves highly specialized, repeated trial and error.
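As a minimal illustration of “learning from data instead of writing a program,” the sketch below trains a small classifier with scikit-learn; the dataset and model choice are illustrative assumptions, not a recommendation.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# No explicit recognition rules are programmed: the classifier
# acquires its behavior from labeled example data.
X, y = load_digits(return_X_y=True)  # 8x8 handwritten digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)                      # the "learning" step
print("accuracy:", model.score(X_test, y_test))  # the recognition step
```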


Deep Learning

One of the machine learning methods based on neural networks. The most remarkable characteristic of deep learning is that it acquires features automatically, which helps it recognize objects more accurately than conventional machine learning. One issue with deep learning is its heavy computational load; as a result, a processor with large arithmetic power, such as a GPU, is required. Moreover, there are various deep learning methods, and they are still far from standardized.
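The sketch below gives a minimal sense of “automatic feature acquisition,” using PyTorch (the layer sizes are arbitrary illustrative choices): the convolution filters that extract features are learned parameters, not hand-designed detectors like Haar or SIFT.

```python
import torch
import torch.nn as nn

# Tiny convolutional network: the Conv2d filters play the role that
# hand-crafted features (Haar, HOG, SIFT) play in classical pipelines,
# but their weights are learned from data rather than designed by hand.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # learned feature extractor
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                 # learned classifier
)

x = torch.randn(1, 1, 28, 28)   # dummy 28x28 grayscale image
logits = model(x)
print(logits.shape)             # torch.Size([1, 10])
```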


Memorism Processors

A processor designed to take over, from the CPU and GPU, the information processing tasks that von Neumann-type computers struggle with. Typical examples include “processing involving the detection of data (information),” which is indispensable for future information societies based on big data, IoT, and AI.


In (Near) Memory Computing (IMC/NMC) and Process in Memory (PIM)

A generic term for semiconductor chips in which an arithmetic function is embedded within or near the memory chip to resolve the Neumann bus bottleneck. Now that semiconductor miniaturization technology is approaching its limit, it has become more difficult to enhance the performance of CPUs and GPUs, which has boosted research on these chips. Memorism Processors are pioneers in this field.


Heterogeneous Computing

Processing devices that specialize in one part of information processing are called heterogeneous cores (cores of a type different from the CPU), regardless of whether the processing is parallelized. This term refers to computing that uses such heterogeneous cores.
Since Memorism Processors comprise various types of cores, it is appropriate to call Memorism Processor-based computing “Hyper Heterogeneous Computing.”


Non-von Neumann-type Computer

A generic term for computers other than von Neumann-type computers, such as quantum computers.


Competent assistant or Savior of von Neumann-type Computer

All computers other than CPU- and memory-based von Neumann-type computers tend to be called “non-von Neumann-type computers,” but since Memorism Processors take over the processing tasks that von Neumann-type computers struggle with, “competent assistant” or “savior of the von Neumann-type computer” is a more appropriate term.


Amdahl’s Law

Amdahl’s law is a general principle of information processing that models the speedup that can be expected from parallelizing a workload.
Memorism Processors focus on accelerating processing that cannot be parallelized or that is inefficient when parallelized.
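In its common form, with p the fraction of a workload that can be parallelized and n the number of processors, the expected speedup is:

```latex
S(n) = \frac{1}{(1 - p) + \frac{p}{n}},
\qquad
\lim_{n \to \infty} S(n) = \frac{1}{1 - p}
```

For example, even if 95% of a workload is parallelizable (p = 0.95), the speedup is capped at 1/0.05 = 20x no matter how many processors are added, which is why accelerating the remaining hard-to-parallelize portion matters.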
