Samsung Electronics said on Wednesday that it has developed high-bandwidth memory (HBM) that integrates artificial intelligence processing capabilities. The new processing-in-memory (PIM) architecture adds an AI engine to Samsung's HBM2 Aquabolt, which was first launched in 2018.
Samsung claims that the chip, called HBM-PIM, roughly doubles AI system performance and reduces power consumption by more than 70% compared with conventional HBM2.
The South Korean tech giant explained that this is possible because placing an AI engine inside each memory bank maximizes parallel processing while minimizing data movement between the memory and the processor.
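The data-movement argument can be illustrated with a toy model. The sketch below is purely conceptual and not Samsung's design: all names and numbers (`NUM_BANKS`, `WORDS_PER_BANK`, the trivial "AI op") are illustrative assumptions. It contrasts a conventional path, where every word travels from memory to the host processor and back, with a PIM-style path, where each bank transforms its own data in place.

```python
# Conceptual toy model (not Samsung's design): compares how much data a host
# must move for an element-wise AI operation when compute sits on the host
# versus when each memory bank applies the operation locally (PIM-style).
# NUM_BANKS and WORDS_PER_BANK are illustrative assumptions.

NUM_BANKS = 16          # hypothetical number of memory banks
WORDS_PER_BANK = 1024   # hypothetical words stored per bank


def host_side_compute():
    """Conventional path: every word is read to the host, then written back."""
    banks = [[1] * WORDS_PER_BANK for _ in range(NUM_BANKS)]
    words_moved = 0
    for bank in banks:
        for i, word in enumerate(bank):
            words_moved += 2     # one read to the host, one write back
            bank[i] = word * 2   # the "AI op": a trivial scaling
    return banks, words_moved


def pim_style_compute():
    """PIM path: each bank's engine transforms its own data in place."""
    banks = [[1] * WORDS_PER_BANK for _ in range(NUM_BANKS)]
    words_moved = 0              # no bulk transfer to the host
    for bank in banks:           # in hardware, banks would run in parallel
        for i, word in enumerate(bank):
            bank[i] = word * 2
    return banks, words_moved


if __name__ == "__main__":
    host_banks, host_traffic = host_side_compute()
    pim_banks, pim_traffic = pim_style_compute()
    assert host_banks == pim_banks   # identical results...
    print(host_traffic, pim_traffic) # ...with far less host traffic
```

In this toy model both paths compute the same result, but the PIM-style path moves no words across the memory interface, which is the intuition behind the claimed power and performance gains.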
With this performance gain, Samsung said it expects the new chip to accelerate large-scale processing in data centers, high-performance computing systems, and AI-enabled mobile applications.
Samsung added that HBM-PIM uses the same HBM interface as earlier versions, meaning customers can integrate the chip into existing systems without any hardware or software changes.
The company’s paper on the chip will be presented next week at the International Solid-State Circuits Conference (ISSCC), held virtually this year.
The chip is currently being tested inside a customer's AI accelerator, with testing scheduled to be completed in the first half of this year. Samsung is also working with customers to build an ecosystem around the technology and standardize the platform.