Today, due to faster processor speeds, parallel architectures, and especially multi-core processing, on-chip memory performance requirements are skyrocketing, and SoC architects need ever faster memories. However, embedded memories can no longer be clocked as fast as the processors and other logic on the same chip. The resulting performance bottlenecks now pose one of the biggest challenges to new SoC product designs.
SoC architects want faster memories along with a wider variety of memory configurations, including multiple read and write ports that allow parallel accesses from multiple processor cores.
A new technology, algorithmic memories, breathes new life into embedded memory.
Algorithmic memories use algorithms synthesized in hardware to increase the performance of embedded memory macros by up to ten times. The technology is implemented in soft RTL, requires no change to existing memory interfaces or ASIC design flows, and is both process node and foundry independent. Algorithmic memories open the door for system architects to rapidly and reliably create customized memory solutions optimized for specific applications: in essence, making memory performance a configurable entity.
Figure 1. Algorithmic Memory Technology
Algorithmic Memory cores are created by adding logic to existing embedded memory macros enabling them to operate much more efficiently. Within the memories, sophisticated algorithms intelligently read, write, and manage data in parallel using a variety of techniques such as buffering, virtualization, pipelining, and data encoding. These techniques are woven together and operate seamlessly to create a new memory capable of processing an order of magnitude more Memory Operations Per Second (MOPS).
The increased memory performance is made available to the system through additional memory ports, so that many more requests can be processed in parallel within the same clock cycle. The resulting solutions appear exactly as standard multi-port embedded memories.
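One of the techniques named above, replication, can be illustrated with a behavioral model. This is a hypothetical sketch, not the vendor's actual RTL: every write is mirrored to two single-port banks kept in lockstep, and each read port is served by its own copy, so two reads complete in the same cycle without a bank conflict.

```python
# Hypothetical behavioral sketch (class names and cycle model are illustrative):
# a 2-read-port memory built from two replicated single-port banks.

class SinglePortBank:
    """Models a physical single-port SRAM: one access (read OR write) per cycle."""
    def __init__(self, depth):
        self.cells = [0] * depth
        self.busy = False  # set once the single port is used this cycle

    def access(self, addr, data=None):
        assert not self.busy, "single-port bank: only one access per cycle"
        self.busy = True
        if data is None:
            return self.cells[addr]
        self.cells[addr] = data

    def end_cycle(self):
        self.busy = False


class TwoReadPortMemory:
    """Presents two read ports (or one write) per cycle via replication.

    Writes go to both copies, keeping them identical; each read port
    then draws from its own copy, so the two reads never conflict.
    """
    def __init__(self, depth):
        self.banks = [SinglePortBank(depth), SinglePortBank(depth)]

    def write(self, addr, data):
        for bank in self.banks:           # mirror the write to both copies
            bank.access(addr, data)

    def read2(self, addr_a, addr_b):
        a = self.banks[0].access(addr_a)  # port A -> copy 0
        b = self.banks[1].access(addr_b)  # port B -> copy 1
        return a, b

    def end_cycle(self):
        for bank in self.banks:
            bank.end_cycle()


mem = TwoReadPortMemory(depth=16)
mem.write(3, 42); mem.end_cycle()
mem.write(7, 99); mem.end_cycle()
print(mem.read2(3, 7))   # both reads complete in one cycle -> (42, 99)
```

The cost of this particular technique is area (two physical copies); the other techniques listed above, such as data encoding, trade that cost differently.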
Algorithmic Memories can be drop-in replacements for existing embedded memories and can easily integrate into an ASIC design flow. Furthermore, because Algorithmic Memory technology is implemented in RTL, it works across any process node or any foundry.
Using Algorithmic Memory cores, a comprehensive memory portfolio can be quickly created from just a small number of physical memories. For example, multi-port memories can be generated that process up to 10X the MOPS of the underlying single-port memory.
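As a second illustration, the buffering technique can add a write port on top of read capacity. The sketch below is an assumption-laden model of the general idea, not the product's actual design: addresses are interleaved across two single-port banks (low-bit interleaving is an illustrative choice), reads get priority on their bank, and a write that collides with the read's bank is deferred into a small buffer and drained when its bank is free. Reads check the buffer first so they always see the freshest data.

```python
# Hypothetical sketch of conflict buffering (bank mapping, buffer policy, and
# names are assumptions): a 1-read + 1-write per cycle memory built from two
# single-port banks plus a write buffer that absorbs same-bank conflicts.

class OneReadOneWriteMemory:
    def __init__(self, depth_per_bank):
        self.banks = [[0] * depth_per_bank, [0] * depth_per_bank]
        self.buffer = {}  # addr -> data, writes deferred by a bank conflict

    @staticmethod
    def _bank(addr):
        return addr & 1          # low-bit interleaving (illustrative choice)

    def cycle(self, read_addr=None, write=None):
        """Perform up to one read and one write in a single cycle."""
        used = set()             # banks whose single port is consumed
        if read_addr is not None:
            used.add(self._bank(read_addr))

        if write is not None:
            waddr, wdata = write
            wb = self._bank(waddr)
            if wb in used:
                self.buffer[waddr] = wdata          # conflict: defer the write
            else:
                used.add(wb)
                self.banks[wb][waddr >> 1] = wdata  # no conflict: write through

        # Opportunistically drain one deferred write into a bank that is free.
        for addr in list(self.buffer):
            b = self._bank(addr)
            if b not in used:
                used.add(b)
                self.banks[b][addr >> 1] = self.buffer.pop(addr)
                break

        if read_addr is not None:
            if read_addr in self.buffer:            # forward the freshest data
                return self.buffer[read_addr]
            return self.banks[self._bank(read_addr)][read_addr >> 1]


mem = OneReadOneWriteMemory(depth_per_bank=8)
mem.cycle(write=(2, 11))                     # write 11 to addr 2 (bank 0)
val = mem.cycle(read_addr=2, write=(4, 22))  # same-bank conflict: write deferred
print(val)                                   # -> 11; write to addr 4 is buffered
mem.cycle()                                  # a later cycle drains the buffer
print(mem.cycle(read_addr=4))                # -> 22
```

In a real design the buffer must be sized (or backed by further algorithmic tricks) so it cannot overflow under worst-case access patterns; this sketch leaves it unbounded for clarity.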
Figure 2. Memory Portfolio Built Using Algorithmic Memory
Algorithmic Memory technology can also be used to lower memory area and power consumption without sacrificing performance. There is a significant area and power penalty when a higher performance memory is built using circuit techniques alone. With Algorithmic Memory technology, a lower performance memory—which typically has lower area and power—is combined with memory algorithms to generate a new memory. This Algorithmic Memory achieves the same MOPS as a high performance memory built with circuit techniques alone, but uses significantly less area and power.
Algorithmic Memory meets the memory performance needs of next-generation applications. It is process node and foundry independent, and applies to a variety of SoC implementations such as ASICs, ASSPs, GPPs, and FPGAs. New and customized memories can be designed and generated in a few days and require no further silicon validation. The resulting memories use a standard SRAM/DRAM interface with identical pinout and integrate seamlessly into a standard ASIC design flow, easing the adoption of next-generation process node technology.