A research team has developed a new technique that could increase the memory capacity of computers and mobile electronics, freeing them up to perform more tasks and run faster.
Researchers from the Massachusetts Institute of Technology (MIT) have devised a new method called Zippads to compress data structures called objects across the memory hierarchy, reducing memory usage while improving performance and efficiency.
“The motivation was trying to come up with a new memory hierarchy that could do object-based compression, instead of cache-line compression, because that’s how most modern programming languages manage data,” first author Po-An Tsai, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL), said in a statement.
The new technique builds on a previously developed system dubbed Hotpads, which stores entire objects in tightly packed hierarchical levels called pads. These reside entirely in efficient, on-chip, directly addressed memories, with no memory search required. Programs can directly reference the location of all objects across the hierarchy of pads.
Newly allocated and recently referenced objects stay in the faster pads. When a level fills, the system runs an eviction process that kicks older objects down to slower levels and recycles objects that are no longer useful.
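The eviction behavior described above can be sketched in software. This is a toy model, not the Hotpads hardware: pad capacities, the use of recency order, and the `PadHierarchy` name are all assumptions made for illustration.

```python
# Toy software sketch (NOT the Hotpads hardware) of a pad hierarchy:
# new objects enter the fastest pad; when a pad fills, its oldest
# objects are kicked down to the next, slower pad.
from collections import OrderedDict

class PadHierarchy:
    def __init__(self, capacities):
        # One ordered dict per level; insertion/access order tracks recency.
        self.levels = [OrderedDict() for _ in capacities]
        self.capacities = capacities

    def allocate(self, obj_id, obj):
        # New objects always land in the fastest pad (level 0).
        self._insert(0, obj_id, obj)

    def access(self, obj_id):
        # Touching an object keeps it "recent" within its level.
        for level in self.levels:
            if obj_id in level:
                level.move_to_end(obj_id)
                return level[obj_id]
        raise KeyError(obj_id)

    def _insert(self, i, obj_id, obj):
        level = self.levels[i]
        level[obj_id] = obj
        if len(level) > self.capacities[i]:
            # Evict the oldest object down to the next, slower level.
            victim_id, victim = level.popitem(last=False)
            if i + 1 < len(self.levels):
                self._insert(i + 1, victim_id, victim)
            # else: dropped entirely (dead objects get recycled)

pads = PadHierarchy([2, 4])  # a small fast pad over a larger slow pad
for n in range(4):
    pads.allocate(n, f"obj{n}")
# The two newest objects stay in the fast pad; older ones were evicted.
assert list(pads.levels[0]) == [2, 3]
assert list(pads.levels[1]) == [0, 1]
```

The recursive `_insert` is what makes an eviction from one level cascade into the next: a full slow pad evicts in turn, mirroring the "kick down" behavior the article describes.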
Zippads leverages the Hotpads architecture to compress objects. Objects start out uncompressed in the fastest level and are compressed as they are evicted to slower levels. Pointers in all objects across levels then point to those compressed objects, which makes them easy to recall and lets them be stored more compactly.
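A minimal sketch of that compress-on-eviction idea, assuming just two levels and using `zlib` as a stand-in for the hardware compressor; the `allocate`/`evict`/`load` names and the pointer table are illustrative, not Zippads' actual interface:

```python
# Hypothetical two-level sketch: objects live uncompressed in the fast
# level and are compressed (here with zlib, standing in for hardware
# compression) only when evicted to the slow level. A pointer table
# records where each object currently lives.
import zlib

fast, slow = {}, {}
pointers = {}  # obj_id -> which level holds the object

def allocate(obj_id, data: bytes):
    fast[obj_id] = data          # fast level holds raw bytes
    pointers[obj_id] = "fast"

def evict(obj_id):
    # Compression happens only on the way down the hierarchy.
    slow[obj_id] = zlib.compress(fast.pop(obj_id))
    pointers[obj_id] = "slow"

def load(obj_id) -> bytes:
    if pointers[obj_id] == "fast":
        return fast[obj_id]
    # Recalled objects are transparently decompressed.
    return zlib.decompress(slow[obj_id])

allocate(1, b"x" * 1000)
evict(1)
assert len(slow[1]) < 1000       # stored more compactly in the slow level
assert load(1) == b"x" * 1000    # identical object when recalled
```

The key property the sketch preserves is that callers only follow the pointer and call `load`; whether the object is currently compressed is invisible to them.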
The researchers also created a compression algorithm that efficiently exploits redundancy across objects to uncover more compression opportunities. The algorithm first picks a few representative objects as "bases"; the system then stores only the data that differs between each new object and its base object.
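The base-plus-differences idea can be shown with a short sketch. This is a simplification under stated assumptions (same-length objects, byte-level diffs); the researchers' actual algorithm operates in hardware and chooses bases more cleverly.

```python
# Illustrative base-delta compression: a representative "base" object
# is stored once, and each similar object is stored only as the byte
# positions where it differs from that base.

def encode(obj: bytes, base: bytes) -> list:
    # Keep (index, byte) pairs for differing positions only.
    # Assumes obj and base have the same length.
    return [(i, b) for i, (a, b) in enumerate(zip(base, obj)) if a != b]

def decode(delta: list, base: bytes) -> bytes:
    # Rebuild the object by patching the base with the stored diffs.
    out = bytearray(base)
    for i, b in delta:
        out[i] = b
    return bytes(out)

base = b"point(x=10, y=20)"
obj  = b"point(x=13, y=20)"
delta = encode(obj, base)
assert delta == [(9, ord("3"))]     # only one byte differs from the base
assert decode(delta, base) == obj   # the object is fully reconstructed
```

Because similar objects share most of their bytes with a base, the delta is usually far smaller than the object itself, which is where the extra compression comes from.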
The new approach could ultimately benefit programmers in any modern programming language that stores and manages data in objects, such as Java, Python, and Go, without requiring changes to their code. Consumers would also benefit from faster computers that can run more applications at the same speeds. Each app would consume less memory while running faster, letting users work in multiple apps simultaneously.
“All computer systems would benefit from this,” co-author Daniel Sanchez, a professor of computer science and electrical engineering, and a researcher at CSAIL, said in a statement. “Programs become faster because they stop being bottlenecked by memory bandwidth.”
For computer systems, data compression improves performance by reducing the frequency and volume of data that programs need to retrieve from main memory.
Memory in modern computers manages and transfers data in fixed-size chunks, and traditional compression techniques must operate on those chunks. But software uses data structures that contain various types of data and have variable sizes, so traditional hardware compression techniques handle them poorly.
The researchers tested the new technique on a modified Java virtual machine and found that it compressed twice as much data and reduced memory usage by half compared with traditional cache-based methods.
The new technology was presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems in Providence from April 13-17.