Non-volatile memories (NVMs) have become increasingly common in modern storage systems, offering the advantage of data persistence during power outages. However, several NVM technologies, such as flash and phase-change memory, support only a limited number of data writes, after which memory cells begin to wear out and can no longer reliably store data. Although these limitations have been extensively studied in the systems community, they remain surprisingly unexplored in theoretical models, revealing an unusually wide gap between theory and practice.
To bridge this gap, we define and study several optimization problems that capture the fundamental constraints of real-world memory systems. Specifically, this talk focuses on the algorithmic challenges arising in SSD management. Building on an interval-based view of the input, we introduce two complementary approaches for optimizing SSD performance. We then show how these approaches can be applied in a learning-augmented online setting, where only partial (and possibly erroneous) information about the future is available.
For this setting, we present two algorithms with tight worst-case guarantees and demonstrate their practical effectiveness through empirical evaluation. Together, our results provide both theoretical and practical insights, and offer new perspectives at the intersection of memory management and online algorithms.