Muhammad Agbaria, M.Sc. Seminar Lecture
Wednesday, 22.5.2019, 09:00
Instead of running full cycle-accurate simulations, recent virtual memory studies frequently use linear models to predict application runtimes from TLB misses. The latter methodology is simpler and much faster than the former, but its accuracy has never been validated. In fact, previous studies could not even measure their prediction errors, because they defined their linear models using at most two empirical data points: the application runtimes when the address space is backed with either 4KB or 2MB pages.
We test the validity of this methodology by developing Mosalloc, a new user-level memory allocator that allows us to generate multiple data points by systematically varying the number and placement of the regular pages and hugepages that back the address space of applications. We find:
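The abstract does not show Mosalloc's interface, so the following is only a minimal Python sketch of the mechanism such an allocator builds on: requesting 2MB hugepage backing for an anonymous region via mmap's Linux-specific MAP_HUGETLB flag, and falling back to regular 4KB pages when no hugepages are reserved. All names here are illustrative assumptions, not Mosalloc's actual API.

```python
import mmap

SIZE_2MB = 2 * 1024 * 1024

# mmap.MAP_HUGETLB exists only on Linux (and only in newer Python
# versions); fall back to the x86-64 Linux constant otherwise.
MAP_HUGETLB = getattr(mmap, "MAP_HUGETLB", 0x40000)

def map_region(size, use_hugepages):
    """Map an anonymous region, backed by 2MB hugepages if requested,
    falling back to regular 4KB pages when no hugepages are available."""
    flags = mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS
    if use_hugepages:
        try:
            return mmap.mmap(-1, size, flags=flags | MAP_HUGETLB), "2MB"
        except OSError:
            pass  # no hugepages reserved on this system; fall back
    return mmap.mmap(-1, size, flags=flags), "4KB"

buf, backing = map_region(SIZE_2MB, use_hugepages=True)
buf[0] = 1  # touch the region so the kernel actually faults it in
print(len(buf), backing)
```

An allocator in the spirit of Mosalloc would carve the application's heap out of a set of such regions, choosing per-region page sizes according to a configured layout, rather than exposing this choice to the application code.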
(1) that real runtimes can deviate from linear predictions by as much as 1.7x, which means that existing runtime models are inadequate;
(2) that, in contrast, polynomial predictors of degree at most three (made possible by Mosalloc) are flexible enough to model runtimes accurately;
(3) that TLB misses can increase the runtime by more than just the page table walk cycles they cause, whereas existing models assume a slope of at most 1; and
(4) that backing an application's address space with more hugepages can, counterintuitively, yield worse performance than backing it with fewer hugepages.
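To illustrate why two endpoint measurements cannot validate a linear model, here is a small self-contained sketch with synthetic, hypothetical numbers (not measurements from this work): a line fitted through only the all-hugepage and all-4KB endpoints, versus a degree-3 polynomial fitted through four configurations of the kind Mosalloc can generate.

```python
def poly_eval(xs, ys, x):
    """Evaluate the unique polynomial of degree <= len(xs)-1 passing
    through the points (xs[i], ys[i]), via Lagrange interpolation."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Synthetic data (illustrative only): x is the fraction of the address
# space backed by 4KB pages (0.0 = all 2MB hugepages), y is normalized
# runtime. Real behavior between the endpoints need not be linear.
xs = [0.0, 0.33, 0.67, 1.0]
ys = [1.00, 1.45, 1.30, 1.60]

def linear_from_endpoints(x):
    """The prior methodology: a line through only the two endpoints."""
    return ys[0] + (ys[-1] - ys[0]) * x

x = 0.33
print(f"measured runtime: {ys[1]:.3f}")
print(f"linear predictor: {linear_from_endpoints(x):.3f}")
print(f"cubic predictor:  {poly_eval(xs, ys, x):.3f}")
```

With these made-up numbers the endpoint line underpredicts the intermediate configuration by roughly 1.2x while the cubic reproduces it, mirroring the kind of deviation described above. A real study would presumably fit a least-squares polynomial to more points than its degree; exact interpolation is used here only to keep the sketch dependency-free.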