Locally decodable codes are error-correcting codes that admit efficient decoding 
algorithms: They give a method to encode k-bit messages into n-bit codewords 
such that, even after a constant fraction of the bits of the codeword get 
corrupted, any bit of the original message can be recovered by looking at only 
q(k) bits of the corrupted codeword. The tradeoff between the rate of a code 
(i.e., the ratio k/n) and the locality/efficiency (the function q(k)) of its 
decoding algorithms has been studied extensively in the past decade. However, 
most prior work has focused on codes with extremely small q (e.g., constant 
functions), and the resulting constructions suffer from extremely poor rate 
(k/n goes to zero; in fact, n grows nearly exponentially in k). Indeed, it was 
widely believed that such behavior is inherent: Known codes with locality 
k^{1/c} had rate 2^{-O(c)}, and no code with non-trivial locality (q = o(k)) 
and rate > 1/2 (the practical regime) was known. 
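
To make the constant-query regime concrete, here is a minimal sketch (an illustration, not part of the talk) of the classical Hadamard code, which achieves locality q = 2 but has codeword length n = 2^k, exactly the exponential blowup described above. The parameters k = 8 and the 5% corruption rate are illustrative choices:

```python
import random

def hadamard_encode(msg_bits):
    """Encode a k-bit message as the parities <m, x> mod 2 for all x in {0,1}^k."""
    k = len(msg_bits)
    return [
        sum(m & ((x >> i) & 1) for i, m in enumerate(msg_bits)) % 2
        for x in range(2 ** k)
    ]

def local_decode_bit(word, k, i):
    """Recover message bit i with 2 queries: <m, r> + <m, r XOR e_i> = m_i (mod 2)."""
    r = random.randrange(2 ** k)
    return (word[r] + word[r ^ (1 << i)]) % 2

k = 8
msg = [random.randint(0, 1) for _ in range(k)]
word = hadamard_encode(msg)                  # n = 2^k = 256; rate k/n -> 0
for _ in range(len(word) // 20):             # corrupt roughly 5% of positions
    word[random.randrange(len(word))] ^= 1
decoded = [local_decode_bit(word, k, i) for i in range(k)]
print(msg)
print(decoded)   # each bit is correct with probability >= 1 - 2*(corruption rate)
```

Each decoded bit reads only 2 positions of the corrupted codeword, and both queries are individually uniform, so the answer is correct unless one of the two queried positions was corrupted.
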
In this talk we overcome the rate-1/2 barrier and give a new class of codes with 
very high rates (arbitrarily close to 1) and strong local decoding properties 
(q(k) = k^{epsilon} for arbitrarily small epsilon), thereby giving new 
performance tradeoffs between the rate and locality of decoding. These codes, 
which we call multiplicity codes, are based on evaluating high-degree 
multivariate polynomials and their derivatives. Multiplicity codes extend 
traditional multivariate-polynomial-based codes; they inherit the 
local decodability of these codes while achieving better tradeoffs and greater 
flexibility between rate and decodability. These codes give 
respectable parameters even in concrete settings (not just asymptotically), 
giving hope that local decoding techniques may have practical implications. 
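
As a rough, hedged sketch of the encoding idea (the construction in the talk is multivariate, which is what yields the k^{epsilon} locality, and uses Hasse derivatives; the field size p = 101 and degree d = 150 here are illustrative choices), the simplest univariate, multiplicity-2 variant evaluates the derivative alongside the polynomial, which lets the degree exceed the field size and pushes the rate past 1/2:

```python
import random

# The point of multiplicities: the degree bound d may exceed p - 1, which a
# plain evaluation (Reed-Solomon style) code over F_p cannot allow.
p = 101          # a prime, so Z_p is a field
d = 150          # message length in field symbols; d < 2*p keeps nonzero distance

def poly_eval(coeffs, a, p):
    """Horner evaluation of a polynomial (coefficients low to high) mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * a + c) % p
    return acc

def mult_encode(coeffs, p):
    """Evaluate f and its derivative f' at every point of F_p.

    Each position carries the pair (f(a), f'(a)): d message symbols become
    2p codeword symbols, so the rate is d/(2p), which exceeds 1/2 once d > p.
    """
    deriv = [(i * c) % p for i, c in enumerate(coeffs)][1:]  # formal derivative
    return [(poly_eval(coeffs, a, p), poly_eval(deriv, a, p)) for a in range(p)]

msg = [random.randrange(p) for _ in range(d)]      # message = coefficients of f
codeword = mult_encode(msg, p)                     # 2p field symbols
print(f"rate = {d}/{2 * p} = {d / (2 * p):.2f}")   # ~0.74, past the 1/2 barrier
```

Distance survives because a nonzero polynomial of degree below d can vanish with multiplicity 2 at fewer than d/2 < p points, so distinct messages still yield codewords differing in a constant fraction of positions.
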
Based on joint work with Swastik Kopparty (IAS) and Sergey Yekhanin (MSR, 
Silicon Valley).