Technical Report PHD-2015-14

Title: Coding for New Applications in Storage Media
Authors: Artyom Sharov
Supervisor: Ronny Roth
Abstract: Conventional magnetic recording media are composed of basic magnetizable units called grains, whose size and shape may be random. Information is stored on such media through a write mechanism that sets the magnetic polarities of the grains. Each grain can be magnetized to one of two possible polarities; thus, each grain represents at most one bit of information. If the boundaries of the grains were known to the write and readback apparatuses, then, theoretically, it would be possible to attain a storage capacity of one bit per grain.

However, even if the write and readback apparatuses were aware of the shapes and locations of the specific grains forming the medium, in practice it would still be impossible to attain the capacity of one bit per grain, since existing technology is incapable of setting the magnetic polarity of a region as small as a single grain. With the current state of technology, the magnetic medium must first be partitioned into a lattice of equally shaped cells; writing is then done on a per-cell basis. Since a cell is typically larger than a grain, writing a bit into a cell amounts to uniformly magnetizing all the grains inside the cell (grains straddling the boundary between cells can be neglected), resulting in a recording rate of only one bit per cell.

Not long ago, Wood, Williams, Kavcic and Miles proposed an approach that enables magnetizing areas as small as individual grains, implying the feasibility of sharing the same grain between adjacent cells and thereby effectively creating a different type of medium, in which the polarity of a grain is determined by the last bit written into any cell sharing that grain. To simplify the original two-dimensional problem, the new medium can be modeled as a one-dimensional array of cells, over which a granular structure is imposed by grouping adjacent cells into grains of arbitrary lengths. Reading information from the cells is considered reliable, whereas writing into a cell overwrites the contents of all the cells belonging to the same grain, thereby introducing substitution errors. This combinatorial error model was considered by Mazumdar, Barg and Kashyap, who called codes correcting such errors grain-correcting codes. A variation of that model, which allows the substitution errors to overlap, arises in shingled writing on bit-patterned media.
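The one-dimensional combinatorial model above can be illustrated with a short sketch. The function name, the representation of grains as index intervals, and the assumption that cells are written left to right (so the last-written cell of a grain determines the polarity of the whole grain) are illustrative choices, not part of the original report:

```python
def grain_channel_output(x, grains):
    """Illustrative sketch of the one-dimensional grain-error model.

    x      -- list of bits, one per cell, written left to right
    grains -- partition of the cell indices into intervals [start, end)

    Within each grain, every cell ends up holding the value of the
    last cell written into that grain, which may introduce
    substitution errors in the earlier cells of the grain.
    """
    y = list(x)
    for start, end in grains:
        for i in range(start, end):
            y[i] = x[end - 1]  # last cell written in the grain wins
    return y


# Four cells, two grains of length 2: the codeword 1010 is read
# back as 0000, i.e., two substitution errors occur.
print(grain_channel_output([1, 0, 1, 0], [(0, 2), (2, 4)]))
```

A grain of length 1 never causes an error, which is why only grains spanning two or more cells matter in this model.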

A different way of modeling the same medium is probabilistic. Instead of introducing the granular structure, we can assume that errors occur, with a prescribed probability distribution, only in cells whose value differs from the value of the previously written cell. This characterization introduces a one-dimensional channel, that is, a ``black box'' with inputs and outputs such that an output can differ from the corresponding input due to adversarial distortions of the medium, encapsulated within the box. We refer to a channel which describes (in this probabilistic manner) the behavior of granular media with overlaps as a (one-dimensional) generalized Ising channel. Assuming additionally the existence of supplementary hardware that reads the recorded sequence of bits and feeds them back to the encoder, we define generalized Ising channels with feedback for every possible value of the channel error probability.
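The probabilistic model can likewise be sketched in a few lines. This is a minimal illustration under the assumption that a cell in error takes the value of the previously written cell (consistent with the granular interpretation above); the function name and the single flip probability p are hypothetical simplifications of the report's "prescribed probability distribution":

```python
import random


def generalized_ising_channel(x, p, rng=random):
    """Illustrative sketch of the generalized Ising channel model.

    A cell whose input bit differs from the previously written bit is
    hit by an error with probability p; all other cells pass through
    unchanged.  An erred cell takes the previous cell's value.
    """
    y = []
    prev = None
    for bit in x:
        if prev is not None and bit != prev and rng.random() < p:
            y.append(prev)  # error: cell reverts to the previous value
        else:
            y.append(bit)
        prev = bit
    return y


# With p = 0 the channel is noiseless; with p = 1 every transition
# cell is overwritten by its predecessor's value.
print(generalized_ising_channel([0, 1, 1, 0], 0.0))
print(generalized_ising_channel([0, 1, 1, 0], 1.0))
```

Note that runs of identical bits are always transmitted reliably; only the cells at transitions of the input sequence are at risk, which is the defining feature of this channel family.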

In this work, we present improved lower and upper bounds on the size and the asymptotic growth rate of grain-correcting codes, with and without the assumption of overlapping error patterns. We also give interesting explicit constructions of grain-correcting codes that correct a small or a large number of errors. We then turn to the problem of detecting grain errors and provide lower and upper bounds on the minimum redundancy of codes detecting any number of grain errors. Finally, we move to the probabilistic setting and give tight bounds on the capacity of generalized Ising channels. For Ising channels with feedback corresponding to a certain range of values of the channel error probability, we establish a closed-form expression for the channel capacity; for channels corresponding to the remaining values of the error probability, we find tight lower bounds on the capacity.

Copyright: The above paper is copyrighted by the Technion, the author(s), or others. Please contact the author(s) for more information.

Remark: Any link to this technical report should be to this page, rather than to the URLs of the PDF files directly. The latter URLs may change without notice.


Computer Science Department, Technion