Foundations and Trends® in
Theoretical Computer Science
Vol. 2, No. 2 (2006) 107–195
© 2006 V. Guruswami
DOI:
Algorithmic Results in List Decoding
Venkatesan Guruswami
Department of Computer Science & Engineering, University of Washington
Seattle WA 98195, USA, ******@
Abstract
Error-correcting codes are used to cope with the corruption of data
by noise during communication or storage. A code uses an encoding
procedure that judiciously introduces redundancy into the data to pro-
duce an associated codeword. The redundancy built into the codewords
enables one to decode the original data even from a somewhat distorted
version of the codeword. The central trade-off in coding theory is the
one between the data rate (amount of non-redundant information per
bit of codeword) and the error rate (the fraction of symbols that could
be corrupted while still enabling data recovery). The traditional decod-
ing algorithms did as badly at correcting any error pattern as they
would do for the worst possible error pattern. This severely limited the
maximum fraction of errors those algorithms could tolerate. In turn,
this was the source of a big hiatus between the error-correction per-
formance known for probabilistic noise models (pioneered by Shannon)
and what was thought to be the limit for the more powerful, worst-case
noise models (suggested by Hamming).
In the last decade or so, there has been much algorithmic progress
in coding theory that has bridged this gap (and in fact nearly elimi-
nated it for codes over large alphabets). These developments rely on
an error-recovery model called “list decoding,” wherein for the patho-
logical error patterns, the decoder is permitted to output a small list of
candidates that will include the original message. This book introduces
and motivates the problem of list decoding, and discusses the central
algorithmic results of the subject, culminating with the recent results
on achieving "list decoding capacity."
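As a toy illustration of the list-decoding model described above (not an algorithm from this book), the sketch below simply brute-forces the list: given a received word and an error radius, it outputs every codeword of a small, hypothetical binary code within that Hamming distance. The code, received word, and radius are made-up parameters chosen only for illustration; the efficient list decoders discussed in later chapters avoid enumerating the code.

# Illustrative sketch only: brute-force list decoding over a toy binary code.
# All parameters below (the code, the received word, the radius) are
# hypothetical and chosen just to show the model.

def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def list_decode(code, received, radius):
    """Return every codeword within `radius` of `received` (the output list)."""
    return [c for c in code if hamming_distance(c, received) <= radius]

if __name__ == "__main__":
    # Toy code with minimum distance 3, so unique decoding handles only 1 error.
    code = ["000000", "111111", "010101", "101010"]
    received = "110100"  # a heavily corrupted transmission
    # With radius 3 (beyond half the minimum distance), several codewords may
    # be consistent with the received word; the decoder outputs them all, and
    # the original message is guaranteed to appear on the list whenever at
    # most `radius` symbols were corrupted.
    print(list_decode(code, received, radius=3))

On this toy input the decoder returns three candidates rather than failing outright, which is exactly the relaxation that lets list decoding correct error fractions well beyond the unique-decoding barrier.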