The task of dealing with errors (or correcting them) lies at the very heart of communication and computation. The mathematical foundations for this task were laid in two concurrent and independent works by Shannon and Hamming in the late 1940s. The two theories are strikingly powerful and distinct in their modelling of errors. Shannon's theory models errors as effected by a probabilistic/stochastic process, while Hamming envisions them as being introduced by an adversary. While the two theories share many of the underlying tools, their quantitative results diverge sharply. Shannon's theory shows that a channel that corrupts (arbitrarily) close to 50% of the transmitted bits can still be used to transmit information. Hamming's theory, in contrast, has often been interpreted to suggest it can handle at most 25% errors on a binary channel. So what can we do if an adversary is given the power to introduce more than 25% errors? Can we protect information against this, or do we just have to give up? The notion of list-decoding addresses precisely this question, and shows that under a relaxed notion of "decoding" (or recovering from errors), the quantitative gap between the Shannon and Hamming theories can be bridged. In this talk, we will describe this notion and some recent algorithmic developments.
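The gap between the two decoding notions can be seen in a minimal toy example (not from the talk itself): for a binary code of minimum distance d, classical unique decoding succeeds only when fewer than d/2 positions are corrupted, while list-decoding relaxes the output to a short list of candidate codewords and can therefore tolerate more errors. The sketch below uses a length-8 repetition code purely for illustration; all function names are hypothetical.

```python
# Toy illustration of unique vs. list decoding on a length-8 binary
# repetition code with two codewords, so minimum distance d = 8.
codewords = ["00000000", "11111111"]

def hamming(a, b):
    # Number of positions in which the two strings differ.
    return sum(x != y for x, y in zip(a, b))

def unique_decode(received):
    # Classical decoding: succeed only when a single codeword is
    # strictly closest, i.e. when fewer than d/2 = 4 bits were flipped.
    best = min(codewords, key=lambda c: hamming(c, received))
    ties = [c for c in codewords if hamming(c, received) == hamming(best, received)]
    return best if len(ties) == 1 else None

def list_decode(received, radius):
    # Relaxed decoding: return every codeword within the given radius.
    return [c for c in codewords if hamming(c, received) <= radius]

sent = "00000000"
received = "11110000"  # 4 errors: exactly at the unique-decoding limit

print(unique_decode(received))          # None: tie between the two codewords
print(list_decode(received, radius=4))  # both codewords; the list contains the sent word
```

With 4 of 8 bits flipped, unique decoding must give up (both codewords are equally close), yet the list decoder still returns a two-element list that contains the transmitted codeword, which is the sense in which list-decoding "bridges" the gap.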