The knapsack problem is a fundamental problem in combinatorial optimization. It has been studied extensively from both theoretical and practical perspectives, as it is one of the best-known NP-hard problems. The goal is to pack a knapsack of size t with the maximum value from a collection of n items with given sizes and values.

Recent evidence suggests that the classic O(nt) dynamic-programming solution for the knapsack problem might be the fastest in the worst case. In fact, solving the knapsack problem was shown to be computationally equivalent to the (min, +) convolution problem, which is believed to face a quadratic-time barrier. This hardness stands in contrast to the more famous (+, ·) convolution (generally known as polynomial multiplication), which has an O(n log n)-time solution via the Fast Fourier Transform.

Our main results are algorithms with near-linear running times (in terms of the size of the knapsack and the number of items) for the knapsack problem, if either the values or the sizes of the items are small integers. More specifically, if item sizes are integers bounded by s_max, the running time of our algorithm is Õ((n + t) s_max). If the item values are integers bounded by v_max, our algorithm runs in time Õ(n + t v_max). The best previously known running times were O(nt), O(n^2 s_max), and O(n s_max v_max) (Pisinger, J. of Alg., 1999).

At the core of our algorithms lies the prediction technique: roughly speaking, this new technique enables us to compute the convolution of two vectors in time O(n e_max) when an approximation of the solution within an additive error of e_max is available.

Our results also improve the best known strongly polynomial time solutions for knapsack. In the limited-size setting, when the items have multiplicities, the fastest strongly polynomial time algorithms for knapsack run in time O(n^2 s_max^2) and O(n^3 s_max^2) for the cases of infinite and given multiplicities, respectively. Our results improve both running times by a factor of Ω(n · max{1, n/s_max}).
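To make the baselines concrete, the following is a minimal sketch (not taken from the paper) of the two routines the abstract contrasts: the classic O(nt) dynamic program for 0/1 knapsack, and a naive quadratic (min, +) convolution, for which no truly subquadratic algorithm is known.

```python
import math

def knapsack(sizes, values, t):
    """Classic O(n*t) dynamic program for 0/1 knapsack:
    dp[c] is the best value achievable with total size at most c."""
    dp = [0] * (t + 1)
    for s, v in zip(sizes, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(t, s - 1, -1):
            dp[c] = max(dp[c], dp[c - s] + v)
    return dp[t]

def min_plus_conv(a, b):
    """Naive O(n^2) (min, +) convolution:
    c[k] = min over all i + j = k of a[i] + b[j]."""
    c = [math.inf] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = min(c[i + j], x + y)
    return c
```

For example, knapsack([2, 3, 4], [3, 4, 5], 5) returns 7, by packing the items of sizes 2 and 3. The equivalence mentioned above means that speeding up either routine below quadratic time (beyond subpolynomial factors) would speed up the other.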