The role of cooperation in managing interference -a fundamental feature of the wireless channel -is investigated by studying the two-user Gaussian interference channel where the source nodes can both transmit and receive in full-duplex. The sum-capacity of this channel is obtained to within a gap of a constant number of bits. The coding scheme employed builds on the superposition scheme of Han and Kobayashi for the two-user interference channel without cooperation. New upper bounds on the sum-capacity are also derived. The same coding scheme is shown to achieve the sum-capacity of the symmetric two-user Gaussian interference channel with noiseless feedback within a constant gap.

The source nodes 1 and 2, and the destination nodes 3 and 4, receive, respectively,

Y_1(t) = h_{2,1} X_2(t) + N_1(t),
Y_2(t) = h_{1,2} X_1(t) + N_2(t),
Y_3(t) = h_{1,3} X_1(t) + h_{2,3} X_2(t) + N_3(t),
Y_4(t) = h_{2,4} X_2(t) + h_{1,4} X_1(t) + N_4(t),

where the channel coefficients h's are complex numbers and N_k(t), k = 1, 2, 3, 4, t = 1, 2, . . ., are independent and identically distributed (i.i.d.) zero-mean Gaussian random variables with unit variance. It is easy to see that, without loss of generality, we may consider a channel where the channel coefficients h_{1,3}, h_{1,2}, h_{2,4}, h_{2,1} are replaced by their magnitudes |h_{1,3}|, |h_{1,2}|, |h_{2,4}|, |h_{2,1}|, the channel coefficient h_{1,4} is replaced by |h_{1,4}| e^{jθ/2}, and h_{2,3} is replaced by |h_{2,3}| e^{jθ/2}, where θ def= arg(h_{1,4}) + arg(h_{2,3}) − arg(h_{1,3}) − arg(h_{2,4}). We will consider this channel. We will also assume that |h_{1,2}| = |h_{2,1}| = h_C, say, which models the reciprocity of the link between nodes 1 and 2. Further, we impose unit power constraints, which is without loss of generality when both sources have the same power constraint. There is a causality restriction on what the sources are allowed to transmit: the transmission of a source can depend only on the message it sends and everything it has received up to the previous time instant, i.e.,

X_k(t) = f_{k,t}(M_k, Y_k(1), . . . , Y_k(t − 1)),

where M_k is the message to be conveyed by source k and f_{k,t} is a (deterministic) encoding function.
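The channel model after the without-loss-of-generality phase reduction can be exercised numerically. The sketch below is a minimal illustration, assuming the reduced model described above: the direct and cooperation gains are real magnitudes, and the residual phase θ/2 appears on both cross links. Function and dictionary-key names are illustrative choices, not notation from the paper.

```python
import numpy as np

def receive(x1, x2, h, rng=None):
    """One channel use of the reduced model. x1, x2: complex inputs of
    sources 1 and 2. h: dict with real magnitudes 'c' (= h_C), '13',
    '24', '14', '23' and the residual phase 'theta'. Returns
    (y1, y2, y3, y4). When rng is None the noise is omitted so the
    deterministic part of the model can be checked in isolation."""
    def n():
        if rng is None:
            return 0.0
        # i.i.d. zero-mean circularly symmetric Gaussian, unit variance
        return (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

    cross = np.exp(1j * h["theta"] / 2)             # phase on both cross links
    y1 = h["c"] * x2 + n()                          # source 1 overhears source 2
    y2 = h["c"] * x1 + n()                          # source 2 overhears source 1 (reciprocity)
    y3 = h["13"] * x1 + h["23"] * cross * x2 + n()  # destination node 3
    y4 = h["24"] * x2 + h["14"] * cross * x1 + n()  # destination node 4
    return y1, y2, y3, y4
```

With θ = 0 and real inputs, all four received signals are real, which makes the deterministic part of the model easy to sanity-check.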
A blocklength-T codebook of rate (R_1, R_2) is, for each k = 1, 2, a sequence of encoding functions f_{k,t}, t = 1, 2, . . . , T, such that

(1/T) Σ_{t=1}^{T} E[ |X_k(t)|^2 ] ≤ 1.

1. The correct notation would be Y_1(t) = h*^{(t)}_{2,1}(X_2(t)) = h_{2,1} X_2(t) + N_1(t). The t-index in the notation for random functions like h*^{(t)}_{2,1} will be suppressed. We will tacitly assume that applications of the *-ed functions for different values of t result in independent realizations of the N's.
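The two requirements on a code (causal encoding and the unit average-power constraint) can be sketched as follows. Only the structure of the definition comes from the text; the toy encoder and all names below are illustrative assumptions.

```python
import numpy as np

def encode_causally(f, message, received, T):
    """Produce X(1..T) causally: X(t) = f(t, M, Y(1..t-1)).
    The slice received[:t] exposes only strictly past channel outputs,
    mirroring the causality restriction on the sources."""
    x = []
    for t in range(T):
        x.append(f(t, message, received[:t]))
    return np.array(x)

def meets_power_constraint(x):
    """Unit average-power constraint: (1/T) * sum_t |X(t)|^2 <= 1."""
    return float(np.mean(np.abs(x) ** 2)) <= 1.0 + 1e-12
```

For example, a toy encoder emitting unit-magnitude symbols, f = lambda t, m, past: np.exp(1j * (m + t)), meets the constraint with equality, while scaling its output by 2 violates it.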