Light fields present a rich way to represent the 3D world by capturing the spatio-angular dimensions of the visual signal. However, the popular way of capturing light fields (LF) via a plenoptic camera presents a spatio-angular resolution trade-off. Computational imaging techniques such as compressive light field imaging and programmable coded aperture reconstruct full-sensor-resolution LF from coded projections obtained by multiplexing the incoming spatio-angular light field. Here, we present a unified learning framework that can reconstruct LF from a variety of multiplexing schemes with a minimal number of coded images as input. We consider three light field capture schemes: the heterodyne capture scheme with a code placed near the sensor, the coded aperture scheme with a code at the camera aperture, and finally the dual-exposure scheme of capturing a focus-defocus pair, where there is no explicit coding. Our algorithm consists of three stages: 1) we recover the all-in-focus image from the coded image; 2) we estimate the disparity maps for all the LF views from the coded image and the all-in-focus image; 3) we then render the LF by warping the all-in-focus image using the disparity maps. We show that our proposed learning algorithm performs either on par with or better than the state-of-the-art methods for all three multiplexing schemes. LF from a focus-defocus pair is especially attractive, as it requires no hardware modification and produces LF reconstructions comparable to current state-of-the-art learning-based view synthesis approaches that use multiple input images. Thus, our work paves the way for capturing full-resolution LF using conventional cameras such as DSLRs and smartphones.

Index Terms: Light field resolution trade-off, compressive light field imaging, coded aperture photography, disparity-based view synthesis.
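The final stage of the pipeline, rendering LF views by warping the all-in-focus image with per-view disparity maps, can be sketched as follows. This is a minimal illustrative NumPy implementation, not the paper's actual renderer: the function name `warp_view`, the nearest-neighbour sampling, and the angular offset convention `(du, dv)` are assumptions for the example.

```python
import numpy as np

def warp_view(all_in_focus, disparity, du, dv):
    """Render one light-field view by backward-warping the all-in-focus image.

    Each output pixel (y, x) is sampled from (y + dv*d, x + du*d), where d is
    the per-pixel disparity and (du, dv) is the angular offset of the target
    view from the central view. (Illustrative sketch, not the paper's method.)
    """
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Nearest-neighbour backward-warp sampling, clamped at the image border.
    src_x = np.clip(np.rint(xs + du * disparity), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys + dv * disparity), 0, h - 1).astype(int)
    return all_in_focus[src_y, src_x]

# Usage: a constant disparity of 1 px shifts the view by the angular offset.
img = np.arange(16, dtype=np.float64).reshape(4, 4)
disp = np.ones((4, 4))
view = warp_view(img, disp, du=1, dv=0)
```

In practice a learned renderer would use sub-pixel (bilinear) sampling and handle occlusions, but the core idea, disparity-scaled shifting of a single sharp image per angular view, is the same.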