This paper studies the lossless compression of rectified light-field images captured by plenoptic cameras, exploiting the high similarity between the subaperture images, or views, that compose the light-field image. The encoding is predictive: a sparse predictor is designed for each region of a view, using as regressors the pixels of the already transmitted views. As a first step, consistent segmentations of all subaperture images are constructed by defining the regions as connected components in the quantized depth map of the central view and then propagating them to all side views. The sparse predictors can account for the small horizontal and vertical disparities between corresponding regions in nearby views and perform optimal least-squares interpolation, implicitly handling fractional disparities. The optimal structure of the sparse predictor is selected for each region based on an implementable description length. The views are encoded sequentially, starting from the central view, and the scheme outperforms standard lossless compression methods applied either directly to the full light-field image or to the views in a sequential order similar to ours.
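To illustrate the core prediction step described above, the following is a minimal sketch (not the paper's actual implementation) of designing a per-region least-squares predictor: each pixel of a region in the current view is predicted as a linear combination of regressor pixels drawn from already-transmitted reference views. The function name `design_predictor`, the regressor layout, and the toy data are all illustrative assumptions; in practice the regressors would be pixels from a small window in the reference views, and the prediction residual would be entropy-coded losslessly.

```python
import numpy as np

def design_predictor(refs, target):
    """Fit a linear predictor for one region by least squares.

    refs   : (n_pixels, n_regressors) matrix; each row holds the
             regressor pixels (e.g., a small window from already
             decoded reference views) for one target pixel.
    target : (n_pixels,) pixel values of the region in the current view.

    Returns the coefficient vector c minimizing ||refs @ c - target||^2.
    """
    c, *_ = np.linalg.lstsq(refs, target, rcond=None)
    return c

# Toy example: 200 region pixels, 9 regressors (e.g., a 3x3 window
# in one reference view). The data here is synthetic, purely for
# illustration of the fitting step.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))
true_c = np.array([0.1, 0.0, 0.0, 0.0, 0.7, 0.0, 0.0, 0.0, 0.2])
y = X @ true_c

c = design_predictor(X, y)
residual = y - X @ c  # in a codec, this residual is entropy-coded
```

Because the toy target is an exact linear function of the regressors, the fitted coefficients recover `true_c` and the residual vanishes; on real views the residual is small but nonzero, which is what makes the subsequent entropy coding cheap.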