Depth estimation and image deblurring from a single defocused image are fundamental tasks in Computer Vision (CV). Many methods have been proposed to solve these two tasks separately, leveraging the powerful learning capability of Deep Learning (DL). However, when training Deep Neural Networks (DNNs) for image deblurring or Depth from Defocus (DFD), these methods mostly rely on synthetic training datasets, because densely labeling depth and defocus on real images is difficult. The performance of networks trained on synthetic data may deteriorate rapidly on real images. In this work, we present Indoor Depth from Defocus (iDFD), a depth- and defocus-annotated dataset that contains naturally defocused images, All-in-Focus (AiF) images, and dense depth maps of indoor environments. iDFD is the first public dataset to pair natural defocus with corresponding depth, captured using two complementary sensors: a DSLR and an MS-Kinect camera. This dataset can support the development of DL-based methods for depth estimation from defocus and for image deblurring by making it possible to train networks on real data instead of synthetic data. The dataset is available for download at iDFD.