With the recent development of new three-dimensional (3D) multimedia services such as 3D television and free viewpoint television, a new 3D video format, called multiview video + depth (MVD), is currently being investigated. MVD allows as many views as required to be synthesized at the receiver side, thus providing smooth scene transitions and the ability to experience a new 3D perspective from each viewing point. Alongside traditional 2D image sequences, the format introduces sequences of depth maps, which must be efficiently coded to achieve good quality for the synthesized views. One approach to coding depth videos is to exploit the correlations between texture and depth. In this work, we propose a new tool for coding depth videos in which the texture Intra modes are inherited and used as predictors for the depth Intra modes, hence reducing the mode signaling bitrate. The tool is only used in prediction units where the texture and depth Intra directions, or modes, are expected to match. Two criteria that exploit the statistical dependency between the texture and depth Intra modes are studied in this work: GradientMax and DominantAngle. Average bitrate reductions of 1.3% and 1.6% on synthesized sequences are reported for GradientMax and DominantAngle, respectively. The latter method additionally achieves a 2.3% bitrate reduction on depth sequences.

A three-dimensional (3D) representation of a video can be achieved by multiplexing two views of the same scene (Stereo format), recorded by two different cameras, onto one stereoscopic display. While the Stereo format currently dominates the 3D video market, the development of services such as 3D television (3DTV) or free viewpoint television (FTV) creates a need for a more fluid representation of the scene, which can only be obtained if more than two views are multiplexed simultaneously on the 3D display.
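The DominantAngle criterion mentioned in the abstract relies on estimating the dominant edge direction of a texture block from its gradients. The paper does not define the computation at this point, so the following is only a minimal sketch of the general idea: accumulate gradient magnitudes into coarse angle bins and pick the strongest one. The function name, the 16-bin quantization, and the simple central-difference gradients are illustrative assumptions, not the method's actual definition.

```python
import numpy as np

def dominant_angle(block):
    """Estimate the dominant edge direction of a texture block, in degrees.

    Sketch only: the edge direction is taken perpendicular to the strongest
    gradient orientation, found via a magnitude-weighted angle histogram.
    """
    block = np.asarray(block, dtype=float)
    # Central-difference gradients (a real implementation might use Sobel)
    gx = np.zeros_like(block)
    gy = np.zeros_like(block)
    gx[:, 1:-1] = block[:, 2:] - block[:, :-2]
    gy[1:-1, :] = block[2:, :] - block[:-2, :]
    mag = np.hypot(gx, gy)
    # Fold gradient orientation into [0, 180) degrees
    ang = (np.degrees(np.arctan2(gy, gx)) + 180.0) % 180.0
    # Accumulate magnitude into 16 coarse bins of 11.25 degrees each
    bins = np.round(ang / 11.25).astype(int) % 16
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=16)
    # The edge runs perpendicular to the dominant gradient
    return (hist.argmax() * 11.25 + 90.0) % 180.0

# A block of vertical stripes has horizontal gradients, i.e. vertical edges
blk = np.tile(np.array([0, 0, 255, 255, 0, 0, 255, 255]), (8, 1))
print(dominant_angle(blk))  # → 90.0 (vertical edge direction)
```

In an encoder, an angle obtained this way would then be mapped to the nearest angular Intra mode and used as a predictor for the depth block.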
The multiview video + depth (MVD) format makes it possible to render a large number of views at the receiver side, at a reduced coding cost compared to the multiview video (MVV) format. This format is promising, and standardization activities are therefore currently focused on drafting a High Efficiency Video Coding (HEVC)-based [1] (3D-HEVC) and an AVC-based [2] (3D-AVC) 3D video coding standard, each able to exploit all the spatial, temporal, inter-view, and inter-component (between texture and depth) redundancies in an MVD video.

In MVD, depth cameras complement ordinary texture cameras. Each texture video has an associated depth video, representing the distance of objects from the camera. After encoding and transmission, the reconstructed texture and depth videos can be fed into a view synthesizer that, using the geometric information of the depths, generates the required number of intermediate views. Depth frames, commonly called depth maps, have unique characteristics that make them inherently less costly to code than texture frames.

Numerous tools found in the literature attempt to efficiently code depth maps. Some tools exploit redundancies between texture and depth in order to ach...
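The way depth maps drive intermediate-view synthesis can be sketched as a toy depth-image-based rendering (DIBR) warp: each texture pixel is shifted horizontally by a disparity derived from its depth, with a z-buffer resolving occlusions. All names here are hypothetical, `baseline_shift` stands in for the camera focal length times the baseline, and the 8-bit depth convention (255 = nearest plane) follows common MPEG practice; real synthesizers additionally fill disocclusion holes, which this sketch only marks.

```python
import numpy as np

def synthesize_view(texture, depth, baseline_shift, z_near, z_far):
    """Toy 1-D DIBR sketch: warp texture pixels by depth-derived disparity.

    texture: (H, W) luma samples; depth: (H, W) 8-bit depth map where
    255 maps to z_near and 0 to z_far. Disoccluded pixels are left as -1.
    """
    h, w = texture.shape
    # Recover per-pixel scene depth z from the 8-bit depth map
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    # Nearer objects get a larger horizontal shift
    disparity = np.round(baseline_shift / z).astype(int)
    out = np.full((h, w), -1, dtype=int)   # -1 marks unfilled holes
    zbuf = np.full((h, w), np.inf)         # z-buffer for occlusion handling
    for y in range(h):
        for x in range(w):
            xs = x + disparity[y, x]
            if 0 <= xs < w and z[y, x] < zbuf[y, xs]:
                zbuf[y, xs] = z[y, x]
                out[y, xs] = texture[y, x]
    return out

# A flat far-away row shifts uniformly by one pixel, leaving one hole
out = synthesize_view(np.array([[10, 20, 30, 40]]), np.zeros((1, 4)),
                      100.0, 1.0, 100.0)
print(out)  # → [[-1 10 20 30]]
```

Because synthesis quality depends directly on the fidelity of the reconstructed depth, coding artifacts in depth maps translate into geometric distortions in the synthesized views, which motivates the depth coding tools discussed next.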