The latest video coding standard, Versatile Video Coding (VVC)/H.266, was developed by the Joint Video Experts Team (JVET). Its coding structure is the quad-tree plus multi-type tree (QTMT) structure, in which the multi-type tree (MTT) comprises two tree types: the binary tree (BT) and the ternary tree (TT). Because the encoder performs a brute-force rate-distortion (RD) search over the QTMT coding unit (CU) split modes, this search accounts for over 98% of the encoding time. The QTMT structure is therefore efficient in coding but computationally complex. This paper proposes a deep learning technique that predicts the QTMT-based CU split instead of relying on the brute-force QTMT search, substantially reducing the encoding time of VVC/H.266 intra mode. In the first phase, we build an extensive database of CU splitting patterns from diverse video sequences, enabling data-driven methods to significantly reduce VVC/H.266 complexity. In the second phase, in accordance with the QTMT structure's varying depths, we propose a multi-level exit CNN (MLE-CNN) model with a redundancy-removal mechanism at each level to determine the CU partition. In the third phase, we design an adaptive loss function for training the MLE-CNN that accounts for both the uncertain number of partition modes and the minimization of RD cost. Finally, a variable-threshold decision scheme is developed to achieve the targeted trade-off between low complexity and RD performance. Experimental results show that the proposed approach reduces VVC/H.266 encoding time by 47.91% to 69.11% with a negligible Bjøntegaard delta bit rate (BD-BR) increase of 1.023% to 2.919%, outperforming existing state-of-the-art approaches.
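To make the variable-threshold decision scheme concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation): each MLE-CNN exit level produces a confidence for its CU split decision, and the encoder skips the remaining brute-force RD checks as soon as a level's confidence clears that level's threshold. The function name, threshold values, and confidences below are illustrative assumptions only.

```python
def early_exit_decision(level_confidences, thresholds):
    """Hypothetical early-exit rule: return the index of the first
    exit level whose predicted confidence meets its threshold, or
    None if every level must fall back to the full RD search."""
    for level, (conf, thr) in enumerate(zip(level_confidences, thresholds)):
        if conf >= thr:
            return level
    return None

# Lower thresholds trade RD performance for speed: more CUs exit early
# (larger encoding-time saving, larger BD-BR loss), and vice versa.
fast_thresholds = [0.5, 0.5, 0.5]  # aggressive setting (illustrative)
safe_thresholds = [0.9, 0.9, 0.9]  # conservative setting (illustrative)

confs = [0.6, 0.95, 0.3]
print(early_exit_decision(confs, fast_thresholds))  # → 0
print(early_exit_decision(confs, safe_thresholds))  # → 1
```

Tuning the per-level thresholds is what lets such a scheme target a chosen point on the complexity-versus-RD-performance curve.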