Capturing RGB images and estimating their corresponding depth data for training deep models is a challenging task. Several deep network models have recently been reported that formulate depth estimation as an image reconstruction problem, in order to overcome the scarcity of ground-truth depth. These models, however, involve multiple design decisions and parameters that are selected empirically, failing to capture the varying nature of the input, and hence their adaptability is limited. In this paper, we propose an automatically Gaussian-weighted deep model to achieve improved solutions for the problem of monocular depth estimation.
In comparison with the existing state of the art, our proposed very deep model is supported by novel components, including a hybrid and integrated loss function and a fine training strategy. The hybrid and integrated loss function maintains the balance between appropriate assessment of perceptual similarity and moderate resilience to both small- and large-scale errors, where the different loss terms are automatically weighted, and hence their integration is optimized, via Gaussian distribution based modelling. The fine training strategy adaptively screens all the training images via an error clustering mechanism to sustain an effective and efficient training process. Extensive experiments are carried out, and the results show that our proposed model outperforms seven compared benchmarks, representative of the existing state of the art, across all assessment metrics.
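To make the idea of automatically weighting loss terms concrete, the following is a minimal sketch, not the paper's actual formulation: it assumes a common Gaussian-likelihood style of weighting, in which each loss term is scaled by an inverse variance and a log-variance penalty prevents the weights from collapsing to zero. The function name and the particular terms combined are hypothetical illustrations.

```python
import math

def gaussian_weighted_loss(loss_terms, sigmas):
    """Combine several loss terms with Gaussian-style weights (illustrative sketch).

    Each term L_i is scaled by 1 / (2 * sigma_i**2), and a log(sigma_i)
    penalty discourages the model from trivially inflating sigma_i to
    suppress that term. The sigmas would normally be learned jointly
    with the network; here they are plain numbers for illustration.
    """
    total = 0.0
    for loss, sigma in zip(loss_terms, sigmas):
        total += loss / (2.0 * sigma ** 2) + math.log(sigma)
    return total

# Toy usage with three hypothetical terms, e.g. a reconstruction error,
# a perceptual-similarity error, and a smoothness error:
terms = [0.8, 0.5, 0.2]
sigmas = [1.0, 0.5, 2.0]
print(gaussian_weighted_loss(terms, sigmas))
```

Under this formulation, a term assigned a large sigma contributes less to the total, so the balance between the loss components is set by the weighting distribution rather than by hand-tuned coefficients, which is the spirit of the automatic weighting described above.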