To address the challenges in sparse-view 3D reconstruction, particularly reconstruction holes and accuracy degradation caused by insufficient Gaussians, a sparse-view 3D reconstruction method based on 3D Gaussian Splatting (3DGS), named DCSplat, was proposed. The method used depth constraints to adaptively complete the point cloud required for 3DGS initialization and designed a random structural similarity loss to achieve fast, high-precision reconstruction from sparse-view images. Its core lay in a proposed feedforward neural network that refined the sparse point cloud generated by structure from motion (SfM). Firstly, a pre-trained monocular depth estimation network was used to predict depth maps from the input images. Secondly, a projection matrix was constructed from the camera parameters to project the sparse point cloud onto the images, thereby establishing a correspondence between the point cloud's z-values and the predicted depth values. On this basis, a deep neural network was constructed and trained to map the depth values of image pixels to point cloud z-values, and this mapping was used to optimize and complete the point cloud required for 3DGS. Additionally, to overcome the limitations of the point-by-point optimization loss in 3DGS, a random structural similarity loss function was introduced that treated the multiple Gaussians corresponding to sampled pixels as a whole, enabling the point cloud structure to be considered globally and thereby promoting more coherent and accurate 3D reconstruction.
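The depth-constraint association step described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: it projects a toy SfM point cloud into an image with a pinhole camera model and pairs each point's camera-space z-value with the monocular depth prediction at the pixel it lands on, producing (depth, z) pairs such as those the mapping network would be trained on. All function names, the intrinsics, and the toy data are assumptions.

```python
import numpy as np

def project_points(points_w, K, R, t):
    """Project world-space points into pixel coordinates.

    Returns (u, v) pixel coordinates and the camera-space z-value
    (depth along the optical axis) of each point.
    """
    cam = (R @ points_w.T + t[:, None]).T   # world frame -> camera frame
    z = cam[:, 2]
    uv_h = (K @ cam.T).T                    # homogeneous pixel coordinates
    uv = uv_h[:, :2] / uv_h[:, 2:3]         # perspective division
    return uv, z

def pair_depth_with_z(uv, z, depth_map):
    """Sample the predicted depth map at each projected point to build
    (predicted_depth, point_z) pairs for training the mapping network."""
    h, w = depth_map.shape
    pairs = []
    for (u, v), zi in zip(uv, z):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < w and 0 <= vi < h and zi > 0:   # keep visible points
            pairs.append((depth_map[vi, ui], zi))
    return np.array(pairs)

# Toy setup: identity rotation, zero translation, simple intrinsics.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
points = np.array([[0.0, 0.0, 2.0],    # projects to the principal point
                   [0.2, 0.0, 4.0]])
depth = np.full((64, 64), 3.0)         # stand-in monocular depth map

uv, z = project_points(points, K, R, t)
pairs = pair_depth_with_z(uv, z, depth)
print(uv[0])       # [32. 32.]
print(pairs.shape) # (2, 2): two (predicted_depth, point_z) pairs
```

In the method as described, such pairs would supervise a network mapping per-pixel depth to point-cloud z-values, so that new points can be lifted from dense depth maps to complete the sparse cloud.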
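The random structural similarity loss can likewise be sketched in a hedged form: rather than a per-pixel error, SSIM is evaluated on randomly sampled patches, so groups of pixels (and the Gaussians that render them) are scored jointly through local means, variances, and covariance. The patch size, sampling scheme, and constants below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ssim_patch(x, y, c1=0.01**2, c2=0.03**2):
    """Standard SSIM score between two patches with values in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def random_ssim_loss(rendered, target, n_patches=8, patch=7, rng=None):
    """Average (1 - SSIM) over randomly placed patches, treating each
    patch of pixels as a whole rather than optimizing point by point."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = rendered.shape
    total = 0.0
    for _ in range(n_patches):
        i = rng.integers(0, h - patch + 1)
        j = rng.integers(0, w - patch + 1)
        total += 1.0 - ssim_patch(rendered[i:i+patch, j:j+patch],
                                  target[i:i+patch, j:j+patch])
    return total / n_patches

img = np.random.default_rng(1).random((32, 32))
loss_same = random_ssim_loss(img, img)        # identical images -> loss near 0
loss_diff = random_ssim_loss(img, 1.0 - img)  # structural mismatch -> positive loss
print(loss_same, loss_diff)
```

Because each patch is compared through its joint statistics, gradients flow to all Gaussians contributing to the patch at once, which is the "treat multiple Gaussians as a whole" behavior the abstract describes.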
The test results of DCSplat on the local light field fusion (LLFF), DTU multi-view stereo, and Mip-NeRF 360 unbounded anti-aliased neural radiance field standard datasets demonstrated that it matched or even surpassed the performance of existing methods on key evaluation indicators, including peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and learned perceptual image patch similarity (LPIPS), effectively improving reconstruction quality. In addition, the method completed the point cloud under depth constraints, optimized reconstruction quality from global to local scales using depth information, and achieved significant performance improvements across multiple indicators, thereby demonstrating certain application potential.