Image synthesis techniques are crucial to the development of autonomous driving, as they provide training and testing data for autonomous driving systems in a cost-effective manner. With advances in computer vision and artificial intelligence (AI), neural radiance fields (NeRF), 3D Gaussian splatting (3DGS), and generative modeling have attracted considerable attention in image synthesis, and these new paradigms show great potential for constructing autonomous driving scenes and synthesizing image data. Recognizing their importance to autonomous driving technology, this survey reviews their development history, collects the latest research, and re-examines the methods from the practical perspective of image synthesis for autonomous driving. The progress of NeRF, 3DGS, generative modeling, and virtual-reality fusion synthesis methods in autonomous driving is introduced, with particular focus on the two reconstruction-based methods, NeRF and 3DGS. First, key issues in the task of autonomous driving image generation are analyzed, followed by a detailed examination of representative NeRF and 3DGS schemes with respect to four challenges posed by driving scenes: limited viewpoints, large-scale scenes, dynamic objects, and rendering acceleration. Given the potential of generative models for creating autonomous driving corner cases, practical issues and existing work on using autonomous driving world models for scenario generation are also presented. Cutting-edge applications of virtual-reality fusion for autonomous driving image synthesis are then analyzed, along with the potential of combining NeRF and 3DGS with AI generative modeling for the task of autonomous driving scenario generation. Finally, current achievements are summarized and future research directions are outlined.