In recent years, considerable progress has been made in the semantic segmentation of images captured under favorable conditions. However, environmental perception for autonomous driving under adverse weather conditions remains very challenging; in particular, low visibility at nighttime greatly affects driving safety. In this paper, we explore image segmentation in low-light scenarios, thereby expanding the operating range of autonomous vehicles. Deep-learning-based segmentation algorithms for road scenes depend heavily on large volumes of images with pixel-level annotations. Given the scarcity of large-scale labeled nighttime data, we collect synthetic data from an autonomous driving simulation platform and transfer the style of daytime images to nighttime using a generative adversarial network. In addition, we propose a novel nighttime segmentation framework (SFNET-N) to effectively recognize objects in dark environments, targeting the boundary blurring caused by low semantic contrast in low-illumination images. Specifically, the framework comprises a light enhancement network that, for the first time, incorporates semantic information, and a segmentation network with strong feature extraction capability. Extensive experiments on the Dark Zurich-test and Nighttime Driving-test datasets demonstrate the effectiveness of our method compared with existing state-of-the-art approaches, achieving 56.9% and 57.4% mIoU (mean of category-wise intersection-over-union), respectively. Finally, we validate the proposed models on a real vehicle in poorly lit road scenes of Zhenjiang city. The datasets are available at https://github.com/pupu-chenyanyan/semantic-segmentation-on-nightime.