Retinography is a frequently used imaging method that aids in the clinical diagnosis of eye disorders. Low contrast and noise are the primary factors that degrade retinal fundus images, making it challenging for medical experts to diagnose and classify diseases from them. This manuscript proposes a hybrid fusion approach for vascular tree segmentation in color fundus images: a fusion model that combines supervised deep convolutional neural networks with unsupervised methods. The training fundus images were preprocessed in an unsupervised manner to improve the performance of the deep U-Net architecture and were fed into the network as parallel channels. The preprocessing pipeline consists of grayscale conversion, median filtering, CLAHE, mathematical morphology operations, Coye filtering, connected component analysis, and data augmentation. The proposed approach was tested on the publicly accessible DRIVE and HRF datasets, and sensitivity, specificity, accuracy, and F1-score were compared on the high- and low-resolution datasets. The results reveal that the parallel channel-based deep approach outperforms the baseline deep model and achieves state-of-the-art results, especially on the HRF dataset. Furthermore, fusing the predictions of only the unsupervised image processing-based models achieved the best accuracy among unsupervised methods in the literature on the DRIVE dataset. Moreover, the proposed unsupervised preprocessing adds no significant computational burden to the training of the deep learning model, and the hybrid method noticeably increases the sensitivity rate on both datasets.