Developing predictive intelligence in neuroscience, namely learning to generate multimodal medical data from a single modality, can improve neurological disorder diagnosis while minimizing data acquisition resources. Existing deep learning frameworks are mainly tailored for images and might fail to handle geometric data (e.g., brain graphs). In particular, predicting a target brain graph from a single source brain graph remains largely unexplored. Solving such a problem is generally challenged by domain fracture, i.e., the difference in distribution between the source and target domains. Besides, solving the prediction and the domain fracture independently might not be optimal for either task. To address these challenges, we propose a Learning-guided Graph Dual Adversarial Domain Alignment (LG-DADA) framework for predicting a target brain graph from a source brain graph. The proposed LG-DADA is grounded in three fundamental contributions: (1) a source data pre-clustering step using manifold learning, which first handles source data heterogeneity and second circumvents mode collapse in generative adversarial learning; (2) an alignment of the source domain to the target domain by adversarially learning their latent representations; and (3) a dual adversarial regularization that jointly learns a source embedding of training and testing brain graphs using two discriminators and predicts the training target graphs. Results on morphological brain graph synthesis show that our method produces better prediction accuracy and visual quality than other graph synthesis methods.

(c) 2020 Elsevier B.V. All rights reserved.
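To make the first contribution concrete, the following is a minimal, self-contained sketch of a source pre-clustering step on toy data. All names and dimensions here are illustrative assumptions (35 regions of interest, 60 subjects, synthetic connectivity matrices), and a plain PCA projection plus k-means stands in for the manifold-learning embedding used in the actual framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy source domain: 60 brain graphs, each a 35x35 symmetric
# morphological connectivity matrix, vectorized via its upper triangle.
# (Sizes are hypothetical, chosen only for illustration.)
n_rois, n_subjects = 35, 60
triu = np.triu_indices(n_rois, k=1)
graphs = np.abs(rng.normal(size=(n_subjects, n_rois, n_rois)))
graphs = (graphs + graphs.transpose(0, 2, 1)) / 2      # symmetrize
X = graphs[:, triu[0], triu[1]]                        # (60, 595) feature vectors

# Embedding step: PCA via SVD is a stand-in here for the
# manifold learning used in the paper's pre-clustering stage.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                                      # 2-D embedding

# Clustering step: split the heterogeneous source domain into
# more homogeneous clusters, each later handled by its own generator
# to help avoid mode collapse in adversarial training.
def kmeans(Z, k=3, iters=50):
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Z[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(axis=0)
    return labels

labels = kmeans(Z)
print(np.bincount(labels, minlength=3))
```

The intent is only to show the shape of the pipeline (vectorize graphs, embed, cluster); the adversarial alignment and dual regularization stages would then be trained per cluster.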