This paper presents a workflow for generating synthetic point cloud datasets to train deep learning algorithms for modeling historical architectural elements. Documentation of cultural heritage is a time-consuming process that requires high precision. Computational and semi-automatic tools extend conventional methods, shortening the documentation phase and increasing the accuracy of the output. Photogrammetry and laser scanning are the standard means of acquiring geometric data, which is delivered as a point cloud with position, color, and optionally normal vector information. Deep neural networks can segment architectural elements in such data according to our interpretations, but their performance is limited when the data, despite the millions of points captured from a single building, lacks sufficient variance and quantity. To overcome this limitation, we propose semi-automatic synthetic dataset generation based on parametric definitions of historic architectural elements. We create a synthetic dataset, the Historical Dome Dataset (HDD), consisting of nearly 1000 dome systems annotated with four semantic classes. We analyze the usefulness of the HDD quantitatively and qualitatively by training several modern neural networks on it. Our point cloud synthesis method can readily be adapted to similar cultural heritage projects to prepare the data needed to accurately train deep neural networks and process collected cultural heritage data.
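The parametric generation step described above might be sketched, in a highly simplified form, as follows. All specifics here are illustrative assumptions rather than the paper's actual parametrization: the two classes (dome shell and drum), the radii, and the sampling scheme stand in for the HDD's four semantic classes and its real parametric definitions of dome systems.

```python
import numpy as np

def sample_dome(radius=1.0, drum_height=0.5, n_points=2048, rng=None):
    """Sample a labeled synthetic point cloud for a simplified dome system.

    Labels (illustrative only, not the HDD's actual classes):
      0 = dome shell, 1 = drum (cylindrical base).
    Returns (points, labels) with shapes (n_points, 3) and (n_points,).
    """
    rng = rng or np.random.default_rng()
    n_dome = n_points // 2
    n_drum = n_points - n_dome

    # Hemispherical shell: sample uniformly over the upper half-sphere.
    z = rng.uniform(0.0, 1.0, n_dome)          # cos of the polar angle
    phi = rng.uniform(0.0, 2 * np.pi, n_dome)  # azimuth
    r_xy = np.sqrt(1.0 - z**2)
    dome = radius * np.column_stack([r_xy * np.cos(phi),
                                     r_xy * np.sin(phi), z])
    dome[:, 2] += drum_height                  # lift the shell onto the drum

    # Cylindrical drum supporting the shell.
    phi = rng.uniform(0.0, 2 * np.pi, n_drum)
    h = rng.uniform(0.0, drum_height, n_drum)
    drum = np.column_stack([radius * np.cos(phi), radius * np.sin(phi), h])

    points = np.vstack([dome, drum])
    labels = np.concatenate([np.zeros(n_dome, int), np.ones(n_drum, int)])
    return points, labels

# Varying radius, drum_height, and point count across many such calls
# yields a family of labeled clouds suitable for segmentation training.
pts, lab = sample_dome(radius=2.0, n_points=4096)
```

Randomizing the parameters per sample is what provides the variance that a single scanned building cannot, which is the core idea behind the synthetic dataset.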