In multivariate interpolation problems, increases in both the number of independent variables of the sought function and the number of nodes in the data set cause computational and mathematical difficulties. It may therefore be preferable to work with several lower-variate partitioned data sets instead of a single N-dimensional data set. New algorithms such as High Dimensional Model Representation (HDMR), Generalized HDMR, Factorized HDMR, and Hybrid HDMR have been developed or rearranged for these types of problems. Up to now, the efficiency of these methods has been discussed in several papers in the mathematical sense. In this work, their efficiency in the computational sense is discussed, by means of several numerical implementations.
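To make the partitioning idea concrete, the sketch below illustrates a plain (non-generalized) HDMR truncation at first order on a bivariate grid with uniform weights: the full data set is replaced by one constant term and two univariate terms. This is only an illustrative example with made-up test data, not the paper's implementation; all names and the test function are assumptions.

```python
import numpy as np

# Hypothetical bivariate data on a uniform grid: values[i, j] = f(x[i], y[j]).
# Minimal sketch of a zeroth- plus first-order HDMR truncation with
# uniform (equal) weights; names are illustrative.
x = np.linspace(0.0, 1.0, 20)
y = np.linspace(0.0, 1.0, 20)
X, Y = np.meshgrid(x, y, indexing="ij")
values = 1.0 + 2.0 * X + 3.0 * Y          # additively separable test function

f0 = values.mean()                        # constant (zeroth-order) term
f1 = values.mean(axis=1) - f0             # univariate term in x
f2 = values.mean(axis=0) - f0             # univariate term in y

# First-order HDMR approximant reassembled on the grid
approx = f0 + f1[:, None] + f2[None, :]

# For an additively separable function the first-order truncation
# reproduces the data (up to floating-point rounding)
max_err = np.abs(values - approx).max()
print(max_err)
```

For data that are not additively separable, the residual of this truncation is nonzero and higher-order (bivariate and beyond) HDMR terms, or the factorized/hybrid variants mentioned above, are needed to reduce it.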