Nanoindentation is a widely accepted technique for measuring local mechanical properties. A traditional nanoindentation test takes on the order of minutes, limiting spatially resolved measurements to on the order of hundreds of indents. With the introduction of high-throughput (XPM) indentation methods, datasets of tens of thousands and, as recently demonstrated, up to one million indents can now be gathered as a map. These methods include clustering to identify regions of similar mechanical properties in inhomogeneous materials. A significant challenge is to quantitatively study the systematic differences in the measured material properties, such as hardness and modulus, that drive the clustering. A Monte Carlo simulation approach could address this, if such a simulation could be applied universally and effectively to different materials and data sets. An alternative approach is to use the dataset itself for resampling. Resampling by the bootstrap method has been applied to many different problems over the last 40 years, following the pioneering work of Efron. While the non-parametric bootstrap requires only resampling with replacement, the parametric bootstrap requires a model of the underlying probability distribution function (PDF). By applying machine learning methods, PDFs with ~10 dimensions can be modeled. By rapidly resampling the modeled PDF, it is possible to generate a large sample of simulated data and use it to study the robustness of different clustering algorithms. The resulting systematic uncertainties can then be compared with the observed statistical uncertainties for a given material.
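The two bootstrap variants mentioned above can be illustrated with a minimal sketch. The synthetic hardness and modulus values, their distributions, and the simple multivariate-Gaussian PDF model below are illustrative assumptions, not the actual indentation data or the machine-learning PDF model used in this work; the non-parametric branch resamples indents with replacement, while the parametric branch draws new samples from the fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical indentation map: each row is one indent with
# (hardness [GPa], modulus [GPa]); synthetic Gaussian data for illustration.
data = np.column_stack([
    rng.normal(5.0, 0.5, size=1000),     # hardness
    rng.normal(200.0, 15.0, size=1000),  # modulus
])

def nonparametric_bootstrap_se(sample, n_resamples=2000, rng=rng):
    """Resample rows with replacement and return the standard deviation
    of the resampled means (bootstrap estimate of the standard error)."""
    n = len(sample)
    means = np.empty((n_resamples, sample.shape[1]))
    for i in range(n_resamples):
        idx = rng.integers(0, n, size=n)     # resampling with replacement
        means[i] = sample[idx].mean(axis=0)
    return means.std(axis=0)

# Parametric bootstrap: fit a simple PDF model (here a multivariate
# Gaussian, standing in for a richer machine-learned model), then
# generate an arbitrarily large simulated dataset from it.
mean, cov = data.mean(axis=0), np.cov(data, rowvar=False)
simulated = rng.multivariate_normal(mean, cov, size=10_000)

se_hardness, se_modulus = nonparametric_bootstrap_se(data)
```

The simulated sample can then be fed repeatedly through a clustering pipeline to probe how stable the cluster assignments are, separating systematic from statistical uncertainty.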