C.3.3 Lessons Learned

The information presented in this case study is only a small portion of what was learned by implementing ISM during this field demonstration. Other aspects to be presented in a final report on this study include the time required for various aspects of the sampling; cost comparisons (based on time, resources, and analytical costs, including sample preparation); Monte Carlo simulations from the discrete and ISM data to test theories on the value of the information; an evaluation of the observed data distributions and their similarity to the simulated distributions used in Section 4; and a comparison of results from ground vs. unground samples.

The analyses presented herein suggest a few important points to consider:

  • In cases where the concentration of the COC is near the threshold of interest, it is prudent to remember that any conclusion about the site rests on a single sample of data. Even a sample collected in a careful and appropriate manner may not lead to the same conclusion that would be reached from another sample drawn from the same population. This expected variability in sample results is a primary reason why a common DQO is to collect sufficient data to calculate a UCL for a parameter estimate.

  • Partitioning DUs into subareas may provide an opportunity to discern spatial differences that would not be apparent if ISM samples were collected from the entire DU as a whole.

  • Discrete sampling is generally expected to yield a distribution of results with approximately the same arithmetic mean but higher SD, SE, and 95% UCL than ISM sampling of the same DU.

  • For this site, there was no added benefit to increasing the number of increments from 30 to 100 per ISM sample. For locations in which the sample mean and corresponding 95% UCL are close to a decision threshold, increasing the number of increments can reduce the SE (and corresponding UCL) enough to alter the decision. The challenge for most sites, particularly in the absence of pilot data, is that a risk assessor typically lacks a priori knowledge about how close the population mean may be to a decision threshold.
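The relationship among sample size, SE, and the 95% UCL noted in the first and last bullets can be sketched with a one-sided Student's-t UCL on the mean. The replicate concentrations and critical value below are purely illustrative (not site data); SE shrinks in proportion to the square root of the sample size, which is why more increments or replicates can pull the UCL closer to the mean.

```python
import math
import statistics

def t_ucl_95(data, t_crit):
    """One-sided 95% UCL on the mean using a Student's-t critical value.

    t_crit is the one-sided 95th-percentile t value for n - 1 degrees of
    freedom (e.g., 2.920 for n = 3 replicates); supplied by the caller.
    """
    n = len(data)
    mean = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean
    return mean + t_crit * se

# Hypothetical ISM replicate results in mg/kg -- illustrative values only.
replicates = [12.1, 14.8, 11.3]
ucl = t_ucl_95(replicates, t_crit=2.920)  # one-sided 95%, 2 df
```

The gap between the mean and the UCL is driven entirely by the SE term, so halving the SE (by roughly quadrupling the number of independent results) halves that gap.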
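The expectation that discrete and ISM sampling share roughly the same mean but differ in spread can also be illustrated with a small Monte Carlo sketch. The lognormal concentration model and all parameter values below are assumptions for illustration, not site data: each simulated ISM result is the average of many increments, so the ISM results cluster much more tightly around the DU mean than individual discrete results do.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def point_concentration():
    """One point-scale concentration from a hypothetical lognormal DU."""
    return random.lognormvariate(2.0, 1.0)  # assumed mu, sigma

def discrete_results(n):
    """n discrete samples, each analyzed individually."""
    return [point_concentration() for _ in range(n)]

def ism_results(replicates, increments):
    """Each ISM result is the physical average of many increments."""
    return [statistics.mean(point_concentration() for _ in range(increments))
            for _ in range(replicates)]

disc = discrete_results(30)
ism = ism_results(30, increments=30)
# Both sets estimate the same DU mean, but the ISM results have a much
# smaller SD (and hence smaller SE and a tighter 95% UCL).
```

Averaging 30 increments reduces the SD of an individual result by roughly a factor of the square root of 30, which is the mechanism behind the third bullet above.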