The goal of DNAPL ISC is iterative development of a CSM with sufficient depth and clarity to evaluate risks and develop appropriate remediation strategies.
Integrated site characterization is a process for improving the efficiency and effectiveness of characterization efforts at DNAPL sites. It encourages characterization at a resolution sufficient to capture the effects of the heterogeneities that control contaminant distribution, fate, and transport, as well as remediation effectiveness, so that an integrated three-dimensional CSM can be developed and refined. The CSM should distinguish between transport and storage zones and identify the relevant mass.
DNAPL sites have too often been characterized at a resolution insufficient for this understanding, and it is therefore reasonable to equate ISC with high(er) resolution site characterization; however, ISC should focus on whatever resolution is needed to adequately determine contaminant distribution, fate, and transport, and thereby define and effectively remediate (if necessary) any site risk.
ISC supports iterative refinement of the CSM over the project life cycle with information obtained during site investigation, remedy design, and remedy optimization. Similar to the USEPA's data quality objectives (DQOs), it relies on a systematic objectives-based site characterization process that includes defining the uncertainties and CSM deficiencies; determining the data needs and resolution appropriate for site conditions; establishing clear, effective data collection objectives; and designing a data collection and analysis plan (Figure 4-1 and Section 4.1). Through ISC, the most appropriate and up-to-date site characterization tools are selected to effectively characterize site stratigraphy, permeability, and contaminant distribution. Once the data are collected, the process includes evaluating and interpreting the data and updating the CSM.
ISC is the most effective way to develop CSMs that address groundwater contamination in general and DNAPL in particular. ISC involves eight new concepts based on the current understanding of DNAPL and aqueous-phase plume behavior and the controlling effects of hydrogeologic heterogeneities and matrix diffusion. These new concepts—which represent a substantial reconsideration of the data necessary to develop effective CSMs for DNAPL sites in both unconsolidated and consolidated hydrogeologic settings—are discussed below.
- Heterogeneity replaces homogeneity. The assumption of subsurface homogeneity has led to successful modeling and problem solving in the water supply field. In many cases, however, this assumption does not promote an understanding of groundwater contaminant fate and transport. Furthermore, understanding the scale of the controlling heterogeneities is crucial, as both micro- and macro-scale geologic heterogeneities play a controlling role in the fate and transport of both DNAPL and dissolved-phase contamination.
- Anisotropy replaces isotropy. In many cases, the assumption of an isotropic (uniform in all directions) subsurface has not provided an adequate understanding of groundwater contaminant fate and transport. As with heterogeneity, geologic anisotropy (directional dependence) plays a controlling role in the fate and transport of both DNAPL and dissolved-phase contamination.
- Diffusion replaces dispersion. Based on the above two concepts, it is now known that matrix diffusion largely controls lateral (y-dimension) and vertical (z-dimension) contaminant distribution in many subsurface systems (Hadley and Newell 2013).
- Back-diffusion is a significant source. When contaminant concentrations are greater in higher-permeability media than in adjacent lower-permeability media, contaminants diffuse into the lower-permeability media. Once this matrix diffusion has occurred and concentrations in the higher-permeability media have decreased (due to remediation or natural attenuation), the concentration gradient reverses and contaminants back-diffuse into the higher-permeability zone(s). This back-diffusion can act as a long-term source of dissolved-phase contamination to the higher-permeability unit(s). At late-stage sites, plumes are often sustained primarily by back-diffusion rather than by DNAPL dissolution.
- Non-Gaussian distribution replaces Gaussian. Geologic deposits are not typically distributed in a Gaussian (normal) fashion. Therefore, statistical methods that assume a normal distribution are often ineffective for understanding, characterizing, and predicting contaminant fate and transport. The actual distribution of permeability within geologic deposits can often be represented by a lognormal distribution rather than a Gaussian distribution or, less commonly, by a nonparametric distribution in which the cumulative distribution function is estimated from observed data.
- Transient-state replaces steady-state. While conditions at a site may appear to be in a steady state over portions of its life cycle, equilibrium is dynamic. It changes as the plume migrates, ages, and degrades; as source materials are depleted or migrate; and as new geologic features are encountered by migrating contamination.
- Nonlinear sorption replaces linear sorption. Many mathematical models (for example, BIOSCREEN) used for predicting DNAPL contaminant fate and transport in subsurface systems assume linear sorption of reactive solutes. Nonlinear sorption processes can dramatically alter contaminant transport, delaying the arrival and sharpening the front of a plume and prolonging plume tailing. The latter effect can be confused with, or misinterpreted as, either rate-limited mass transfer between mobile and immobile water (physical nonequilibrium) or rate-limited sorption-desorption (chemical nonequilibrium).
- Nonideal sorption replaces ideal sorption. Nearly all groundwater transport models assume that dissolved solutes exhibit ideal sorption behavior in equilibrium. Ideal sorption behavior indicates that the adsorption (forward reaction) and desorption (reverse reaction) processes are reversible, yielding identical isotherms at equilibrium; however, aging or prolonged soil-contaminant exposures may result in nonideal behavior, where it is difficult to remove the contaminant from the solid phase, even with aggressive extraction procedures. This can result in persistent release of the contaminant from impacted aquifer solids and, as above, can be confused with either nonlinear desorption or rate-limited mass transfer.
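The controlling role of matrix diffusion described in the concepts above can be illustrated with a simple one-dimensional calculation. The sketch below uses the analytical solution C/C0 = erfc(x / (2√(De·t))) for diffusion into a semi-infinite low-permeability layer held against a constant-concentration source; the effective diffusion coefficient is an assumed, order-of-magnitude value for a chlorinated solvent in clay, not a site-specific measurement:

```python
from math import erfc, sqrt

def relative_concentration(x_m, De_m2_s, t_s):
    """Relative concentration C/C0 at distance x into a low-permeability
    layer after diffusion time t (semi-infinite slab, constant source)."""
    return erfc(x_m / (2.0 * sqrt(De_m2_s * t_s)))

# Assumed effective diffusion coefficient (~1e-10 m^2/s is a commonly
# cited order of magnitude for chlorinated solvents in clay; illustrative only).
De = 1.0e-10              # m^2/s
year = 365.25 * 24 * 3600  # seconds per year

# After 30 years of source loading, how far has the solute penetrated?
for x_cm in (10, 30, 50, 100):
    c = relative_concentration(x_cm / 100.0, De, 30 * year)
    print(f"{x_cm:4d} cm into the clay: C/C0 = {c:.3f}")
```

The calculation shows meaningful mass stored a meter or more into the low-permeability matrix after decades of loading, which is why the stored mass can later sustain a plume by back-diffusion once the transmissive zone is cleaned up.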
ISC relies on objectives-based data collection, which provides a scientifically defensible foundation for characterization activities and helps define data needs and manage project uncertainty. ISC is a systematic, stepwise process similar to the USEPA's DQO process and the Triad approach, which comprises (1) systematic project planning; (2) dynamic work strategies; and (3) real-time measurement technologies.
ISC can be applied at any stage—development of the preliminary CSM, baseline characterization, CSM characterization, CSM design, CSM remediation/mitigation, or post-remedy (USEPA 2011a)—or when troubleshooting a nonperforming remedy.
Although not intended as a rigid sequence, Figure 4-1 illustrates the main elements of ISC. Appendix A provides case examples that illustrate the first five steps of ISC. Particular attention is focused on how data collection objectives were established for specific reasons, and in some cases modified, as the CSM was refined with additional data.
The following reference materials provide additional information about systematic project planning and the USEPA’s Triad approach:
- Technical and Regulatory Guidance for the Triad Approach: A New Paradigm for Environmental Project Management (ITRC 2003)
- Best Management Practices: Use of Systematic Project Planning Under a Triad Approach for Site Assessment (USEPA 2010)
- Improving Decision Quality: Making the Case for Adopting Next Generation Site Characterization Practices (Crumbling, D. M., J. Griffith, and D. M. Powell 2003)
The goal of DNAPL ISC is to develop a CSM with sufficient depth and clarity to accurately assess risks and develop appropriate remediation strategies. The first step of the ISC approach is to review the current CSM and determine its adequacy against that goal. If a problem becomes apparent, it should be defined in terms of uncertainties/deficiencies with the CSM so that data needs/gaps and resolution can be identified and characterization objective(s) established. An advantage of defining the problem in terms of uncertainties is that it can help determine the cost benefit, or sustainable ROI, of collecting additional data.
The baseline for ISC is any existing site information that helps develop an accurate, representative CSM. Existing data generated using traditional investigation approaches are valuable in formulating a CSM from which to identify initial data needs and gaps; however, the quality of those existing data and the sophistication of that CSM may be less than optimal. Conventional soil and groundwater characterization involved the use of soil borings and monitoring wells to collect relatively coarse subsurface interval samples (for example, soil samples every 5 feet and groundwater samples from monitoring wells with 10-foot screens). At some sites, soil samples were collected only from the unsaturated zone because it was assumed that anything below the groundwater table was best characterized by groundwater samples from monitoring wells. As a result, conventional CSMs were often founded on precise data, yet provided an inaccurate representation of contaminant distribution.
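The dilution bias of long-screened wells can be illustrated with a transmissivity-weighted mixing calculation. The sketch below is hypothetical (the layer thicknesses, hydraulic conductivities, and concentrations are invented for illustration): a well reports roughly the flow-weighted average across its screen, so a thin, highly contaminated seam within a 10-foot screen can appear far less concentrated than it actually is.

```python
def flow_weighted_concentration(layers):
    """Approximate the concentration reported by a well screened across
    several layers, weighting each layer by its transmissivity (K * b).
    Each layer is (thickness_ft, K_ft_day, conc_ug_L)."""
    total_t = sum(b * k for b, k, _ in layers)
    return sum(b * k * c for b, k, c in layers) / total_t

# Hypothetical 10-ft screen: a 1-ft impacted seam between cleaner units.
layers = [
    (4.0, 20.0,     5.0),   # upper sand, low concentration
    (1.0, 25.0, 12000.0),   # thin impacted sand seam
    (5.0,  0.5,   300.0),   # silt, moderate concentration (diffused mass)
]
c_well = flow_weighted_concentration(layers)
print(f"Flow-weighted well concentration: {c_well:.0f} ug/L")
print(f"Peak concentration in the seam:   {layers[1][2]:.0f} ug/L")
```

In this invented example, the well reports under 3,000 ug/L even though the transmissive seam carries 12,000 ug/L, which is exactly the kind of imprecision a depth-discrete ISC profile resolves.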
Following are some important considerations when reviewing existing site information and evaluating its usefulness:
This initial review phase should focus on determining what is known about DNAPL use and releases at a site; depending on the available surface and subsurface data, it should describe the heterogeneity due to lateral and vertical lithologic changes, depict the hydrostratigraphic framework from the paleoenvironmental characteristics or the structural features controlling flow, and explain how resolution of existing data affects the reliability and usability of any existing CSM. An initial draft CSM should be created if none exists.
The case study (Appendix B.3) describing Reese Air Force Base in Lubbock, Texas, illustrates a complex pattern of mass flux laterally and vertically downgradient from the source. The aquifer is composed of a very heterogeneous system of interbedded sediments varying from gravels to clays, deposited by alluvial fans and braided streams. The existing monitoring well network was effective at quantifying groundwater concentrations and identifying potential risks, but the long well screens provided limited information on detailed plume structure. The initial phase of the project required reassessment of groundwater concentrations using all available data. This included a sitewide synoptic data set collected using all of the investigation wells (>500 wells) and remediation wells (~50), as well as grab samples from over 100 private irrigation and supply wells within and adjacent to the plume. The revised plume map revealed two significant findings:
- The contaminant distribution and its movement are highly structured.
- A significant volume of the aquifer previously identified as contaminated was clean.
Historical data sets were often smaller and of lower spatial resolution than those more commonly collected within the past decade. They were also often limited by higher detection limits than are available today, or did not acknowledge the potential for temporal variation in contaminant concentrations. For example, many large sites are monitored on a rolling basis, so that a single monitoring round actually spans months to years, yet the results are often interpreted as if they represented a single point in time. Further, hydraulic data sets may be based on slug tests that interrogated a relatively small volume of the subsurface. The higher detection limits of historical data sets may also be inadequate for characterizing the vapor intrusion pathway and for decision making during later stages of a project life cycle.
Therefore, historical data may or may not be usable when evaluating a CSM. Comparing historical results with new data requires knowledge of historical data collection and analysis methods. The project team must understand the data collection methods, analytical procedures, and sampling plans that influenced the historical data set, as well as the usability of those data. Often, historical data sets can be integrated with newer, higher-resolution data sets when the limitations of the historical data are acknowledged and incorporated into the updated CSM.
Another case study (Appendix B.2) illustrates the effect of incomplete site characterization on the final CSM. Three dry cleaner sites in Indiana were in the process of remedial action, but uncertainty in the CSMs for each site led to the need for further characterization. The decision to conduct further characterization was based on concerns over vapor intrusion. The three sites are reasonably close together and were assumed to have identical CSMs. Fairly high-density vertical and horizontal soil sampling was conducted at one of the sites, and the results were applied to the sampling plans for the other two sites. When PCE concentrations in soil gas could not be explained by the CSM, a dynamic work plan was developed to define the subsurface lithology controlling the aqueous and vapor transport of PCE. Direct-push sampling was conducted, and an on-site lab analyzed the soil gas and groundwater samples. At all three sites, soil PCE source areas, aqueous-phase distribution of PCE, and soil gas (vapor intrusion) pathways were delineated.
The following data should be reviewed: the types of contaminants that were used on site, where they were stored, how they were transported, the waste disposal methods used, and where they may have been unintentionally or intentionally released to the environment. These data should be used to determine potential releases, release period (dates), release sites, and possible contaminant source zones on site.
Existing data are often ignored or misinterpreted. When existing data are inadequate to determine the vertical lithologic variability, the following questions should be asked:
- Are there continuous lithologic data in the form of high-quality continuous core descriptions? Logs of such cores may contain valuable information, including bedding thickness, degree of interbedding of fine- and coarse-grained lithology, indications of clay types and content, and descriptions of sedimentary or tectonic structures.
- Are there MIP, hydraulic profiling tool (HPT), or cone penetrometer (CPT) data, or only 18-inch samples every 5 feet with Unified Soil Classification System classifications?
- Can MIP, HPT, or CPT data be calibrated against lithologic descriptions or analytical data, and therefore serve as a proxy for lithologic data or contaminant distribution? Once the data are understood, they can be ranked according to reliability and resolution.
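Calibrating a screening tool against co-located core data, as the last question suggests, can be as simple as a least-squares fit. The sketch below is hypothetical (the paired HPT pressure and fines-content values are invented for illustration); it fits a linear relation so that subsequent HPT logs can serve as a proxy for lithologic data:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical paired data from co-located borings:
# HPT injection pressure (psi) vs. percent fines from core descriptions.
hpt_psi   = [ 5, 12, 20, 35, 55, 80]
fines_pct = [ 4, 10, 18, 32, 55, 78]

a, b = fit_linear(hpt_psi, fines_pct)
# Use the site-specific fit to classify an uncored HPT log interval.
estimate = a + b * 40.0
print(f"Estimated fines at 40 psi: {estimate:.0f}%")
```

In practice the relationship would be checked for scatter and limits of applicability before the screening data are treated as a lithologic proxy, consistent with the ranking by reliability and resolution described above.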
In Case Example 1 (Appendix B.1), it was established that thermal treatment would be effective at a coal tar DNAPL site if the spatial distribution of the DNAPL were clearly defined within the differing geologic units. Required data included the volumes of discrete lithologic units, porosity, DNAPL saturation, and the properties of the DNAPL. Evaluating the subsurface involved an adaptive management approach using a number of physical and chemical investigative tools and visual methods.
My site has been characterized using conventional techniques. Do I need to redo this work using the higher resolution methods?
If you think your existing site conceptual model is sound and the site management strategy has been successful, an extensive supplemental site characterization program is not needed.
However, if questions remain about key components of the site conceptual model—e.g., hydrogeology; contaminant distribution, fate, and transport properties; and risk—additional characterization using high-resolution techniques can be both beneficial and cost-effective. Some sites may not have been precisely delineated by conventional characterization methods (e.g., soil borings and monitoring wells); in such cases, high-resolution techniques can provide clarity on how to move forward in the site remediation/management process.
Groundwater quality data are commonly presented on figures in call-out boxes or as isoconcentration contour maps for the primary contaminants of concern (COCs); however, this approach often results in incomplete interpretation of groundwater quality data. One way to enhance the interpretation is to prepare pie charts depicting the chemical signature at each sampling point. A chemical signature is the relative abundance of the COCs. When preparing these pie charts, color schemes should take into account the relationships among compounds. For example, at sites affected by chlorinated solvents, the chlorinated ethenes can be shown using red for PCE, orange for TCE, bright yellow for cis- and trans-1,2-dichloroethene, and pale yellow for vinyl chloride. Similar color schemes can be assigned to any series of related compounds to enable rapid visual interpretation of chemical signature data on plan-view maps or cross-sectional diagrams.
This approach is critical, as interpretation of chemical concentration data alone is often misleading; groundwater chemistry data collected from monitoring wells represent flow-weighted averages of aquifer conditions in the well vicinity. It is common to find monitoring wells installed along the periphery of a historical source. In such cases, the concentrations of contaminants detected are typically orders of magnitude lower than those present within a short distance of the well screen; however, the chemical signature of those contaminants is typically consistent with a source area or plume core signature (that is, enriched in parent compounds), whereas the chemical signature detected outside of a source area or plume core is often relatively enriched in degradation products. Thus, the use of chemical signature data can enable interpretation of source areas and plume cores that are easily missed when relying on chemical concentration data alone.
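The chemical-signature concept reduces to simple molar (or mass) fractions. The sketch below is hypothetical (the well names and concentrations are invented for illustration): it converts chlorinated-ethene concentrations to molar fractions so that a parent-enriched, source-like signature can be distinguished from a daughter-enriched, distal signature even when absolute concentrations differ by orders of magnitude.

```python
# Molar masses (g/mol) for the chlorinated ethene series.
MOLAR_MASS = {"PCE": 165.8, "TCE": 131.4, "cDCE": 96.9, "VC": 62.5}

def molar_signature(conc_ug_L):
    """Convert concentrations (ug/L) to molar fractions of the total."""
    moles = {k: v / MOLAR_MASS[k] for k, v in conc_ug_L.items()}
    total = sum(moles.values())
    return {k: m / total for k, m in moles.items()}

# Hypothetical wells: two share a signature at very different concentrations.
well_near_source  = {"PCE": 9000.0, "TCE": 800.0, "cDCE": 150.0, "VC": 10.0}
well_sidegradient = {"PCE":   90.0, "TCE":   8.0, "cDCE":   1.5, "VC":  0.1}
well_distal       = {"PCE":    2.0, "TCE":  30.0, "cDCE": 400.0, "VC": 120.0}

for name, conc in [("near-source", well_near_source),
                   ("sidegradient", well_sidegradient),
                   ("distal", well_distal)]:
    sig = molar_signature(conc)
    parent = sig["PCE"] + sig["TCE"]
    print(f"{name:12s} parent fraction = {parent:.2f}")
```

The sidegradient well reports concentrations two orders of magnitude lower than the near-source well, yet its parent-enriched signature flags it as lying on the margin of a source area rather than in the degraded distal plume.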
In cases where fracture porosity dominates fluid flow and contaminant transport, sufficient data must be acquired to characterize the fractures in terms of spatial orientation, distribution, interconnectivity, and potential for transport or storage of contaminants. Once the strengths and weaknesses and vertical and lateral resolution of the existing site data are understood, hydrogeologic and chemistry data can be integrated to produce an initial CSM and identify data needs/gaps.
Once the uncertainties in the CSM are recognized, specific data needs (for example, type, location, amount, and quality) as well as data resolution (spacing or density) can be described. Spatial resolution should be assessed laterally and vertically. The goal is to achieve a data resolution related to the scale of subsurface heterogeneity that is effectively controlling contaminant transport and distribution. Data resolution should be commensurate with that scale to ensure that the distribution of contaminants is sufficiently delineated and that an effective remedial strategy, if necessary, can be developed.
The necessary resolution may differ among areas of the site or phases of the project, and depends on the depositional environment (see Appendix A). Collecting system design information may require higher-resolution sampling, while determining the potential for risk and the necessity of remedial action may be possible at a lower vertical resolution. One way to cost-effectively achieve the appropriate resolution is to collect collaborative data (see Section 4.7) by taking advantage of the speed and coverage of real-time reconnaissance tools such as MIP and laser-induced fluorescence (LIF) to target areas of contamination for higher vertical resolution (USEPA 2010). At appropriate locations, the slower, more costly techniques of higher-resolution geology/stratigraphy and quantitative contaminant evaluation are then used, which helps to limit high-resolution vertical sampling in areas where real-time tools do not indicate contamination.
Determining the correct resolution of data to collect can be difficult. The locations (plan view) and frequency (vertical) of samples are based on the initial understanding of the site prior to deployment. The density of data varies depending on site-specific data collection objectives for each of the data types (geology, hydrogeology, and chemical). For example, if a site has highly varying stratigraphy, more geologic and hydrogeologic data will be required than at a site with less stratigraphic variability.
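One rule of thumb for matching vertical sample spacing to stratigraphic variability is that the spacing should be no more than about half the thickness of the thinnest bed that controls transport, by loose analogy with the Nyquist sampling criterion. The helper below is a sketch built on that assumption (the 0.5 factor and the example bed thicknesses are illustrative, not prescriptive):

```python
def vertical_sample_spacing(controlling_bed_thickness_ft, factor=0.5):
    """Suggested maximum vertical sample spacing, taken as an assumed
    fraction (default 0.5) of the thinnest controlling bed thickness."""
    return controlling_bed_thickness_ft * factor

def undersampled(bed_thickness_ft, sample_spacing_ft, factor=0.5):
    """True if the sampling interval risks missing beds of the given
    thickness entirely (spacing exceeds the assumed fraction)."""
    return sample_spacing_ft > bed_thickness_ft * factor

# Conventional 5-ft soil sampling vs. 1-ft transmissive seams:
print(undersampled(1.0, 5.0))        # thin seams can be missed entirely
print(undersampled(20.0, 5.0))       # a massive sand is adequately resolved
print(vertical_sample_spacing(2.0))  # suggested spacing for 2-ft beds
```

Applied to the example in the text, a site with thin interbedded seams demands roughly foot-scale vertical data, while conventional 5-foot sampling remains defensible in massive, uniform deposits.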
An effective approach for determining the correct density of data required is to use on-site, real-time analysis coupled with efficient drilling techniques (see Case Example 2, Appendix B.2). Cost-effective tools are available for real-time collection of geology, hydrogeology, and contaminant distribution data; often, direct-push and sonic drilling tools are used. These tools are discussed in detail in the Tool Selection Worksheet.
With real-time results, project managers are able to identify subsequent sampling locations based on the evolving CSM (see Section 4.1). If on-site, real-time data are not used, there is a risk of oversampling (involving increased and unnecessary costs) or undersampling (resulting in an inaccurate final CSM, which then requires redeployment and additional sampling). The real-time data approach allows for efficient allocation of available resources to collect the required density of data to produce a final CSM. The final CSM should meet project characterization objectives and contain an acceptable amount of uncertainty in the understanding of the geology, hydrogeology, and contaminant distribution.
It is important to recognize and elucidate the shortcomings of the existing CSM to develop an efficient path forward. A solid understanding of the data collected and work performed at the site to date, coupled with a hydrostratigraphic framework founded on the concepts of facies and depositional environments, provides a clear picture of what is known about the subsurface and a road map for identifying data gaps and developing data collection objectives. This process could involve the following tasks:
- Identify data outliers and, if they are artifacts of data resolution, formulate hypotheses to explain why.
- Understand any remedial actions, past and present, as well as off-site conditions that affect the CSM (for example, contaminants entering the site, groundwater pumping).
- Classify the scale of variability of the stratigraphy and contaminant distribution in the subsurface. This requires careful definition of DQOs to ensure correct tool selection; the tools should provide sufficient resolution and accuracy to map the stratigraphy and classify the behavior and distribution of contaminants. Often, site-specific calibration and verification are required to determine the limits of applicability and utility of selected tools to meet screening and quantitative objectives before full-scale characterization efforts are undertaken.
Once the data needs (including type and resolution) are identified, specific objectives can be established. Often, data collection objectives are vague statements that do not fully describe the intentions and needs of a sampling program. For example, an objective might be to define the lateral and vertical contaminant distribution; without further specificity, it would be difficult to demonstrate that this objective was met. In this example, the characterization objective should be developed in a way that considers (1) the type of data needed (for example, chemical concentrations); (2) the data density and spatial resolution (for example, lateral and vertical spacing and depth); and (3) the specific concentration endpoints for each contaminant.
The lack of specificity also makes selection of appropriate data collection/investigation tools challenging and could easily lead to misapplication or a recharacterization effort later on. To avoid this, objectives should be continually parsed into increasingly specific sub-objectives, until they are sufficiently succinct and the specific data needs become clear (see Appendix B for examples).
A characterization effort is not a disparate assembly of site data, nor is it merely an intent, for example, to collect mass discharge data across a site. Assumptions and known conditions about a contaminated site can lead to the selection of specific treatment technologies, each of which has discrete treatment capabilities and costs. To optimize their application, a focused effort to characterize specific parameters of the site may be required. At Well 12A (Appendix B.4), a multicomponent treatment approach was required based on previous characterization data, earlier treatment results, and multiple performance reviews. To support this approach, a detailed characterization effort was implemented with the following objectives:
- evaluate contaminant mass extents across the source area
- map different remediation technologies across the site
- develop SMART (specific, measurable, achievable, relevant, and time-bound) objectives for individual remediation technologies (ITRC 2011b)
- evaluate methods for measuring contaminant mass discharge and select one to use for the remedial action objective compliance metric (ITRC 2010)
The specific data collection objectives included the following:
- describe the major stratigraphic units of the upper aquifer containing contaminant mass
- quantify hydraulic properties of the stratigraphic units within the contaminant zone
- map the distribution of contaminants within the stratigraphic units, including NAPL and soil and groundwater contaminant levels
- estimate contaminant volume and mass
- estimate contaminant mass discharge within stratigraphic units
- map contaminant mass discharge delivered to extraction wells of the groundwater extraction and treatment system
A three-dimensional model was used to define the source and plume boundaries and to evaluate uncertainty.
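Mass discharge, as used in the objectives above, is the sum over transect cells of concentration times Darcy flux times cell area. The sketch below is hypothetical (the cell dimensions, hydraulic conductivities, gradient, and concentrations are assumed for illustration):

```python
def mass_discharge_g_day(cells, gradient):
    """Sum C * q * A over transect cells.
    Each cell is (width_m, height_m, K_m_day, conc_ug_L).
    Darcy flux q = K * i; 1 ug/L = 1e-3 g/m^3."""
    total = 0.0
    for width, height, k, conc in cells:
        q = k * gradient                   # Darcy flux, m/day
        area = width * height              # cell area, m^2
        total += (conc * 1e-3) * q * area  # g/day contribution
    return total

# Hypothetical transect: the transmissive sand cell dominates discharge
# even though the silt holds a higher concentration.
cells = [
    (5.0, 2.0, 20.0,  5000.0),  # sand: high K, high concentration
    (5.0, 2.0,  0.1, 20000.0),  # silt: higher concentration, very low K
    (5.0, 2.0, 20.0,    50.0),  # clean sand
]
md = mass_discharge_g_day(cells, gradient=0.005)
print(f"Total mass discharge: {md:.2f} g/day")
```

The invented numbers make the point behind stratigraphic-unit-specific objectives: nearly all of the discharge moves through the transmissive sand, while the silt stores mass that contributes little to present-day flux but can sustain back-diffusion.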
ITRC champions the use of SMART remediation objectives for DNAPL sites. Although data collection objectives are not bound to meet all SMART attributes, they should be as specific as possible given what is and is not known about the site. This helps to ensure that characterization activities are driven by clear, focused, specific objectives.
Following are examples of the types of questions that can lead to development of effective data collection objectives depending on site conditions and geologic environment:
- What problem is being investigated?
- What decisions should be made?
- What are the uncertainties and project risks?
- What is the scale of the controlling heterogeneities at the site?
- Is matrix diffusion occurring at the site and what role does it play?
- Are fractures possibly transporting contaminants and what role do they play?
- What quantity of data is required and at what resolution?
- What quality of data is needed?
- What are the cost-benefits of collecting more data?
- How quickly is the information needed?
Table 4-1 provides examples of effective data collection objectives for DNAPL sites.
Data collection and analysis is simply the implementation of the chosen data measurement system and the subsequent organization of the collected data. Three types of data are generally collected: quantitative (Q) data, from tools that provide compound-specific values in units of concentration based on traceable standards (for example, μg/L, ppm, and ppbv); semi-quantitative (SQ) data, from tools that provide compound-specific measurements based on traceable standards but in units other than concentration (for example, ng or μg) or within a range; and qualitative (QL) data, from tools that provide an indirect measurement (for example, LIF and PID measurements provide a relative measure of absence or presence, but are not suitable as stand-alone tools for making remedy decisions). Each type may be collected and analyzed differently. Effective data collection objectives determine the type of data collection required, which tools to use, and how the data will be analyzed. The Tool Selection Worksheet will aid in selecting the most appropriate tool.
The Tool Selection Worksheet describes conventional and new sampling and logging techniques for collecting direct measurements, as well as sensor-based technologies. Because of the complex nature of DNAPL sites, which can involve mass distribution in the NAPL, soil, groundwater, and vapor phases, it is important to start with an approach that is designed to resolve the scale of heterogeneity of contaminant phase(s), concentration, and composition in the unsaturated and saturated zones. By collecting stratigraphic and permeability data at the same time, it is possible to discern the controlling influence that subsurface architecture (permeability and structure) has on mass distribution and interphase mass transfer. The key is to collect data at sufficient frequency, in both the vertical and horizontal directions, to ensure that the mass transport behavior of the system can be classified at a minimum, and characterized explicitly when possible in simpler geological settings, early in the characterization process. Tool selection typically depends on geologic conditions, logistical considerations, and DQOs. For example, at a site contaminated by chlorinated solvents and underlain by stratified sand and silt deposits, one of the following approaches could be used:
- A number of different tools could be used to collect data of adequate resolution. An MIP could be used to collect vertical semi-quantitative contaminant distribution data at the desired depth increment (for example, 30 cm, 50 cm, or 100 cm intervals). If coupled with a CPT, HPT, or electrical conductivity (EC) dipole array, high-density, vertical, semi-quantitative geologic information could be collected.
- Continuous soil cores could be collected using a variety of drilling methods, sub-sampled at intervals ranging from inch to foot scale, and field screened or analyzed using a field or fixed laboratory.
Both approaches could produce data sets with resolutions adequate for generating rigorous CSMs; however, the first approach would likely be faster and cheaper than the second approach, although the data would be less quantitative. If the same release occurred at a site underlain by glacial till deposits, the tool selection would likely change. Due to their compact nature and the common presence of cobbles and boulders in glacial tills, use of the MIP might be infeasible, while continuous soil sampling using sonic drilling techniques would remain practical. Depending on the target investigation depths and nature of the till, continuous soil sampling using direct-push drilling techniques might also be feasible, and on-site field contaminant analysis (using a mobile laboratory) of continuous core samples at the desired resolution can provide data sets that produce a rigorous CSM.
Logistical considerations also affect tool selection. Small-scale direct-push drill rigs can be used to access most site settings (for example, inside buildings, in wetlands using temporary roads, in alleyways), whereas larger-scale track-mounted CPT or sonic drill rigs cannot fit in some target investigation areas. Full-scale truck-mounted drill rigs, direct-push rigs, CPT rigs, and sonic drill rigs are further restricted by their size; however, these larger rigs possess greater power and can typically drill to greater depths through more difficult geologic conditions than can the smaller rigs. As the cost of drilling increases (due to more difficult geology and greater depths), the importance of real-time data for cost-effective drilling increases as well. A flexible work plan and an on-site laboratory, which allow contaminant information to be plotted on maps and cross sections during the investigation, support the selection of effective sampling locations and can significantly reduce the overall project drilling budget while ensuring that enough usable data are collected to create a robust CSM.
Data quality objectives also affect tool selection. Using the examples presented above (a chlorinated solvent release at a site underlain by stratified sand and silt deposits), if the target detection limit is 1 microgram per liter for PCE, the MIP would not be a viable tool due to sensitivity limitations. In that case, the WaterlooAPS or the HPT-groundwater sampler could be used to collect high-density hydrostratigraphic data and discrete-interval groundwater samples for analysis using a mobile or fixed laboratory. This would result in a decrease in the vertical resolution of the contaminant distribution data and would likely prohibit collection of groundwater samples from low-permeability zones, but the contaminant data would meet the DQOs that require contaminant speciation and low-level concentration data. The low-permeability zones could be investigated using soil sampling and analysis for selected locations and depths where determination of the stored dissolved-phase contaminant is an important project DQO. This approach would generate a data set that could be used to develop a rigorous CSM and achieve the project objectives.
Data limitations should be taken into consideration before tool selection to ensure that the site characterization goals are met with an acceptable level of uncertainty. For example, collecting and evaluating qualitative data prior to quantitative data incorporates the limitations of qualitative data into the data analysis and interpretation process. Consider the following performance characteristics when selecting the tools that best meet the project needs:
- specificity (the ability of an assay to measure a particular constituent or parameter rather than others in a sample)
- sensitivity (the smallest amount of a substance in a sample that can accurately be measured by an assay)
- accuracy (how close a result comes to the true value; this normally requires careful calibration of the analytical methods using standards)
- precision (the reproducibility of multiple measurements, usually described by a standard deviation, standard error, or confidence interval)
- turnaround time
- training requirements
By recognizing limits in the specificity, sensitivity, accuracy, and precision of qualitative data, yet capitalizing on rapid turnaround times and lower training requirements, the project team can quickly and cost-effectively develop a CSM to direct focused quantitative site characterization efforts.
When characterizing a site, a substantial volume of data is generated. Many of the tools described in this document generate electronic data that must be managed and interpreted, and this large volume of electronic data provides both opportunities and challenges. USEPA (2011a) notes the importance of managing data: "…the ability to efficiently access and interpret data is essential to guiding project teams through the entire cleanup process, from project planning to site completion."
Plan for Data Management
It is just as critical to plan how the data will be managed as it is to select the appropriate tools.
A significant challenge in using tools such as the MIP or LIF is that the data they collect are considered qualitative or semi-quantitative and must be integrated, managed, and interpreted along with the quantitative data (for example, contaminant concentrations, hydraulic conductivity). Qualitative and semi-quantitative data frequently have unique quality assurance/quality control measures; they typically are not validated or assigned flags, as may be done for laboratory analytical data. In addition, data from profiling-type tools can represent many individual data points, because they measure parameters at high density (cm to inch scale) with depth and time. All of the above factors can make these data more difficult to manage than data collected solely from point measurements; however, the profiling and logging tools often provide information about contaminant distribution and hydrogeologic architecture that could not be accomplished with conventional point sampling techniques and lab analyses due to budget limitations.
Data from the tools described in this document are typically provided to the consultant or site owner after the end of the field mobilization. In some cases, data can be interpreted in real time to support the field decision-making process. The data format may be digital images or logs, field notes, spreadsheets, or plots of parameters versus depth for logging tools. The data should be archived and transferred into whatever data management tool has been selected for the project. Ideally, the data management tool is capable of handling all of the types of data to be generated as part of the characterization effort. Thus, the data management needs of the project should be considered during tool selection/evaluation. Data management options can range from commercial off-the-shelf database programs to complex three-dimensional visualization software.
When the appropriate data management and visualization tools are used, it is possible to efficiently store, interpret, and present large volumes of electronic data. Higher-end data management tools (for example, visualization software) can provide capabilities for data analysis and communication/presentation. Therefore, just as it is critical to consider the strengths and limitations of each characterization tool in the selection process, it is also important to consider how the data from those tools will be managed and integrated with other data from the site (see Appendix D).
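As a concrete illustration of the data management challenge described above, high-density profiling data (for example, an MIP detector log) often must be reduced to coarser depth intervals before it can be compared with point samples. A minimal, hypothetical sketch in Python (the bin size, depths, and detector values are all invented for illustration):

```python
# Sketch: reduce a high-density depth profile (e.g., an MIP detector log)
# into coarser depth bins for comparison with point samples.
# All values and the bin size are hypothetical.

def bin_profile(depths_m, values, bin_size_m=0.5):
    """Average (depth, value) pairs into fixed-thickness depth bins.

    Returns a dict mapping each bin's top depth to the mean value in that bin.
    """
    bins = {}
    for d, v in zip(depths_m, values):
        top = (d // bin_size_m) * bin_size_m
        bins.setdefault(top, []).append(v)
    return {top: sum(vs) / len(vs) for top, vs in sorted(bins.items())}

# Hypothetical detector log: readings every 0.1 m from 0.0 to 0.9 m depth
depths = [i * 0.1 for i in range(10)]
response = [5, 7, 6, 40, 55, 48, 8, 6, 5, 4]
binned = bin_profile(depths, response)  # two 0.5 m bins in this example
```

Real projects would typically perform this reduction in a database or visualization package, but the underlying step is this simple; the choice of bin size should follow the vertical resolution defined in the DQOs.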
Data collection is generally an expensive process; therefore, it is imperative to glean as much information as possible from the data. As previously stated, three types of data are generally collected: quantitative, semi-quantitative, and qualitative. During DNAPL site characterization, the appropriate data types are collected and the appropriate tools are used to answer questions posed by the following data collection objectives:
Monitoring Wells and Bias
Monitoring wells, as traditionally constructed and used, are not recommended as primary characterization tools in unconsolidated aquifers because of the vertical and volumetric averaging of contaminant concentrations associated with this type of sampling. Bias is also introduced in how wells are sampled, because the volume and intensity of sampling further affect the vertical and volumetric averaging, making conclusions based on monitoring wells unreliable.
- Screening method. Qualitative tools may be used to further refine the understanding of site conditions or direct further data collection and analysis. Once existing data and qualitative data are assembled into collaborative data sets (see the USEPA Triad Resource Center), such data can be interpreted to improve a CSM and direct quantitative data collection.
- Fill in the gaps. If not included in the existing CSM, develop a quantitative linkage between the stratigraphy and permeability of the aquifer so that transport zones can be distinguished from storage zones and mapped across the investigation area. In essence, this phase of characterization is focused on selecting and using tools that can map the mass flux in the aquifer. This is where higher-resolution data collection is recommended to ensure adequate horizontal and vertical resolution for understanding contaminant transport at the site. Historically, it was assumed that a plume could be accurately mapped from the inside out, by stepping out at large distances from the source to map the spatial extents, without understanding how mass flux is distributed at the site; however, different quantitative tools (and combinations of tools) are now available, and are required, to accurately detect and map the occurrence of DNAPL and high-concentration source zones, compared to the moderate or maximum contaminant level concentrations in the distal portions of the plume. Different tools may be required to provide quantitative measurements of dissolved-phase contaminants in permeable transport zones compared to less permeable storage zones. Further specialization of characterization tools is required when contamination is deep, or when it occurs in different types of bedrock.
- Map the extent of the contamination to enable definition of the source zone(s) and the distal dissolved-phase plume(s). In this step, the site CSM is used to define DQOs for specific zones, potentially subdivided further based on hydrostratigraphy for large plumes within complex geologic settings. Guided by the knowledge of the behavior in the transport zones and storage zones, it is possible to begin to optimize the application of tools and adapt the frequency and location of quantitative sampling necessary to delineate the source and dissolved-phase plumes. Practitioners are advised to focus on tools that enable remedy decision making and risk assessment early in the process to avoid having to repeat quantitative sampling at field-screening locations, especially when it is impractical to correlate quantitative results with field screen measurements.
Use of multilevel devices (for example, packer and port systems or discrete interval direct-push samplers) with hydrologic characterization methods (for example, HPT and EC or CPT logging, multilevel slug tests, hydraulic tomography) and chemical sampling provides an integrated and adequate level of resolution in the characterization process. This level of characterization translates into a more informed selection of the remedy (Einarson 2006; McCall et al. 2006; Dietrich et al. 2008; Kober et al. 2009; Dietrich and Dietz 2012). In addition, packers in the wells prevent concentration averaging and migration of contaminants to new or less-contaminated zones.
This guidance provides an interactive Tool Selection Worksheet that is useful in selecting tools to characterize contaminated sites. The Tool Selection Worksheet offers a rapid method of identifying the appropriate tools and information for collecting geologic, hydrologic, and chemical data. Specific tools and techniques are listed in the following categories within the left column of the Tool Selection Worksheet:
- surface geophysics
- downhole testing
- hydraulic testing
- single well tests
- cross borehole testing
- flow metering
- vapor and soil gas sampling
- solid media sampling and analysis methods
- solid media sampling methods
- solid media evaluation and testing methods
- direct-push logging (in situ)
- discrete groundwater sampling
- multilevel sampling
- DNAPL presence
- chemical screening
- environmental molecular diagnostics
- microbial diagnostics
- stable isotope and environmental tracers tests
- on-site analytical techniques
In each type of subsurface terrain, there are physical features that may control the behavior of DNAPL or the soluble or gaseous phases of DNAPL contaminants (Chapter 3). Each geologic parameter in the Tool Selection Worksheet helps in understanding the macroscopic and microscopic characteristics of the subsurface that affect the transport of all phases of contaminants (NAPL, dissolved, sorbed, and gaseous). Each geologic parameter informs the physical framework of the CSM that controls the hydrologic characteristics of the subsurface. Analyzing the physical framework of the subsurface against the measured hydrologic properties of flow in the subsurface helps in understanding contaminant migration and distribution and in further refining and verifying the CSM. The geology, hydrology, and chemistry should be evaluated simultaneously and interpreted collectively.
Figure 4-2 shows the header rows of the downloadable Tool Selection Worksheet, from which you can select a suite of characterization tools; in the online version, Figure 4-2 displays examples when you mouse over each box. Within the downloadable worksheet, dropdown boxes allow you to choose variables in each of four categories: (1) type of investigation; (2) subsurface terrain; (3) parameter or data required; and (4) data quality. Once the selections are completed, a search populates another sheet with tools capable of collecting the type of data described by the first set of selections. If additional data types are required, another set of variables can be selected and an additional search can be completed and added to the sheet containing the first search. The search variables for both searches are listed at the top center.
Clicking on a specific parameter, for instance lithology, links to a description of the parameter and why it should be collected during characterization of the geology in a CSM. These descriptions are available for all parameters.
In the Tool Selection Worksheet, each tool name links to more information on the tool. Descriptions of the tool, its applicability, its advantages and limitations, its data quality capability, and difficulties that may be encountered when using the tool are included. Additional information is contained in the references provided at the end of each technology description table. These are linked to the full reference information.
Many of the tools are capable of collecting data in all subsurface conditions; however, some are more limited. For example, some tools cannot be used in screened or cased holes or in unsaturated conditions, and others may be able to penetrate relatively shallow depths in unconsolidated material but cannot penetrate bedrock subsurfaces without a borehole.
The downloadable Tool Selection Worksheet illustrates the applicability of each tool by shading the cell that corresponds to the tool (leftmost column) and the parameter (uppermost columns). For example, ground penetrating radar (GPR) can be used to identify lithology, lithologic contacts, and faults. The data obtained by a GPR survey can be qualitative, semi-quantitative, or quantitative depending on the care taken in calibrating the tool responses to specific geologic conditions. The Tool Selection Worksheet contains 97 tools and techniques that can be used to collect and analyze site parameters.
Having established the data needs of the DNAPL site investigation, the shaded cells of applicable tools help in selecting a suite of tools capable of collecting data to assess the site parameters in the data collection objectives. For instance, Appendix B, Case Example 1 assumes that thermal treatment is a preferred approach for remediating a DNAPL source; however, proper design requires a thorough understanding of the three-dimensional variability of porosity, saturation, NAPL properties, and distribution. The tools selected include LIF to delineate the three-dimensional distribution of coal tar NAPL, accompanied by a CPT friction log to assess the variability within the vertical stratigraphy. Physical core recovery and logging are used to estimate the ranges of porosity within discrete geologic facies. This initial suite of tools is selected only for its ability to collect the applicable data, without considering data quality, availability, cost, deployment challenges, limitations, and access, among other parameters.
Quantitative = Q , tool that provides compound-specific values in units of concentration based on traceable standards (such as μg/L, ppm, or ppbv)
Semi-quantitative = SQ, tool that provides compound-specific quantitative measurements based on traceable standards but in units other than concentrations (such as ng or µg) or provides measurements within a range
Qualitative = QL, tool that provides an indirect measurement (for example, LIF and PID measurements provide a relative measure of absence or presence, but are not suitable as stand-alone tools for making remedy decisions)
The links to the Tools Descriptions (Appendix D) must be reviewed to assess the best tools for a particular site situation. References make research of specific tools easier and much quicker as a number of the tools, originally classified as applicable, can be eliminated due to site conditions, access, cost, availability, deployment challenges, or DQOs. As discussed earlier, this Tool Selection Worksheet does not select individual tools, but it does allow for the elimination of many tools depending on the data needs and investigation plan.
The data quality determination is not tool specific. Many tools can collect semi-quantitative as well as quantitative data if care is taken to calibrate the tools to subsurface conditions and to collect and analyze the data carefully. Many of the downhole geophysical tools can collect qualitative as well as quantitative data, depending on the requirements of the investigation. Depending on the DQOs, availability, accessibility, and cost, the initially selected tools may not be appropriate, and alternative tools must be considered. Regardless, an adequate investigation requires that the characterization objective be met. There is no need to collect more data than necessary; however, the data collected must fill the data gaps in the CSM.
The objective of evaluation and interpretation of site characterization data is to gain a clear understanding of past, present, and potential future environmental conditions at a site. Through the context of the CSM, data evaluation and interpretation can facilitate more informed remedial decisions for the site. Thus, only through data analysis and interpretation can the project team make decisions (for example, determining that characterization efforts answer a stated characterization objective, or that an assumption about subsurface conditions is not supported by the data and the original assumptions must be revisited). Specifically, the data should reduce the levels of uncertainty in the CSM, with respect to the data collection objectives at the site, to an acceptable level. Through integration of all of the data types (geologic, hydrologic, and chemical), collaborative data sets (USEPA 2010) can be generated. This multiple-lines-of-evidence approach enables the CSM to provide a clearer description of contaminant transport, storage, and attenuation.
Typical approaches to evaluating geologic data include preparation of cross sections, fence diagrams, or three-dimensional representations across a site. Structure contour maps of relevant geologic surface elevations and unit thickness (isopach) maps are also helpful for many sites. When evaluating the geology, consider answering the following questions:
- Is the subsurface an unconsolidated environment (sedimentary) or a consolidated environment (bedrock, fractured rock, karst)?
- What is the horizontal and vertical extent and continuity of lower-permeability facies that can act as diffusive sources under present or future conditions?
- What is the vertical heterogeneity of strata? This question is important for identifying potential facies changes and permeability contrasts that can act as either of the following:
- vertical migration barriers for DNAPL that may represent areas where pools and lateral spreading are more likely to be present in the vicinity of a source zone (even small contrasts in permeability between facies are important)
- impediments to in situ remediation approaches, since groundwater flux rates through permeable sediment facies with large contrasts in permeability can limit the effectiveness of remedial (for example, injection or extraction) technologies
- Are there preferential pathways in the subsurface for DNAPL flow?
- Is fracturing a significant source of permeability, and are there dual permeabilities?
- What is the extent of boundary conditions (for example, faults, lithologic contacts, fracturing, facies changes) that influence horizontal and vertical groundwater flow downgradient of a NAPL source zone?
- What are the geologic features that influence vapor transport in the unsaturated zone?
Special considerations in the evaluation and interpretation of the geologic data are as follows:
- Boring logs. Boring logs are fundamental to most subsurface investigations; however, they can be tedious and prone to errors and inconsistent judgments. Logging should be completed from the collar, or top of the boring, through the completion depth of the boring. Throughout the length of the boring, depths should be recorded for any recognizable contacts, changes in grain size, sorting, modality, cementation, rock type, mineral content and percentage, fracturing and attitude when possible, competence, lithology, crystallinity, alterations, color, porosity, moisture content, and any other relevant and noticeable change in facies. These logs not only provide direct information for interpretation of the paleoenvironment and tectonic environment of the subsurface, they are also used to index other downhole testing and data collection instruments (for example, CPT, HPT, EC, MIP, natural gamma, GPR, and resistivity – see additional examples in the Tool Selection Worksheet). Appendix A contains examples of how to use boring logs to define the depositional environment of the vertical section, and then to identify data gaps and develop future data collection objectives.
Hydrogeologic data support calculation of average linear groundwater velocity, groundwater volumetric flow, chemical travel time, mass flux, and mass discharge. These site-specific determinations must fit into the regional geologic setting to validate the hydraulic conditions against the physical factors in the subsurface. Examples of hydrogeologic characteristics relevant to a DNAPL site investigation include the following:
- horizontal and vertical hydraulic conductivity of hydrostratigraphic units relevant to the transmission of groundwater, vapor, or DNAPL flow
- spatial and temporal (for example, seasonal, production well pumping intervals) variations in the hydraulic gradients, flow rate, and direction
- minimum and maximum water table elevations
- groundwater/surface water interface and springs
- capillary zone thickness
- vertical water saturation profile in the unsaturated zone (which influences the amount of pore space available for vapor transport)
- surface cover (soils) conditions (relevant to vapor transport)
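The hydrogeologic characteristics listed above feed directly into the calculations named at the start of this subsection (average linear groundwater velocity, chemical travel time, mass flux, and mass discharge). A minimal sketch of the standard formulas, with all parameter values hypothetical:

```python
# Sketch of the standard calculations these data support; all inputs hypothetical.

def linear_velocity(K_m_per_d, gradient, effective_porosity):
    """Average linear groundwater velocity: v = (K * i) / n_e, in m/d."""
    return K_m_per_d * gradient / effective_porosity

def travel_time_days(distance_m, velocity_m_per_d):
    """Advective travel time along a flow path, in days."""
    return distance_m / velocity_m_per_d

def mass_discharge_g_per_d(conc_mg_L, darcy_flux_m_per_d, area_m2):
    """Mass discharge through a transect: Md = C * q * A.

    mg/L * m^3/d works out to g/d, since 1 m^3 = 1000 L and 1000 mg = 1 g.
    """
    return conc_mg_L * darcy_flux_m_per_d * area_m2

# Hypothetical inputs: K = 1 m/d, gradient i = 0.005, effective porosity = 0.25
v = linear_velocity(1.0, 0.005, 0.25)   # m/d
t = travel_time_days(100.0, v)          # days to travel 100 m
```

These site-specific results only become meaningful once they are checked against the regional geologic setting, as noted above; the formulas themselves are the easy part.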
Groundwater elevation gradients may not be a reliable indicator of flow direction. "The groundwater elevation gradient for an aquifer expresses the gravitational driving force supporting groundwater flow, but it is not the only determinant of groundwater flow direction. The hydraulic conductivity structures of heterogeneous, anisotropic aquifers are often not aligned with the fall line of the groundwater elevation gradient. In these cases, relatively small hydraulic conductivity contrasts can direct groundwater flow and contaminants in directions significantly off the elevation fall line" (Payne et al. 2008).
Special considerations in the evaluation and interpretation of the hydrogeologic data are as follows:
- Well construction logs. Wells constructed for monitoring may provide very little reliable data for characterization. This may be because the screen length was intentionally installed to obtain average groundwater chemical concentrations over the entire length of the screened interval; often, the screened interval is far longer than the thickness of the discrete stratigraphy or facies changes being investigated in the subsurface. The depth of the well, and in particular the screened interval, must coincide with the segment of the vertical profile being investigated. Well construction logs define these parameters prior to sample collection. Well construction logs should also document the size of the borehole, amendments used in completing the drilling and construction, composition and internal diameter of any casing or screen used, and depths of changes in the material or size of casing used to develop the well.
- Evaluation of groundwater elevation data. Groundwater elevation data are often used to estimate groundwater flow direction and groundwater flow velocities (when coupled with geologic properties and hydraulic conductivity data). The use of groundwater elevation data alone, however, without incorporation of geologic structure and the spatial distribution of hydraulic conductivity, does not reliably predict groundwater flow direction, especially at small-length scales that are increasingly relevant to DNAPL remediation activities. Groundwater elevations typically vary seasonally; therefore, it may not be possible to predict long-term groundwater flow patterns based on only a few monitoring events.
Groundwater elevations also vary in response to significant precipitation or drought events, local and regional pumping, as a result of remedial activities, or changes in site conditions such as the extent of paved surfaces. The CSM should consider variations in groundwater elevation over time and the potential for these variations to influence the mass fluxes into and out of the groundwater system. Estimates of the apparent groundwater flow velocity from groundwater elevation and estimates of hydraulic conductivity are prone to error due to the inherent inaccuracy of hydraulic conductivity estimates (+/-50%) and variations in the ratio of the effective to total porosity (+/-50%) spatially within and between stratigraphic units.
- Groundwater flow and contaminant transport. An acceptable understanding of groundwater flow and of DNAPL and aqueous-phase contaminant transport cannot be based accurately on groundwater elevation alone. An understanding of the subsurface stratigraphy or facies changes, primary and secondary porosity, permeability, and structural features (faults, fractures, and contacts), coupled with the groundwater elevation and climatic data, is necessary to define groundwater flow and contaminant transport. Wells exhibiting similar groundwater elevation data may be drilled into different geologic units that are not hydraulically connected. There may be seasonally perched groundwater that inhibits contaminant transport, apparent confining layers may leak, and unidentified facies changes may influence flow paths and velocity, all of which may result in unexpected transport into what were assumed to be clean geologic units.
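As noted above, estimates of apparent groundwater flow velocity are prone to error from the inherent inaccuracy of hydraulic conductivity estimates (roughly +/-50%) and of the effective-to-total porosity ratio (roughly +/-50%). One simple way to communicate this is to propagate those relative errors through the velocity calculation; a hedged sketch with hypothetical inputs:

```python
# Sketch: bounding a velocity estimate given the roughly +/-50% uncertainties
# noted in the text for hydraulic conductivity and effective porosity.
# All parameter values are hypothetical.

def velocity_bounds(K_m_per_d, gradient, n_e, rel_err_K=0.5, rel_err_n=0.5):
    """Return (low, nominal, high) average linear velocity estimates (m/d)."""
    nominal = K_m_per_d * gradient / n_e
    low = K_m_per_d * (1 - rel_err_K) * gradient / (n_e * (1 + rel_err_n))
    high = K_m_per_d * (1 + rel_err_K) * gradient / (n_e * (1 - rel_err_n))
    return low, nominal, high

low, nominal, high = velocity_bounds(1.0, 0.005, 0.25)
# With +/-50% on both inputs, the bounds span roughly an order of magnitude.
```

Reporting such a range, rather than a single velocity, makes the CSM's uncertainty explicit for remedial decision making.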
A variety of analyses may be conducted using chemical data collected during investigation of a DNAPL site:
- Whenever possible, the composition of DNAPL source zone(s), including the types of constituents and relative mass or mole fractions of each constituent in the NAPL, should be identified, to facilitate an understanding of the effective solubility of each constituent in the NAPL mixture. This understanding will then help in evaluating the applicable technologies and remediation time frame. The dominant constituents in a source are often focused upon without sufficient consideration of constituents with lower values of effective solubility. This may influence the time required for remediation or selection of remedial technologies.
- Dissolution of a multicomponent NAPL may be evident in historical concentration-versus-time data. Compounds with higher effective solubility show a decline in concentration, while concentrations of compounds with lower effective solubility may increase or remain stable. These trends may help in evaluating the degree of DNAPL depletion that has occurred as a result of natural dissolution.
- Potential and confirmed NAPL source zones can be determined using the lines-of-evidence approach and NAPL indicators (USEPA 2009).
- The source strength (that is, mass discharge) downgradient of a source zone, and how this source strength has changed over time, can be evaluated. ITRC (2010) provides more information on the uses and methods for estimating mass discharge and mass flux.
- The 14-compartment model (ITRC 2011b) can be used to evaluate mass distribution in various phases and locations in the subsurface, including the following:
- delineation of diffusive sources in lower-permeability units based on an interpretation of soil and groundwater data
- delineation of groundwater plumes downgradient of NAPL or diffusive source zones
- evaluation of vapor transport as a result of one or more sources in the unsaturated zone or saturated zone
- The distribution of physical characteristics such as primary and secondary porosity, fraction of organic carbon (foc, used for soil partitioning and retardation calculations), and DNAPL properties (for example, density, interfacial tension, viscosity) can be evaluated. See Appendix I for examples of foc in various geologic media.
- Biogeochemical conditions can be characterized and the biodegradability and other potential attenuation mechanisms for COCs can be evaluated (ITRC 2008). For example, dissolved methane in groundwater is an important parameter for mapping and documenting natural attenuation, and specifically the reduction and attenuation of DNAPL zones in plumes. Methane, however, tends to be overlooked and is rarely used as an early warning with respect to plume migration and source areas of highly concentrated chlorinated solvents.
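The effective-solubility concept in the first bullet above can be illustrated with Raoult's law, S_eff,i = x_i · S_i, where x_i is the mole fraction of constituent i in the NAPL and S_i is its pure-phase aqueous solubility. The sketch below is a minimal illustration; the mole fractions and solubility values are hypothetical, not site-specific.

```python
# Effective solubility of each constituent in a multicomponent NAPL
# via Raoult's law: S_eff,i = x_i * S_i. All values are illustrative.

def effective_solubilities(mole_fractions, pure_solubilities_mg_l):
    """Return the effective solubility (mg/L) of each constituent."""
    return {c: mole_fractions[c] * pure_solubilities_mg_l[c]
            for c in mole_fractions}

# Hypothetical two-component NAPL (mole fractions must sum to 1)
x = {"TCE": 0.8, "PCE": 0.2}
s_pure = {"TCE": 1100.0, "PCE": 200.0}  # approximate pure-phase solubilities

print(effective_solubilities(x, s_pure))
```

Note how the minor constituent (PCE here) dissolves at well below its pure-phase solubility; such constituents can persist in the NAPL and extend the remediation time frame, which is why they deserve explicit consideration.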
Special considerations in the evaluation and interpretation of the chemical data are discussed below.
It is uncommon to conduct tracer tests at sites, unless they are in karst terrains; however, due to the historical use of multiple chemicals at different times and in different portions of sites, trace contaminants present can often be used to interpret the locations of distinct source areas and plumes. When evaluating trace contaminants, it is important to include both parent and daughter products. A simple approach for identifying potential trace contaminants is to arrange chemicals in data tables with respect to their parent-daughter relationships. Related compounds in certain wells are easily observed. Once trace contaminants have been identified, the sum of related compounds (that is, parent and associated daughter compounds) can be plotted on plan-view maps or cross-sectional diagrams. It is possible to use tracer data to differentiate among the footprints of multiple distinct plumes that are dominated by a single primary contaminant (for example, TCE).
For example, geochemical parameters such as total dissolved solids and chloride can serve as chemical tracers when mapped. Reductive dechlorination could produce an increase in chloride concentration relative to background, which can be detectable in advance of the migrating dissolved-phase plume. Because the chloride increase does not represent a drinking water criterion exceedance, it has often not been considered relevant and thus not used as a means to further investigate an aquifer; however, it can be an early sign of the presence of DNAPL, and can point toward the location of an early plume migration at depth. If vertical aquifer sampling data show increases in chloride at select depth zones, this may well indicate a pooled source of DNAPL upgradient. Therefore, chloride increases should be considered when mapping in three dimensions, and as a part of the CSM. At the very least, increased chloride concentrations can identify the preferential contaminant flow paths that are important to the CSM even if the source is a site-specific activity that released chloride not associated with dechlorination. Further geochemical analyses may help delineate the two sources.
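The stoichiometric link between dechlorination and chloride can be sketched simply: complete reductive dechlorination of TCE (C2HCl3) to ethene releases three chloride ions per TCE molecule. The example below is a back-of-the-envelope illustration with hypothetical concentrations, useful for judging whether an observed chloride increase is plausibly attributable to dechlorination.

```python
# Expected chloride increase from complete reductive dechlorination
# of TCE to ethene: 3 mol Cl- released per mol TCE degraded.
MW_TCE = 131.4   # g/mol, trichloroethene
MW_CL = 35.45    # g/mol, chloride

def chloride_increase_mg_l(tce_degraded_mg_l):
    """Chloride (mg/L) released by full dechlorination of the given TCE mass."""
    mmol_tce_per_l = tce_degraded_mg_l / MW_TCE
    return mmol_tce_per_l * 3 * MW_CL

# Full dechlorination of ~10 mg/L TCE adds roughly 8 mg/L chloride,
# a shift that may be detectable against a low natural background.
print(round(chloride_increase_mg_l(10.0), 1))
```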
Deuterium, oxygen-18, and carbon-13 isotopes provide qualitative information on the origin of water and can be used to infer age. Radioactive isotopes can be used to infer age from their rate of decay; common examples include hydrogen (tritium), carbon-14, and chlorine. These may be useful in indirectly estimating bimodality in water sources, which could indicate dual permeability and fracture connectivity (Harte 2013b; Cook and Böhlke 2000; Coplen, Herczeg, and Barnes 2000).
The age of the groundwater can indicate whether it is from an ancient subsurface source or whether the aquifer is being replenished with modern water from the surface. If an aquifer is being replenished with modern water, it is vulnerable to contamination encountered during infiltration from above. With careful testing, the flow regime can be clarified using the chemical age of the water. Along a flow path, for instance, if the age increases rapidly from one sampling point to the next, movement between the two locations is slow.
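The flow-path reasoning above can be quantified: the apparent velocity between two sampling points is the separation distance divided by the difference in groundwater age. The sketch below uses hypothetical wells and ages; real age dates carry substantial uncertainty and mixing effects that this ignores.

```python
# Apparent groundwater velocity along a flow path from chemical age dating.
def apparent_velocity_m_per_yr(distance_m, age_upgradient_yr, age_downgradient_yr):
    """Velocity (m/yr) implied by the age difference between two points."""
    delta_age = age_downgradient_yr - age_upgradient_yr
    if delta_age <= 0:
        raise ValueError("downgradient water should be older along a flow path")
    return distance_m / delta_age

# Hypothetical wells 500 m apart with ages of 10 and 35 years
print(apparent_velocity_m_per_yr(500.0, 10.0, 35.0))  # 20 m/yr
```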
The overall goal of an ISC is to collect the data necessary to provide an updated, site-specific, three-dimensional CSM, sufficiently detailed at the relevant scale, to effectively and efficiently guide site environmental management. The process of developing and updating the CSM includes compiling and synthesizing existing information, identifying data gaps and uncertainties, and determining subsequent data needs. As described in Chapter 1, oversimplified characterization of subsurface conditions has led to the concept of engineering around geology; however, remedy performance track records have shown that concept to often be flawed.
The focus of a CSM may shift from characterization toward remedial technology evaluation and selection, and later, remedy optimization. Throughout a project, decisions, data needs, and personnel shift to meet the needs of particular project stages and the associated technical requirements. Continuing to refine the CSM as the project progresses requires the ongoing collection of adequate qualitative data. In the latter stages of the CSM life cycle, additional data collection is often driven by the goal of answering specific questions or reducing uncertainty in highly specific components. At all points in the life cycle, the CSM is simply a hypothesis of site conditions and processes, and additional quantitative data collection should therefore continue to test that hypothesis at the appropriate levels.
For example, a CSM may indicate that most of the contaminant mass migration is occurring in one of many laterally extensive sand stringers within a large low-permeability silt and clay layer. The characterization objective for additional work would then be to measure the contaminant flux through the sand stringers, which could be accomplished with a number of the tools presented in the Tool Selection Worksheet. In this simple example, the result is essentially binary, answering the question: “Is the migration through a single stringer?” The CSM may require updating based on the results, with additional investigation and evaluation undertaken as needed.
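The flux measurement described above can be approximated with a Darcy-based calculation, Md = K · i · C · A. This is a back-of-the-envelope sketch with hypothetical parameter values, not a substitute for the direct flux-measurement tools in the worksheet.

```python
# Rough contaminant mass discharge through a single sand stringer:
# Md = q * C * A, where q = K * i is the Darcy flux. Inputs are hypothetical.
def stringer_mass_discharge_g_per_day(K_m_day, gradient, conc_mg_l, area_m2):
    q = K_m_day * gradient                 # Darcy flux (m/day)
    vol_l_day = q * area_m2 * 1000.0       # water flux (L/day); 1 m3 = 1000 L
    return vol_l_day * conc_mg_l / 1000.0  # mg/day -> g/day

# K = 5 m/day, i = 0.005, C = 2 mg/L, stringer cross section 0.3 m x 20 m
print(stringer_mass_discharge_g_per_day(5.0, 0.005, 2.0, 0.3 * 20.0))  # ~0.3 g/day
```

Repeating the calculation for each stringer (or finding negligible flux in all but one) is what makes the characterization question effectively binary.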
A CSM is rarely composed of individual elements with weak reliance on each other. The examination of how additional quantitative information can have a material effect on the CSM as a whole, and on other individual elements, should be central to the updating process. The integration of new data into old concepts requires experienced practitioners, particularly as detailed site investigations in the middle stages of the CSM life cycle can often have far-reaching effects not readily apparent simply from the gathered data.
Source: USEPA 2015
Scenario. Tetrabromoethane (PBA) has been released from a mineral processing facility into layered silt/sand/gravel stratigraphy. The initial characterization-stage CSM indicated that the plume was contained within the property limits and that groundwater velocities were very low (Figure 4-4).
Uncertainties. Monitoring wells were originally installed with long-screened intervals (~ 10 meters). This screen length was not likely to identify geologic controls on the plume. As a result, surface water receptors could have continued to be at risk.
New Data. An extensive drilling and sampling event was undertaken using sonic technologies and detailed core analysis, followed by MIP for high-precision placement of 1-meter well screens. Surface water sampling was also implemented.
Effects on the CSM. Additional sampling and new data identified a larger plume than recognized from historical data. The plume is now expected to reach a surface water receptor (Figure 4-5). There is a need to understand the discharge dynamics between groundwater and surface water as contaminants are not seen in surface water at levels predicted by a mixing model.
Scenario. PBA has been released from a mineral processing facility into layered silt/sand/gravel stratigraphy (same site as in previous example). The plume is primarily contained in two high-permeability sand/gravel layers within silty formation. No detectable concentrations were found below a lower clay aquitard, and DNAPL is suspected to be present in sand layers and in preferential pathways through silty layers.
Uncertainties. Neither the location of most of the DNAPL mass nor the amount of total DNAPL mass in the subsurface are known. DNAPL PBA is very dense (SG = 2.97) and is thought to have minimal lateral migration following release. The highest dissolved concentrations are in the sand/gravel layer immediately above the clay aquitard, and it is not known if DNAPL has pooled there.
New Data. Passive flux meters were deployed in a downgradient transect. The source zone is located inside a very low overhead building, and it is determined that CPT-based tools are not viable. A mini-sonic rig is used to install very-high-resolution monitoring wells (4 inch screens). A partitioning interwell tracer test (PITT) is also conducted.
Effects on the CSM. Most of the contaminant flux is through upper sand and silt layers. In addition, most groundwater flux is through a lower sand layer (Figure 4-6). The implication is that there is little DNAPL in the lower sand layer in the original source area. Figure 4-7 indicates that most of the DNAPL is present in upper regions; PBA naturally breaks down to tribromoethene quickly. The distribution of PBA in the source zone and the contaminant mass in the lower sand are the result of the plume diving to the highest-permeability layer.
Source: Johnston et al. 2013.
Source: Johnston et al. 2013.
Scenario. PBA has been released from a mineral processing facility into a layered silt/sand/gravel stratigraphy (same site as in previous examples). Most of the DNAPL mass is present in the upper regions of the source zone. Low DNAPL mass estimates (from PITT – not discussed) pointed to slow source zone pumping as the remedial approach (Figure 4-8).
Uncertainties. The mass estimate was arrived at through a number of different lines of evidence; however, uncertainty resulted in an estimated lifespan for source zone DNAPL of 3–20 years.
New Data. Detailed monitoring during pilot pumping (from multilevel wells as in the previous example, and from a single centralized extraction well) was used to calibrate source depletion models (Figure 4-9).
Effects on the CSM. It was determined that flow channeling may lead to extended remediation times under a pure pumping approach. The DNAPL mass estimate was increased from 220 kg to 258–295 kg.
Source: Johnston et al. 2013.
Source: Johnston et al. 2013.
Characterization data are commonly interpreted through the use of visual representations and analytical and numerical models. Visual representations may be two- or three-dimensional representations, usually combining one or more types of characterization data to provide an increased understanding of contaminant distribution and behavior. Whichever interpretive tool is used, the results are only as accurate as the data used to prepare them and the skill of the modeler.
Visual representations integrate different types of characterization data in a meaningful way, which can facilitate communication of complex geologic, hydrologic, and chemical concepts to both technical and nontechnical audiences (see USEPA training, “Use of Geostatistical 3-D Data Visualization/Analysis in Superfund Remedial Action investigations”). These visual representations range from simple two-dimensional cross-section models showing lithologic layers and groundwater elevations to dynamic, web-based, geospatial three-dimensional models incorporating hydrological conditions and contaminant mass flux information (see example of an environmental visualization of DNAPL migration into a regional aquifer from a drum disposal area).
Although many public domain/freeware packages are available, they may be limited in types of data that can be analyzed, extent of visualization, quality of graphics, graphic output format options, and breadth of statistical analyses. Commercial packages vary widely in price, features, and technical support services. Data interpolation algorithms also vary considerably, and the same data set could be rendered differently by different software packages.
Visualization software may include some or all of the following features:
- time series
- contouring with time series
- two- and three-dimensional playback loops of specified time frames for the following parameters:
- groundwater flow direction
- constituent concentration
- groundwater velocity
- mass flux and discharge
- atmospheric pressure
- transect slices
- statistical controls
- importation and integration of data with geo-referenced maps
The evaluation of high-resolution data sets and the integration of newly developed data with lower-resolution legacy site data are particularly challenging. The use of statistical data evaluation tools to interpret data requires a high-resolution data set in both the vertical and horizontal dimensions. Many tools develop high-resolution vertical data sets, but it is not always cost effective to generate high-resolution horizontal data sets. Plumes are rarely characterized using a grid approach; more commonly, a transect approach is used. When using transects to characterize sites, it is possible to accurately interpret data within a single transect, but often it is not possible to interpolate data between transects with a high degree of accuracy.
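As a simple, transparent illustration of interpolation between sampled locations, inverse-distance weighting (IDW) estimates a value at an unsampled point from nearby measurements. Many visualization packages default to IDW or kriging; a geostatistical method would normally be preferred where data density supports it. The data points below are hypothetical.

```python
# Inverse-distance-weighted (IDW) interpolation of a concentration at an
# unsampled location. A transparent but simplistic alternative to
# geostatistical interpolation; results are sensitive to the power parameter.
def idw(points, target, power=2.0):
    """points: list of ((x, y), value); target: (x, y) to estimate."""
    num = den = 0.0
    for (x, y), value in points:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return value                    # exact hit on a data point
        w = 1.0 / d2 ** (power / 2.0)       # weight = 1 / distance**power
        num += w * value
        den += w
    return num / den

# Hypothetical measurements (mg/L) at three locations on two transects
data = [((0, 0), 100.0), ((10, 0), 50.0), ((5, 8), 10.0)]
print(round(idw(data, (5, 2)), 1))
```

Note that IDW (like any interpolator) fills the gap between transects with an assumption, not with information; this is exactly why between-transect interpretations carry lower confidence than within-transect ones.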
The use of statistical data evaluation tools to interpret data may be considered a relatively objective means of interpreting site data; however, it is possible to adjust the manner in which the data are interpreted and presented. To do so in a defensible fashion requires some knowledge of geostatistics. Alternatively, a statistical tool can be used to develop a visually appealing rendering of site data that may not be statistically justified, but that is consistent with a subjective interpretation of a collaborative data set from the site.
Collaborative data sets are generated when multiple tools are used at a single site. This is commonly the case when both historical and newly generated data are used to develop a rigorous CSM. Collaborative data sets are also developed at sites where a variety of tools are used (for example, a combination of qualitative screening tools and quantitative confirmation tools). In such cases, it may not be possible to use statistical data evaluation tools to support data interpretation or visualization, and it may be necessary to subjectively interpret the various types of data and present them in plan or cross-sectional views using traditional data presentation tools (for example, CAD or GIS).
Various analytical and numerical models are available to help interpret data (past and present) to refine a CSM. Analytical models can range from simple equations to more complex equations that are evaluated using spreadsheet tools, and they can be used by a wide range of practitioners. Numerical models are typically used by practitioners with a more specialized background and generally involve more sophisticated input data sets.
Analytical and numerical models provide a simplified representation of complex conditions that occur in the field. While uncertainty is involved with applications of these models, tangible benefits can be gained by using these tools, such as an enhanced understanding of physical and chemical conditions in the subsurface, range of remediation time frames that can be expected, and range of behaviors to expect during or after implementation of a remediation system.
Analytical or numerical models can be used to estimate the following areas of analysis:
- NAPL source zone delineation
- distribution of mass contained in a NAPL or a diffusive (low-permeability) source zone
- current source strength (that is, mass discharge leaving a source zone) over time, and the past and potential future rates of decline in source strength
- attenuation rates downgradient of a NAPL source zone
- rate of enhanced NAPL source strength depletion associated with remediation
- time-varying ratios of solute concentrations adjacent to a source zone for multicomponent NAPLs as a result of natural or enhanced dissolution
- number of orders of magnitude reduction in source strength expected to occur for a given in situ remediation technology
- time frame for partial or more complete NAPL remediation
- time frame for back-diffusion to cease causing exceedances of groundwater cleanup criteria
- plume behavior in response to a reduction in source strength
SourceDK is a planning-level screening model for estimating groundwater remediation time frames and the uncertainties associated with the estimated time frame (Farhat et al. 2012). In this document, remediation time frame is the time required for the high-concentration source zones at a site to reach a certain target concentration. It is public-domain software developed for the Air Force Center for Engineering and the Environment by GSI Environmental, Inc. (GSI). SourceDK consists of three tiers, as discussed below.
- Tier 1 – Extrapolation. Source zones that have extended records of DNAPL site characterization data vs. time can be analyzed using the Tier 1 extrapolation tool. With this tool, log concentration vs. time is plotted and then extrapolated to estimate how long it will take to achieve a cleanup goal, assuming the current trend continues. This tool also provides the 90% and 95% confidence level in the estimate of the time to achieve the cleanup goal.
- Tier 2 – Box Model. In this tier, the simple box model developed for the BIOSCREEN model has been enhanced to include source mass estimation software and other features. The box model estimates source attenuation from a source mass estimate, mass flux of constituents leaving the source zone, and biodegradation processes in the source zone. The uncertainty in the source lifetime estimate is also provided.
- Tier 3 – Process Models. This tier employs more detailed fundamental process-based equations to determine the time and amount of naturally flowing groundwater required to flush out dissolved-phase and NAPL-dominated constituents from the source zone.
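The Tier 1 extrapolation described above amounts to a log-linear regression of concentration against time. The sketch below fits ln(C) vs. t by least squares and solves for the time at which the fitted trend reaches a cleanup goal; it omits the 90%/95% confidence-limit calculation that SourceDK provides, and the monitoring record is hypothetical.

```python
# Sketch of a SourceDK Tier 1-style extrapolation: fit ln(C) vs. t by
# least squares, then solve for the time at which the trend reaches a goal.
import math

def time_to_goal(times_yr, concs_mg_l, goal_mg_l):
    """Years to reach goal_mg_l, assuming the log-linear trend continues."""
    n = len(times_yr)
    ln_c = [math.log(c) for c in concs_mg_l]
    t_bar = sum(times_yr) / n
    y_bar = sum(ln_c) / n
    slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(times_yr, ln_c))
             / sum((t - t_bar) ** 2 for t in times_yr))
    intercept = y_bar - slope * t_bar
    if slope >= 0:
        return None  # no declining trend; extrapolation is not meaningful
    return (math.log(goal_mg_l) - intercept) / slope

# Hypothetical record: concentrations halving roughly every 2 years
t = [0, 2, 4, 6]
c = [10.0, 5.0, 2.5, 1.25]
print(round(time_to_goal(t, c, 0.005), 1))  # ~21.9 years
```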
REMChlor (Remediation Evaluation Model for Chlorinated Solvents) is a contaminant source model based on a power function relationship between source mass and source discharge, and it can consider partial source remediation at any time after the initial release. The source model serves as a time-dependent mass flux boundary condition to the analytical plume model, where flow is assumed to be one-dimensional. The plume model simulates first-order sequential decay and production of several species. The decay rates and parent-daughter yield coefficients are variable functions of time and distance. This approach allows for flexible simulation of enhanced plume degradation that may be temporary in time and limited in space, and which may have different effects on different contaminant species in the decay chain. Cancer risks posed by carcinogenic species in the plume are calculated assuming that the contaminated water is used in a house for drinking, bathing, and other household uses.
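The power-function source model can be sketched analytically: if mass discharge Λ relates to remaining source mass M by Λ = Λ₀(M/M₀)^Γ, the mass balance dM/dt = −Λ has a closed-form solution. The implementation below is a simplified illustration of that relationship, not the REMChlor code itself, and the parameter values are hypothetical.

```python
# Simplified power-function source depletion: Lambda = Lambda0 * (M/M0)**Gamma.
# Gamma = 1 gives first-order (exponential) decay; Gamma < 1 depletes the
# source in finite time; Gamma > 1 gives a long declining tail.
import math

def source_mass(t_yr, M0_kg, Lambda0_kg_yr, gamma):
    """Remaining source mass (kg) at time t under the power-function model."""
    k = Lambda0_kg_yr / M0_kg
    if gamma == 1.0:
        return M0_kg * math.exp(-k * t_yr)
    inner = 1.0 - (1.0 - gamma) * k * t_yr
    if inner <= 0:
        return 0.0  # source fully depleted (possible when gamma < 1)
    return M0_kg * inner ** (1.0 / (1.0 - gamma))

# Hypothetical 250 kg source with an initial discharge of 10 kg/yr
print(round(source_mass(10.0, 250.0, 10.0, 1.0), 1))  # ~167.6 kg after 10 yr
```

Partial source remediation can be represented in this framework by reducing M at the treatment time and letting the discharge respond through the power function.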
PREMChlor is a probabilistic version of the REMChlor model. DNAPL site characterization data can be used to develop a potential glide path for either monitored natural attenuation or for remediation projects that can be compared against future DNAPL site characterization data. Sites where the future data compare well against the glide path have adequate CSMs, while sites where the future data deviate from the glide path may need review and adjustment of the current CSM.
The BIOBALANCE Toolkit is a mass-balance-based modeling/data analysis system that allows the user to perform the following tasks:
- assess the stability of plumes originating from both vadose and submerged source zones
- evaluate plume stability (time and size) using an iterative approach that solves equations in BIOCHLOR and documents the relative contributions of various attenuation mechanisms
- examine the sustainability of anaerobic degradation processes based on an approximate balance of electron acceptor and electron donor
- provide an overarching accounting of mass balance results from the various modules in the form of a summary report
BIOBALANCE can help DNAPL site managers better interpret and understand their DNAPL site characterization data using one or more of the following modules: Source Module, Competition Module, Donor Module, and Plume Module.
The Monitoring and Remediation Optimization System (MAROS) methodology allows users to apply statistical techniques to existing site characterization data to determine if and where improvements to the current monitoring system are needed. These improvements include changes to the monitoring frequency, the parameters evaluated, and the number and location of groundwater monitoring wells. The software uses statistical plume analyses (parametric and nonparametric trend analysis) developed by GSI, and allows users to enter external plume information (empirical or modeling results) for the site. These analyses support recommendations on future sampling frequency, location, and density to optimize the current site monitoring network while maintaining adequate delineation of the plume, as well as knowledge of the plume state over time, to meet future site-specific compliance monitoring goals.
Recently, there has been increased emphasis on the use of mass flux and mass discharge concepts for DNAPL site characterization (ITRC 2010, ITRC 2011b). The Mass Flux Toolkit is an easy-to-use, free software tool that compares different mass flux/mass discharge approaches, calculates mass discharge from transect data, and applies mass discharge to manage groundwater plumes. The Mass Flux Toolkit allows users to calculate the mass discharge across one or more transects of a plume and plot mass discharge versus distance to show the effect of remediation and effect of natural attenuation processes. Three types of uncertainty analysis are included: uncertainty range due to interpolation; uncertainty due to the variability in the input data using a Monte Carlo-like analysis; and an uncertainty analysis that shows the dependency of the mass discharge estimate on data from each monitoring point.
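A basic transect-based mass discharge estimate sums C · q · A over monitoring points, each representing a subarea of the transect; the Mass Flux Toolkit adds interpolation options and the uncertainty analyses described above, which this sketch omits. Input values are hypothetical.

```python
# Mass discharge across a plume transect, summed over monitoring points:
# Md = sum(C_i * q_i * A_i), where A_i is the subarea of the transect
# represented by point i. All inputs are hypothetical.
def transect_mass_discharge_g_day(points):
    """points: list of (conc_mg_L, darcy_flux_m_day, subarea_m2)."""
    total_mg_day = 0.0
    for conc, q, area in points:
        total_mg_day += conc * q * area * 1000.0   # m3 -> L conversion
    return total_mg_day / 1000.0                   # mg -> g

# Three monitoring points, each representing 10 m2 of the transect
pts = [(5.0, 0.02, 10.0), (1.0, 0.05, 10.0), (0.1, 0.1, 10.0)]
print(transect_mass_discharge_g_day(pts))  # ~1.6 g/day
```

Comparing this total across transects at increasing downgradient distances shows attenuation along the plume, which is the plotting exercise the Toolkit automates.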
The Matrix Diffusion Toolkit is an easy-to-use, comprehensive, free software tool that can assist in effectively and efficiently estimating the effects of matrix diffusion at a site and then transferring the results to stakeholders. The software can also assist project managers in determining if remediation goals are achievable in the short term. It can be applied to most sites with heterogeneity in the subsurface, with DNAPL, and where persistent groundwater contaminant concentrations have been observed after source-zone remediation.
The Matrix Diffusion Toolkit is a valuable resource for developing CSMs, supporting site characterization efforts, planning remedial designs, and determining whether matrix diffusion will affect remediation goals for contaminated groundwater sites. The software can assist site personnel in updating or creating a more accurate CSM, which will enable them to determine whether matrix diffusion processes are significant enough to cause groundwater concentrations or downgradient plume concentrations to rebound above remediation goals after plume remediation or isolation is complete. Having this information available before a remedy is implemented can help site decision makers select more appropriate remedies and effectively and efficiently address the potential issues of matrix diffusion with regulators. Furthermore, addressing extended remediation time frames caused by matrix diffusion can lead to project cost savings.
Publication Date: April 2015