All right that concludes our introduction. So now I will hand things over to Rick to get started.
For those of you who may be unfamiliar with REGENESIS, our company was founded in 1994. We specialize in soil and groundwater remediation, as well as vapor intrusion mitigation, with a broad range of technologies. Many people know us for Oxygen Release Compound, or ORC Advanced, which stimulates aerobic bioremediation of petroleum hydrocarbons. Many of you have worked with us on enhanced reductive dechlorination with HRC, or Hydrogen Release Compound, or with 3-D Microemulsion for anaerobic bioremediation. We have a tremendous amount of experience with in situ chemical oxidation, both for soil and groundwater, using our all-in-one catalyzed persulfate, known as PersulfOx, as well as RegenOx, two powerful oxidants that work on a broad range of contaminants.
In the last year or two, you’ve probably heard a lot about our new technology class using liquid activated carbon, PlumeStop. PlumeStop adsorbs contaminants, reducing their concentrations to very low levels in days to weeks. We call it a platform technology because it really creates a surface where contaminants can rapidly adsorb and microbes can flourish, leading to permanent destruction of those contaminants in situ.
Many of you may not be aware that we have a division called Land Science Technologies that focuses on vapor intrusion mitigation, both in new construction with Geo-Seal and in existing buildings with Retro-Coat. No one on this call is going to be surprised when I say there’s no silver bullet technology that works for every contaminant and every cleanup goal; we’re certainly no exception. However, REGENESIS does take pride in the fact that we’ve built a broad range of solutions that can be implemented to help save our clients time and money.
One of the common misperceptions of REGENESIS, especially among people who’ve not worked with us before, is that they might describe us as a product vendor or a product manufacturer. And although it’s true we do manufacture products for the environmental industry, the real power and value of our company is in the tremendous amount of experience we’ve gained over more than 24,000 applications worldwide. Today we’re going to share some of that experience.
For those of you who don’t know, we have a dedicated team of engineers, geologists, and scientists around the world, and this is really our secret sauce. Those folks review, evaluate, and provide remedial designs to consulting firms every day. This visibility into many sites, many different contaminants, and many different hydrogeologic conditions gives us a unique window into soil and groundwater remediation design. Many people are surprised to learn we do over 125 remedial designs per month. That provides tremendous visibility for us, not only into the types of contaminants, but into the types of cleanup goals and settings that engineering consulting firms are facing every day.
If you have sites where you’re considering remedial solutions, we’d appreciate an opportunity to review your site, make recommendations, and ultimately provide costs so you can compare them against other alternatives. Our ongoing commitment is always to provide an honest assessment and to make recommendations that we feel confident will meet your goals. And in today’s webinar, we’re actually going to show scenarios where our technologies were not selected. We’ll get into the details of how that works.
I’d like to introduce now Craig Sandefur. Craig and I have worked together for over 17 years at REGENESIS. Craig has led our technical efforts in many different areas. In the mid-90s, he was one of the first practitioners to inject solid peroxygen, a slow-release oxygen source, into the subsurface via direct push. He’s been instrumental in the development of our earliest design software and all the iterations since the mid-90s. For the past five years, he’s been very focused on product development and field application of our newest technology, PlumeStop. Today’s webinar will share some of the lessons learned, and perhaps best practices, that we’ve developed through that experience. With that, I’d love to turn it over to Craig.
Craig: Thank you, Rick. Appreciate everybody’s time today. I’d like to acknowledge a couple of guys here that Rick’s already mentioned, Chris Lee and Steve Barnes. They’ve helped me tremendously in development of content for this webinar and I appreciate that from them.
A road map slide here to take you through what I intend to discuss today. I want to offer a few personal observations, then what remediation practitioners, in my opinion, really need to know, why I think sedimentology rules, and some tools to better understand the geologic conditions at your sites. Finally, I’m going to wrap up with an analysis of the design verification program that REGENESIS has performed over the past few years, and offer a short case study to close things out. So, some observations from about 20 years of in situ design, application, and troubleshooting, because I contend that it’s at the troubleshooting phase that you really learn. You learn from failures, and we try to minimize them going forward.
So, never confuse precision with accuracy. This is the mantra I’ve developed over the last few years, and, in my opinion, precision is overrated. Precision tends to create a sense that we know more than we do about our projects. You may define high precision as a great analytical result, but I’m thinking about precision in terms of data density. That could be groundwater monitoring and sampling that stretches out over 10 years, at four events a year. It might extend into soil, in terms of high data density and frequency of sampling in the source areas.
I would contend that being precise but inaccurate is still inaccurate, and that, as a remedial designer myself, I would prefer to see remedial design data with higher levels of accuracy and lower precision. I think there’s a disconnect in general between the analysis and characterization that have occurred in the source areas as opposed to the mid and distal plume. The high data density and sampling frequency that we see collected in the source area gets extrapolated into the mid and distal plume and creates, maybe, misperceptions. So the lower data density in the mid and distal plume needs to be more rifle-shot, and in the right zone. And that’s a take-home that I’ve learned after looking at lots and lots of data sets.
So what do you, as a remedial practitioner, really need to know? Well, in my opinion, you really need to understand the organization and position of contaminant storage units, as well as the transport units. Now, in today’s webinar, I’m going to define the storage units as the fine-grained units, and the coarse-grained units as the transport units. It’s really critical to understand the vertical and lateral relationships between these storage and transport units. And I contend that, as a remedial designer, what I really care about is the sand content, that is, how much is present and where it’s located, because that helps define the plumbing, if you will, of the plume.
So I also propose, and believe strongly, and I think it’s supported by empirical data, that sedimentary processes control the relationships between fine- and coarse-grained units. If you want to understand mass storage and distribution, it’s critical that you understand this relationship, and that plume shapes, in essence, are the result of the interplay between fine- and coarse-grained unit organization and relationships.
Therefore, reagent distribution, as it flows through these soil matrices, is controlled by the soil type and those positional relationships. So if you can determine the vertical and lateral relationships between these low- and high-hydraulic-conductivity zones, you’re going a long way toward uncovering a lot of what’s happening in your plume. And remediation, as we all know, is really site specific. It’s based on the specific characteristics of your site and of the geology, if you will, in the saturated zone of your site, and it’s generally unique.
Before I move to the next slide, I’d like to direct you to the slide on your right, which shows a low-permeability clay, on the order of 10 to the minus 6. If you take into consideration the deformation of the direct push core that was collected, you can see there was most likely a 1-inch to 3/4-inch-thick zone of clean sand sandwiched between two 10-to-the-minus-6 clay zones. This sample was actually collected somewhere between 100 and 200 feet downgradient of the source area and, in fact, was the controlling feature for the entire plume at this site. So understanding the nature and extent of these transport zones becomes a very, very critical part of remediating the body of these large plumes.
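To put rough numbers on why a thin sand seam like that can control a whole plume, here’s a minimal Darcy’s law sketch. The conductivities, gradient, and thicknesses are illustrative assumptions, not data from this site:

```python
# Illustrative sketch: why a thin sand seam dominates groundwater flow.
# Darcy's law gives specific discharge q = K * i. All values are assumed.

def darcy_flux(K_cm_s: float, gradient: float) -> float:
    """Specific discharge q = K * i, in cm/s."""
    return K_cm_s * gradient

i = 0.005                        # assumed hydraulic gradient
q_clay = darcy_flux(1e-6, i)     # 10^-6 cm/s clay, as in the slide
q_sand = darcy_flux(1e-2, i)     # 10^-2 cm/s clean sand (assumed)

# A 1-inch (2.54 cm) sand seam vs. 10 ft (305 cm) of clay,
# comparing flow per unit width of aquifer:
Q_sand_seam = q_sand * 2.54
Q_clay_10ft = q_clay * 305.0

print(f"{Q_sand_seam / Q_clay_10ft:.0f}x")  # seam carries ~83x more flow
```

With these assumed numbers, the inch of sand moves roughly 80 times more water than ten feet of the surrounding clay, which is why finding that seam matters so much.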
So sedimentary processes actually control these relationships. Going back to the fine- and coarse-grained units and their distribution: sediments are deposited in an organized fashion. Although they may appear rather random, they aren’t. They’re organized in very specific and reproducible ways, based on the environment in which they were deposited. And this is founded in fundamental geologic principles that the oil industry has used with great success for over a hundred years. Folks like Rick Cramer, Mike Shultz, Fred Payne, and Joe Quinnan at AECOM and Arcadis have been championing this notion of depositional environment, and of understanding the plume in the context of the deposits themselves and the depositional environment in which they were emplaced.
Now, that doesn’t mean you always have that luxury on a smaller site, but you need to be aware of it, because lack of awareness creates a lot of error in our thinking. These deposits are very organized; although they can seem chaotic in cross-sections and in solid or continuous core, they are rarely random. So the first and guiding principle of this is that the fine- and coarse-grained units are organized based on the velocity of the carrier fluid. Most times that’s water, but sometimes it’s air.
So think about it this way: the more energy the water had during deposition, the coarser-grained the soil. The finer-grained sediments were deposited in low-energy water environments.
So to untangle these relationships, you have a host of tools at your disposal. You have high-res tools which are very appropriate and very helpful. And these include MIP, hydraulic profiling tools, electrical conductivity, CPT, and those kinds of techniques.
I’m really not going to discuss those today. I’m really going to focus in on some of the lower resolution methods that are very available to you and very helpful as well, and can get you part or most of the way there.
Continuous core soil logging is a good old-fashioned method that, when it’s done right, with the right parameters called out, is very powerful. Using geologically based descriptions rather than engineering-based descriptions will get you to better remedial designs, because you’re identifying the percent sand, the grain size, and its sorting. Those relative characteristics are the ones that mean the most to us when we’re actually doing remediation.
So at REGENESIS…
Dane: Hey, Craig, I’d like to interject just quickly there. Often, one of the objections or questions we get from consultants and end users is that design verification sounds like a fancy word for additional site characterization: “We’ve already characterized the site. We’ve defined the vertical extent, the areal extent.” I know your next few slides are going to get into it, but could you maybe talk briefly about the fundamental differences between additional site characterization for delineating a plume and what design verification is all about?
Craig: Right. Well, essentially, and I’ll cover it in the next few slides as you mentioned, characterization is regulatory driven for the most part. It’s really focused on risk pathways, liability issues, defining the vertical and lateral extent, and the plume boundaries. Those are all regulatory requirements, set in law, that have to be met. And that’s a significant difference from what we as remedial designers need to create success, which includes things like where the storage units are, where the mass is, and how it’s being transported.
Those are the things that we’re targeting with design verification. So if we segue into the next slide: what is design verification? Well, it’s really a pre-application, field verification of some of the remedial design assumptions that we have made going into the project. The consultant has gathered the data and synthesized it. They’ve entered it on a design input form and sent it in to our remediation design engineers. So we have a set of assumptions based on the data available today. I contend that that data is very reasonable, but it was collected for a different set of reasons than why design verification is performed.
Design verification is, for the most part, high-density identification of contaminant transport units. I believe consultants have usually already done a great job of identifying where the mass is. But once you get to the mid and distal plume, things get pretty fuzzy. In those sections, the mass isn’t understood quite as well, and its distribution certainly isn’t. So our objective is to improve that by doing high-density identification of these mass transport units, resulting in improved reagent placement accuracy, but also really focusing on targeting or intercepting those mass flux cells that really matter in the plume body itself.
This goes back to Dane’s question: what design verification is not. Lateral or vertical step-outs? It’s certainly not that. We’re not defining your plume boundaries over again. We’re not defining your source-receptor pathways over again, and we don’t really focus on liability and risk. What we’re focusing on is: where is the mass, how is it being transported, and how can we intercept that mass most effectively? That’s what will make a material difference in the size of your plume in the shortest period of time and result in remedial success.
So why does design verification generally improve your remedial outcomes? Because it’s focused on identifying the position of this mass, and it’s also interested in where the high mass flux zones are. So the emphasis is on identification of these principal impacted units, resulting in greater reagent-contaminant contact. The bottom line: it’s focused where it needs to be, and it’s making contact in the most efficient manner.
So, essentially, when design verification data is fed back to REGENESIS design engineers, it really helps us identify technical blind spots. And what do I mean by that? Today I’m defining a technical blind spot as a previously unidentified variable, present at your site, that is material, something that would make a big difference in the design itself. Doing this helps refine design assumptions and helps with reagent selection. It helps us calibrate the remedial reagent design itself: given the contaminant mass we now know is in those mass flux zones, for instance in the mid-plume, is the reagent volume and mass properly dialed in for it?
It also answers the question of whether, even if the design is properly dialed in on a mass-to-mass ratio, we can, geometrically speaking, actually fit that reagent into the volume we’ve got as our target treatment zone. So, essentially, we’re calibrating the target treatment zone’s accommodation rates and volumes through this part of design verification, and identifying hydraulic limitations.
So what are some of the critical components? Now, there are about five or six, but I’m really going to talk about three critical pieces: the continuous core soil logging set, which is incredibly important; selecting samples from that core to run in a lab; and performing a clear water injection test. In the next few slides I’ll talk about why these things are important. From a soil logging standpoint, you have the standard logging techniques and the parameters you look for, but I’m going to rifle-shot that down into soil settling tubes and why I think they’re important.
A soil settling tube, if you have not used one before, is a field technique that is semi-quantitative, and it yields really meaningful data when performed by a trained field geologist. It’s a visual determination of particle size and percentage, sand versus clay. Think back to the previous slide, where we talked about “where’s the sand” being one of the most critical things in design and remedial success. How much of it is sand? What size is it? How well or poorly sorted is it?
These samples lend themselves to high-density collection. You can gather them at one-foot intervals very rapidly and get a lot of good data out of it. So they’re simple and rapid, and they provide reliable data. They decrease subjectivity in some of the critical elements that geologists tend to have trouble sorting out, such as silty sands with clays. How much clay is really present? I find my own logging saying there’s way more clay than there actually is; there’s a lot more silt and fine sand, and that fine sand is masked by the silt. So if you don’t do these things, you don’t really understand how much is truly fine grained and how much is truly coarse grained. I’ve been fooled myself, so I am guilty as charged.
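For the curious, the physics behind a settling tube is Stokes’ law: settling velocity scales with the square of grain diameter, which is why timed readings of a tube separate sand from silt from clay. A small sketch with standard assumed constants (quartz grains in roughly 20 °C water), purely illustrative:

```python
# Sketch of Stokes' law, the physics behind a soil settling tube.
# Coarser grains settle much faster, so reading the tube at timed
# intervals separates sand, silt, and clay fractions.

def stokes_velocity(d_mm: float) -> float:
    """Terminal settling velocity (cm/s) of a small sphere in water:
    v = g * (rho_s - rho_w) * d^2 / (18 * mu)
    Constants assume quartz grains in water at about 20 C."""
    g = 981.0      # gravity, cm/s^2
    rho_s = 2.65   # grain density, g/cm^3 (quartz)
    rho_w = 1.0    # water density, g/cm^3
    mu = 0.01      # dynamic viscosity, g/(cm*s)
    d_cm = d_mm / 10.0
    return g * (rho_s - rho_w) * d_cm ** 2 / (18.0 * mu)

# Fine sand (0.1 mm) vs. coarse silt (0.02 mm) vs. clay (0.002 mm):
for d in (0.1, 0.02, 0.002):
    print(f"{d} mm -> {stokes_velocity(d):.2e} cm/s")
```

The fine sand settles out in seconds, the silt in minutes, and the clay takes hours, which is what lets a trained geologist read percent sand off the tube in the field. (Stokes’ law only holds for the smaller grain sizes; very coarse sand settles turbulently.)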
So the next aspect that I really want to drill into and help you understand is the clear water injection test. At its simplest, this is a documentation of acceptance rates and volumes, but the subtleties matter a great deal to what we try to do in terms of creating success. In the target treatment zone, we’re looking at acceptance rates and volumes in a quantifiable way. We’re using pressure gauges and flow meters to carefully log, at each vertical interval, typically by direct push but it could also be done in wells, how much water is being accepted over a set period of time and at what PSI. And it really assists in several ways.
In direct push, for instance, it might completely change whether you use a top-down or bottom-up approach. And with injection wells, it really helps us screen those remedial injection wells better, with more confidence, putting screens only where you need them, not where you don’t.
The final take-home on this is that, in my experience looking at a lot of sites, the data we collect from this step generally differs considerably from the estimated volumes we derive from the hydraulic conductivities we were given by our consulting partners.
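As a rough illustration of how those field notes might be reduced, here’s a sketch. The interval depths, rates, pressures, and the `design_target_gal` threshold are hypothetical values for illustration, not a REGENESIS procedure:

```python
# Hypothetical reduction of clear-water injection test field notes:
# for each vertical interval we log flow rate, pressure, and duration,
# then compare the accepted volume against an assumed design target.

from dataclasses import dataclass

@dataclass
class IntervalLog:
    depth_ft: float   # top of the injection interval
    rate_gpm: float   # sustained flow rate observed
    psi: float        # injection pressure at that rate
    minutes: float    # duration of the test at this interval

def accepted_gallons(log: IntervalLog) -> float:
    """Volume the interval accepted during the test."""
    return log.rate_gpm * log.minutes

logs = [
    IntervalLog(12.0, 2.5, 18, 10),  # sandy zone: takes water easily
    IntervalLog(14.0, 0.4, 42, 10),  # silty clay: low rate, high PSI
    IntervalLog(16.0, 3.0, 15, 10),
]

design_target_gal = 25.0  # assumed design volume per interval

for log in logs:
    vol = accepted_gallons(log)
    flag = "OK" if vol >= design_target_gal else "LIMITED"
    print(f"{log.depth_ft} ft: {vol:.0f} gal at {log.psi} psi -> {flag}")
```

A table like this is what drives the decisions Craig describes next: the 14-foot interval flags as hydraulically limited, which might push the design toward a different sequence, screen placement, or solution percentage.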
So you’ve listened to this and you’re thinking, “Yeah, all right. You’re an expert, you’re 20 years in, and you have all this experience. But give me some data to really convince me.”
So we dove into 24 of our sites. This set is growing each month, but these were the sites we had within the last few months with enough data density and robustness that we felt they would be meaningful as a cohort. We’re going to continue this study, because I think it’s going to have an impact on the whole industry and how we do business in terms of injecting reagents in situ.
By design approach, 33% were source area and 67% were in either the mid or distal part of the plume. By contaminant type, 35% of the sites we evaluated were petroleum hydrocarbons, 61% were chlorinated, and 4% were mixed or commingled plumes. Generally, the sites were evenly split between fine grained (clays and silts) and coarse grained (dominantly sand and gravel).
Here’s where the rubber meets the road, folks: technical blind spots. Once again, this is something you didn’t realize was at your site that has a material effect on the outcome of your remediation. Hydrogeologic conditions: 46% of the time, we found significant differences in the hydrogeologic conditions of the site compared to what we thought prior. 25% of the time, we had lower rates of application and smaller ROIs.
Well, I think everybody can understand that if you can’t get material in at the designed ROI, you’re going to have to adjust the number of points to still have the same level of coverage. Otherwise, it has a material effect: if you went out to your site and applied it as we had originally designed it, you’d probably have dead spots without reagent coverage. Unidentified contaminant transport zones, unidentified mass flux units: 21% of the time, we found mass flux units that weren’t previously understood to be present, which materially affects remedial outcomes and design. 18% of the time, we had thicker contamination zones, and I think it’s pretty easy to see that that would adversely affect results if you kept your original design.
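To see why a smaller achievable ROI forces more points, here’s a back-of-the-envelope sketch; the treatment area and ROI values are made-up illustrations, and real point layouts would account for overlap and geometry:

```python
# Illustrative sketch: injection point count scales roughly with the
# inverse square of the radius of influence (ROI) for the same coverage.

import math

def points_needed(area_sqft: float, roi_ft: float) -> int:
    """Approximate point count to cover an area, treating each point
    as covering a circle of radius roi_ft (no overlap factor)."""
    per_point = math.pi * roi_ft ** 2
    return math.ceil(area_sqft / per_point)

area = 5000.0                      # assumed treatment area, sq ft
print(points_needed(area, 10.0))   # design ROI of 10 ft -> 16 points
print(points_needed(area, 7.0))    # field ROI of only 7 ft -> 33 points
```

Dropping from a 10-foot to a 7-foot ROI roughly doubles the point count, which is exactly the kind of adjustment the design verification data forces before mobilization rather than after.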
Finally, higher contaminant concentrations: another 18%. That’s a striking amount of data indicating we’re missing the mark on how much mass is really present in these various parts of the plume, almost 20% of the time. So my take-home is, if you miss any one or two of these by a significant amount, your project is at risk of non-performance, or certainly under-performance.
So this is a simple pie chart showing what the design verification results have led to in terms of design changes. 62% of the time, we made some type of change on a site; 38% of the time, there was no change. Now, 35% of the time, the changes were minor. That might be going bottom-up versus top-down, or changing the reagent solution percentage somewhat. 8% of the time, we had moderate changes, which might be lopping a foot off the bottom of the target treatment zone, or some combination of two or three minor adjustments. Another 8% of the time, we changed things significantly: moving from a direct push method to injection wells, or changing the amount of reagent because we had higher concentrations than we thought, or some combination of two or three issues coming up.
And then, finally, what was mind-boggling to me is that over 10% of the time, we canceled the application because we did not think it would be appropriate for that site. We didn’t think we could accomplish the goal, maybe because free phase was present, or because of some other constraint on the remedial objectives, something like hydraulic conductivity or accommodation, that wouldn’t allow us to apply the reagent at the kinds of ROIs that we needed.
So that avoidance cost is significant, and I think anyone doing remediation would want to know about that ahead of time and avoid those costs.
Dane: Craig, I think this slide is really, really powerful. You and I both know about non-performance of remedial designs, whether it’s missing the target treatment zone or missing the mass estimates. At the end of the day, these types of results really help us hone in and determine our success.
You mentioned these changes, and for some people on the call that also equates to higher cost. One of the analyses we performed was how often we have to change costs. On the sites in this data set, only 4% of the time, which works out to one site, did the costs increase. And only 4%, again just one site, did the costs decrease.
So for the big chunk of those, there was really no change in the overall cost we initially proposed. Back to your point, we changed injection treatment intervals. Instead of injecting from 10 to 20 feet, we said, “Hey, let’s focus from 12 to 16 feet,” or, like you said, bottom-up injection versus top-down. So when we say changes, it’s really alterations and modifications to the design so that we can ensure better results.
The other part that’s really important to me is that the last thing we as a company and our consultants need are non-performing sites. And this 11%, when we say canceled, doesn’t mean that remediation didn’t move forward. I know Craig has an example where we had proposed, I believe, in situ chemical oxidation, and there was just way too much mass for it to be effective. As a result, we went to the consultant and said, “Look, we think there’s a really high risk of non-performance based on the design verification results. Honestly, our recommendation would be a hot-spot dig and haul to get the bulk of that mass.” I don’t know if you’d like to add anything to that.
Craig: That was an interesting site in the sense that it turned out a lot finer grained than we thought, we had hydraulic limitations, and the mass was pretty high. At the end of the day, on two fronts, it just seemed like the wrong course of action. By canceling, we avoided the mobilization, the cost of the product, and a lot of headaches in time, effort, and money, just by avoiding the whole application program. It’s an early warning indicator that you’re not going to do well. So it’s very powerful.
So I’m going to transition into the final set of slides, a case study in Nebraska. It’s a UST site, so it’s a petroleum hydrocarbon site. The remedial strategy we developed was a direct push injection program, with ISCO in the source area and mid-plume in a grid format. If you look, there’s a big orange band across the center of the plume; that’s a utility corridor where we weren’t able to put anything. So on the south, the lower left if you will, we applied the oxidant, and on the upper right we applied PlumeStop plus an electron acceptor, ORC Advanced.
So the bottom left would have been PersulfOx, and the upper right would be PlumeStop plus the electron acceptor. The design verification objectives were very clear. We had enough information from the existing data to know we had sandy units that were most likely the transport units. But what we specifically needed to know was the vertical and lateral extent of these flux zones. We needed to confirm what mass was present in those zones, as well as the accommodation rates we could apply in those zones without stressing the aquifer. So we performed detailed soil examination; we did our geologically based logging, which encompasses the standard procedures that we all know.
But what we really focused on was soil types, using the settling tubes. We collected these at fairly high density within the zones where we knew the mass flux zones existed, at one-foot vertical intervals, and we clearly defined those high-percent-sand units. We identified the high flux units, the mass flux units. And, by the way, we also identified elevated TPH downgradient, in the distal section of the plume.
So using this data, we were able to refine the plume’s boundaries, if you will, and optimize remediation for that site. This is a cartoon cross-section, but the high TPH zones are in red, and you can see where we applied material based on mass and on soil type. So it wasn’t always just “put it in the sandy zone”; we also focused on where the mass is.
So we varied the reagent solution percentages for these soil conditions, interbedded clay and sand; we changed the percent solution so we could accommodate the finer-grained material. The injection test really clearly spoke to that and helped us with those modifications. We increased our target treatment zone’s vertical interval, based on the geologic core logs from continuous coring and on submitting select soil samples for analysis where we needed them. Conversely, using the same methods, continuous core logs and contaminant analysis from select samples, we were able to decrease the target treatment zone’s lateral extent, reducing the width, if you will, of the treated plume.
So we did an optimization program for that particular plume: we redistributed the original remedial solution mass through a site-specific optimized program and redefined the target treatment zone footprint. Now, in the source area, we didn’t change it, because the source area was the source area. But in the mid-plume, we reduced the footprint by about 20%, and by about 25% in the distal plume. The quantities of remedial solution were reallocated based on contaminant mass, so where the contamination was high is where we focused the vast majority of the remedial reagent. And then, once again, we modified the remedial reagent solution percentages to focus on those transport units. So if they were a little finer grained, we didn’t put in quite as much volume, and we kept our pressures low so there was no fracturing. This resulted in an improved application, not only the method, but the efficiency of contact. Essentially, you might consider the reallocation of materials a site-specific optimization program.
So here are some results from a downgradient monitoring well. If you notice, the top line is TPH; the bottom line, in blue, is benzene. In this downgradient well, TPH dropped from 3,000 ppm to below the detection limit somewhere around day 90 to 95. And roughly a hundred ppb or so of benzene was reduced to below detection somewhere around day 55. I just checked back on this project; we’re at almost 200 days of data, and we are still below detection limits for benzene and for TPH in this well.
So, some conclusions. Depositional processes exert significant control on contaminant distribution. Depositional processes are predictable and non-random, and design verification programs provide the remedial insights necessary for these processes to be understood and the associated contaminant mass remediated. So, overall, design verification improves predictability. It improves your implementation time, but also efficiency, because now you’re not placing remedial reagents where you don’t need them. You’re identifying technical blind spots, or problems, and I contend this is one of the major features. And finally, you’re improving the overall design and, ultimately, your remedial outcomes.
And with that, I’ll take any questions.
Dane: Great, thanks, Craig. We certainly appreciate that. One comment I’d like to make: I think one of the most powerful things in your presentation was that case study, because it really contrasts with how we’ve done remedial design historically, not just at REGENESIS, but as an industry. When we target treatment intervals with whatever reagent has been selected, it’s often uniformly distributed over that target treatment interval. So if groundwater is at 15 feet, in the example you showed it was around 12 or 13, down to 25 feet, the remedial designer would inject, maybe bottom-up or top-down, and try to distribute that same equal amount of reagent across that zone. And that’s really been a big change for us, being able to target the bulk of the reagent where the bulk of the mass exists.
So we’ve got several questions, and I’ll give you an opportunity to speak, Craig. I just want to let people know we do have several questions coming in, and I really encourage you to use that feature; getting your questions answered can be one of the most valuable parts of the webinar. Craig, I’ve got one lined up here that I’ll probably take, and then I’m going to hit you with a couple of technical ones. But before I do that, did you have a comment you wanted to add?
Craig: I just wanted to emphasize the notion that I think the days of 10 pounds per vertical foot across the entire screened interval of a well are really going to be closing down as we understand these principles more thoroughly and as people take on the design verification step. It's an investment in success, in my opinion.
Dane: Great. Well, I’ve got the first question. I’m going to take this one. What is the typical timing for design verification? How far in advance should it be done before the field application starts?
This is a great question. Typically or ideally, it would be at least six weeks out. That doesn’t always happen. Sometimes it’s part of the field application, but ideally for us, to be able to collect this information and make the necessary changes to evaluate the data and provide the report, we’d recommend at least four to six weeks.
To build on that, people often ask, how long does it take? What are the costs associated with it? It really depends on the site and the size or scope of the remediation. In terms of cost, it varies. In the data that we presented, it was somewhere between 3% and maybe 8% of the total remedial project cost, and, on average, it was in the $6,000 to $8,000 range to collect that data. I'll tell you, it's money very well spent. If you're one of those 10% of sites where we may change the whole remedial design, I guarantee you're going to save money on that. But just in terms of improved remedial performance, we just haven't seen a site where design verification didn't help us in some way.
All right. So the next question I have here, and Craig, this will be for you, is a question related to clear water injection tests. The question is, what volume of water is injected at each interval, and what's the total volume? And I'm assuming they mean as a rule of thumb.
Craig: You know, it's really reagent-specific, and the clear water injection testing will be based off the reagent. So if it was something like PlumeStop, it would be different than it would be for something like RegenOx or PersulfOx, and it's especially critical for the high-volume reagents. Essentially, what we try to do from a design verification standpoint is to use volumes that represent, whether we're using direct push or even injection wells, the entire volume that we would theoretically apply. If it's a nonreactive material, we'd probably go one to one. If it's a reactive material, we'd probably go 1.3 to 1, gallon per gallon, just to make sure we have a little bit of wiggle room there and make sure the aquifer can really accommodate that volume.
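As a rough sketch of the gallon-per-gallon rule of thumb Craig describes (the function name and the binary reactive/nonreactive split are illustrative assumptions, not a REGENESIS design tool):

```python
def clear_water_test_volume(design_reagent_gallons: float, reactive: bool) -> float:
    """Estimate the clear-water injection test volume, gallon for gallon.

    Rule of thumb from the discussion: match the full design volume for
    nonreactive reagents (1:1), and add roughly 30% head room (1.3:1) for
    reactive reagents, to confirm the aquifer can accommodate the volume.
    """
    multiplier = 1.3 if reactive else 1.0
    return design_reagent_gallons * multiplier

# e.g. a 500-gallon reactive-reagent design would be tested with about 650 gallons
```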
Rick: Okay, I'll build on that. The second question, and I'll take this one: are there any problems with getting regulatory approval to do a clear water injection test?
So we're injecting tap water at these sites. We're not re-injecting groundwater, so, to date, we have not had any issues, in the states where we've performed this, in terms of getting regulatory approval for that injection.
The next question is related to something that I'm passionate about, which is vertical treatment intervals and how we target reagents within them. It's a question related to direct push. How do you decide when you're going to do bottom-up versus top-down?
Craig: Rick, it's…to the group, I mean, it's really based on hydraulic conductivity. Let's just say you have a sandy unit deep and more fine-grained units up above, say with high silt or clay content. If you start bottom-up over a long injection interval, say even 10 feet, and this can happen even over a 4-foot or a 2-foot zone if you have large hydraulic conductivity ranges, then going from the more transmissive to the less transmissive material almost ensures you're going to put most of it in the more transmissive zone, because of the resistance that's required to move fluids into the lower hydraulic conductivity zones. And as you know, fluids take the path of least resistance, so that would be a top-down indicator.
The reverse of that would be the same: I would do bottom-up if I tended to have more coarse-grained material higher in the section and finer-grained materials lower in the section. I'm a big fan of shorter intervals. They give you more control over where you're putting reagents. The longer these treatment intervals are, the less control you have over where you're putting reagents, so that's just a given.
Rick: Okay. We're getting lots of great questions. This one, I wish I could pose to the group. It's a good one, but you have to put your consultant hat on here for a second, Craig. Their comment: is it difficult to convince clients, and by client I mean the end user, the payer, to install more borings and wells to characterize the hydrogeologic conditions? And I think their question is, what's your basis? How do you provide a technical rationale to the payers to say, "Look, these tests are worth it"?
Craig: Well, I mean, I think it's fundamentally based in the notion that you're avoiding application of reagents where they aren't needed. It goes to what we talked about: the characterizations done today are for a different set of reasons than why you did the remedial investigation. You're trying to find mass in the mass transport units. If you don't identify those, and you're not focused on them as part of your characterization steps, then you're really trying to design a system that you don't understand. And you're going to save time compared with covering a 20-foot section, which is what happens when, during characterization, you screen across a wide interval because you didn't quite understand what groundwater was doing and where the mass flux zones were.
And therefore, once you do something like a design verification step, you've now rifle-shotted it into maybe a four-foot zone. So not only does this cut the time to apply reagent, it's also putting it where you need it, not where you don't. You save time and you save money, because your application crew isn't out there as long, and you're saving the cost of the product itself.
Rick: I'd like to take a step back. This is a quick anecdote, and you'll support me on this. In the mid-2000s, when we were starting our in situ chemical oxidation work and we weren't doing design verification, it was not rare at all that we would go out to a site, inject an oxidant, and see surfacing…just see signs of a lot more mass present. And at those sites, we wouldn't see the results that we'd expected. You'd go back, collect additional samples, and say, "Wow, there's a ton of mass." Or maybe there's even free product. And you and I have had conversations like, "This is a really expensive way of doing site characterization, injecting oxidants and then defining hot spots and source areas." This testing is usually one to two days in duration using a simple Geoprobe in most conditions, so it's a real simple way of collecting that data.
All right, we've got a few more here. Let me see if I can find one to take off the list. I like this one because this is my style. They started by saying, "Thanks for the presentation. You indicated that one should not proceed with the remediation if the volume was too much. What the heck does that mean? What is too much?"
I think what they’re referring to is reagent volumes. Craig, you want to take a stab at that one?
Craig: Sure. Maybe that was poor language selection on my part. I would say, try to think of it as: can we fit the remedial reagent? Are we under a hydraulic limitation? Can we fit the reagent volumes, in the necessary quantities at the proper solution percentages, into the target treatment zone? If the volume is too much, then you're going to be fracking, or you're going to see surfacing just from mounding. So you have to keep those things in mind when trying to fit the reagent quantity, in terms of both mass of material and volume. Does that make sense?
Rick: I'll take a stab at that one, too. In our remedial designs, we make assumptions; we might assume that we can get three to five gallons per minute via direct push. If we go out and the design verification, on our clear water injection tests, tells us, "Hey, you're going to have a hard time getting one gallon per minute," that means we would ask, "Is direct push the right approach here? Maybe we need to do permanent wells." So when you talk about reagent volumes and what's too much, it's really related to how much volume we need to get into the subsurface and how long it's reasonably going to take to get it in the ground.
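Rick's "how long will it reasonably take" check can be sketched as simple arithmetic (the function name and the 8-hour field day are assumptions for illustration):

```python
def injection_field_days(total_gallons: float, rate_gpm: float,
                         field_hours_per_day: float = 8.0) -> float:
    """Estimate field days needed to inject a volume at a sustained rate.

    A design that pencils out at 3-5 gpm can become impractical if the
    clear-water test shows the formation only accepts ~1 gpm.
    """
    if rate_gpm <= 0:
        raise ValueError("injection rate must be positive")
    minutes_needed = total_gallons / rate_gpm
    return minutes_needed / (field_hours_per_day * 60)

# 4,800 gallons at 5 gpm is 2 field days; the same volume at 1 gpm is 10 field days
```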
Craig: Agree.
Rick: So the next question is, can you discuss how design verification differs from a pilot test? I'm a huge proponent of pilot tests. I use those as feasibility tests, really.
I would just say design verification is not reagent injection. We're not going out there and measuring treatment results. These are simple tests to help us define the vertical treatment interval, to identify high mass flux zones, and to ensure that the contaminant concentrations we're basing our reagent amounts on are in line. So it is different from a pilot test, both in terms of duration and in the fact that we're not collecting additional contaminant data in most cases.
Craig: Hey, Rick, I'd like to add to that. In pilot scale testing, you're not really figuring out whether that pilot test is actually optimized for that site, whether it has the proper zone identified, etc. A pilot scale test is almost like a dose-response. Now, if it's a very simple site, okay. But if it's heterogeneous, you may have missed locations, and your pilot test might not perform as you expect, and you won't know why.
Rick: There’s a quick one on the use of packers. Do we use packers to target specific treatment zones?
Craig: We absolutely do, and there’s a whole host of reasons to do it. But wells have to be installed properly to accommodate packer applications, but that’s a whole different discussion. But, yes, they’re quite effective and I’m a big proponent of using them.
Rick: Fine. We have a question here based on our experience. What is the approximate cost of design verification studies? And, based on the graph you presented, over 70% of the time you did not change the remediation designs.
We may not have presented that clearly. In fact, about 62% of the time we do change the remedial designs. Those changes can be changes in injection intervals or reagent volumes, and most of the time it doesn't result in a change in cost. Keep in mind, in the data we presented, and that was on 28 sites, 10% or 11% of the time we canceled the injection and said, "Hey, we need to take a step back," or we made recommendations for alternate approaches or alternate technologies.
Total cost is usually around 3% to 10% of the project; that's a very broad rule of thumb. We're only talking about one to two days in the field with a Geoprobe unit, and you know how much Geoprobes cost per day. Something in the $6,000 to $8,000 range would be a typical cost for a design verification.
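The rule of thumb above can be put as a quick sanity check (the function name and the example project figures are hypothetical, not from the presented data):

```python
def dv_cost_fraction(dv_cost: float, total_project_cost: float) -> float:
    """Express design-verification cost as a fraction of total project cost.

    The speakers' broad rule of thumb is roughly 3% to 10%, with a typical
    study landing in the $6,000 to $8,000 range.
    """
    if total_project_cost <= 0:
        raise ValueError("total project cost must be positive")
    return dv_cost / total_project_cost

# a $7,000 study on a $100,000 project is 7%, inside the 3% to 10% rule of thumb
```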
Again, rule of thumb. I'll grab a few more here. We're probably not going to get to all of them, but I want to try to capture…
Any comments on…this is for you, Craig. Any comments on accuracy of data using direct push characterization tools such as HPT versus more traditional methods of soil logging or laboratory analysis? I’m sure they probably want to add hi-res like MIP, things like that.
Craig: I do, and I really think those are great tools. And I'm not saying you should do one or the other. In many cases, if the client and the setting are appropriate, I'm a big proponent of using the high-resolution tools in the toolbox, including MIP, LIF, all the tools that are available to you based on your site conditions. But I believe there's always something to be learned from ground truthing. Even CPT logs can get skewed. If you ground truth them with a single core, where a geologist is looking at it and correlating the instrument's responses to the soil types across that stratified zone, you get a sense of confidence. It's similar to well-tied geophysical logs in the oil field, where they have real log data alongside the electronic signature, and they can track it.
But short of that, I’m very bullish on it and I think it’s really helpful in untangling some of these concepts that I was talking about, between the fine and coarse grained units.
Rick: So Dane, I'm going to look for some guidance from you. I know we're coming up on the hour. Do we have time for one more question?
Dane: Sure, yeah. We have a lot of questions, so we're not going to be able to get to them all in the time permitted, but, yeah, I think one more question is fine.
Rick: Okay. So Craig, I’ve tried to pick the hardest one, so this will be the…
Craig: Okay, great.
Rick: Are you emphasizing putting more substrate/reagent where the contaminant mass is located or where the contaminant mass flux is occurring? In your experience, are they typically the same or different?
Sorry, give me a minute. I kind of messed up reading that question, so I'm going to repeat it one more time.
Are you emphasizing putting more substrate where the contaminant mass is located or where the contaminant mass flux is occurring? In your experience, are they typically the same or different?
Craig: Well, I'll answer the second question first. They are different. And what I emphasize depends on the strategy and the very specific site objectives and goals. I always think that if you want to shrink a plume rapidly, you must contact the mass flux zones, the zone that carries 90% of the mass and probably occupies 10% of the aquifer. If you identify that and you're able to address it with an efficient application, a plume will shrink rapidly. As far as putting it where the mass is, and this is a whole different kettle of fish, in the source area I would advocate absolutely putting it right where the mass is.
Now, if you get into really heterogeneous environments where you have back-diffusion, etc., that's a different philosophy and an approach that I would have to talk with them about. But, essentially, I'm interested in mass flux in the body of the plume, and I'm interested in putting it right on the contamination in, or proximal to, the source area.
Rick: Great. Well, Dane, Craig and I will turn it over to you.
Dane: Alright. Thanks, Rick. And so we had a lot of questions today so if we didn’t get to your question, we’ll make an effort to follow up with you after the webinar. But that does conclude our presentation for today. Just a couple of reminders.
First, you will receive a follow-up email with a brief survey. We really appreciate your feedback, so please take a minute to let us know how we did today. And you'll also receive a link to the recording of this webinar as soon as it is available.
So thanks very much again to our presenters Mr. Rick Gillespie and Mr. Craig Sandefur, and thanks to everyone who could join us today. Have a great day.