In this presentation, originally given at a workshop presented by NGWA, US EPA, and REGENESIS®, Jeremy Birnstingl, VP of Environmental Technology at REGENESIS, briefly explores the driving pressures and evolutionary background behind the widespread single-technology design predisposition still evident across much of the industry, and outlines the technical basis of its inherent shortcomings in the dynamic and heterogeneous context of an impacted aquifer undergoing cleanup. The physicochemical principles favoring the use of integrated remedial approaches, both spatially and temporally, are summarized, and practical indicators for determining optimal inflection points for technology changeover are outlined. The case is made for incorporating integrated design considerations, with objective technology changeover trigger points, into the initial remediation approval process, thereby securing efficiency and cost benefits for all stakeholders.
I’m Dr. Jeremy Birnstingl. I’m Vice President of Environmental Technology at REGENESIS, where I’ve worked for the last 13 years. I have a PhD in bioremediation from the University of Lancaster. I’m a biologist by primary training, and I’ve been working in the bioremediation sector for about 27 years now, in academia, in consulting, and actually in the defense sector too.
What I want to talk about here is some of the synergies in how technologies couple together. I want to give a very simple road map to frame this, and then I want to get into some of the quantified benefits. There will not be as much technical detail as we’ve seen in some of the talks this morning; this is more of a step back to get an idea of some of the trends.
Combined remedies, integrated treatment. The core thesis could really be summarized as follows. All remediation technologies have strengths and weaknesses, and these differ from one technology to another. Employing technologies in suitable combination can therefore enable strengths to be combined and weaknesses to be overcome. This, in turn, can increase efficiency, improve performance, and thereby save time, money, and resources. This is one of the core principles of the Combined Remedies Initiative that is bringing you the workshop today, made up of US EPA, NGWA, industry, academia, and so forth.
I actually came across this much earlier. I came across it when I was perhaps one and a half or two years old, and it was taught to me by my grandmother or my mother on her knee. It went like this: “Jack Sprat could eat no fat, his wife could eat no lean. And so between them both, you see, they licked the platter clean.” It comes out of Mother Goose and still remains relevant today.
So, to get into my talk proper, I’m gonna start, logically, with a London Tube map. Now, a lot of commuters in London start their day with a map. But here, it’s not so much the map that I’m after; it’s one of the principles that I want to draw from it. I want to use the idea of a simple road map that provides a place and a context for the optimal use of the principal technology sectors that we have at our disposal. And the first thing I have to say about this map is that it’s wrong. It’s wrong.
The scale is completely mixed up, it’s completely inconsistent. It shows straight lines where there are curves, it shows curves where there are straight lines. It’s completely mixed up, and yet millions of people use it every day to successfully navigate a complex modern metropolis. So, as engineers, we gravitate to the detail, but sometimes, when we step back from the detail, we can get greater clarity on what’s going on, not only for ourselves but particularly for communication with others: clients, engineers, whoever it may be.
So, remediation technology integration, I would argue, has three pillars. One would be concentration-based integration. Another would be matrix-driven integration. And these can be combined to create efficiency-based integration, which will be the focus of the road map, and then I’m gonna move on to some quantified case studies. Let me run through the first two quickly.
Concentration-based integration could be summarized on a table napkin or a sheet of paper, something like this. Generally, if you look at physical engineered systems, the cost of running them doesn’t actually change that much with the concentration of contaminant that you’ve got: same O&M, same power cost, whatever it may be. It may be more carbon or whatever, but not so much. If you use technologies that require electron donor or acceptor reagent additions, the line is much steeper. The more contaminant you’ve got, the more reagent you’re gonna require. And the point is that these lines cross. Where they cross will depend on the particular case in point, but it’s generally somewhere up around about the smear zone, you might say.
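As a side note, the crossover logic can be sketched in a few lines of code. This is a minimal illustration, assuming simple linear cost curves; every coefficient below is a hypothetical placeholder rather than a figure from the talk.

```python
# Illustrative sketch of the cost-versus-concentration crossover described above.
# All coefficients are hypothetical, chosen only to show the shape of the argument.

def physical_cost(conc_mg_l: float) -> float:
    """Engineered physical system: cost dominated by fixed O&M, nearly flat with concentration."""
    fixed_om = 100.0   # fixed O&M cost (hypothetical units)
    slope = 0.05       # weak dependence on concentration (e.g., carbon changeouts)
    return fixed_om + slope * conc_mg_l

def reagent_cost(conc_mg_l: float) -> float:
    """Reagent-based (bio) approach: cost scales with contaminant mass via reagent demand."""
    base = 20.0        # mobilization/injection baseline (hypothetical)
    per_mg = 1.5       # reagent cost per mg/L of contaminant (hypothetical)
    return base + per_mg * conc_mg_l

# Where the lines cross: below this concentration the reagent-based approach
# is cheaper; above it, the physical system wins.
crossover = (100.0 - 20.0) / (1.5 - 0.05)
print(f"Crossover at about {crossover:.0f} mg/L")
for c in (1, 10, 50, 100, 500):
    print(f"{c:4d} mg/L  physical={physical_cost(c):7.1f}  reagent={reagent_cost(c):7.1f}")
```

The crossover point moves with the site-specific coefficients, which is exactly why the switch point depends on the particular case in point.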
What this means is that, below a certain concentration, a lot of time and a lot of money can be saved by using a biological approach over a physical approach. So, this gives rise to logical spatial integrations, where the source might be treated with a treatment train (physical, chemical, biological) and the plume might be treated with bio, maybe with chemical up front. That way, the clean-up can be spatially optimized, as we’ve seen in some of the examples earlier today. It can be sequential, it can follow a mass reduction, but it can also be matrix-related. So, let me speak briefly to matrix-driven integration.
Often, we’ve seen clean-up from pump-and-treat systems become asymptotic, and we hope it’s gonna flatline below the clean-up target. Normally, it flatlines above the clean-up target. The cost per kilogram of contaminant removed, though, starts to mount up, because the operating costs stay the same. So, if the same money is going in but a smaller and smaller amount of mass is coming out, then what was very efficient at the early stage becomes less efficient at the later stages.
This can best be visualized through some of the images we saw in this morning’s talks or, indeed, in the sandbox from Tom Sale pictured here. Here we’ve got the contaminant, shown as fluorescein, diffusing into some of these clay lenses. When we turn on that pump-and-treat system, we can pump out a lot of the mass, and we get back diffusion bleeding contamination out. What that tends to look like on a typical site may be something like this, in a very gross sense. We’ve got recoverable mass, which was the fluorescein that could be moved. We’ve got a clean-up standard here, but we’re not getting there because of back diffusion.
So, a typical integration might be this: we might use a physical system for the gross mass removal, and we may use a biological approach for cleaning up the last little bit. Why? Because one is essentially advective in how it works, while the other can be more diffusive in how it works, depending on the bio reagents you use. If you use slow-release reagents, they can diffuse some of their active ingredients into the matrix and take some of the contamination out there.
More recently at REGENESIS, we’ve been doing this a different way. We’ve been using a product called PlumeStop, which is a liquid activated carbon, and the way that works is simply this. We inject the material into the formation. Magnifying it up here, we get a coating of very small carbon particles. It injects like an ink, it flows freely like a fluid, it coats the formation, and it captures contamination that is back-diffusing out of the secondary porosity. Once the contamination is captured, bacteria growing on the carbon break it down, so that we’ve got a generally self-purifying system in the carbon coating that we’ve put into the formation. It’s a bit like turning the aquifer itself into a Brita filter.
When the material coats the sand particles, the carbon comes as very small units, about one to two microns in size; they look something like that. The idea, then, is that we can inject it in the ground and get a wide-area distribution. Contaminants adsorb onto it, we get biofilm growth, contaminant degradation, and so on; we don’t have time to go into all of that on this slide. Adsorption sites on the carbon get regenerated because the contaminant adsorbed onto them is being biodegraded. And therefore, with further influx or back diffusion, we can continue to capture the contaminant, and this will go round and round and round.
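That capture-and-regenerate loop can be expressed as a toy mass balance. This is purely an illustrative sketch, not a REGENESIS design model; the capacity, influx, and degradation rate below are hypothetical placeholders.

```python
# Toy mass balance of the capture-and-regenerate cycle: contaminant back-diffuses
# into the treated zone, sorbs to the carbon coating, and biodegradation of the
# sorbed mass frees sorption capacity for the next influx. All numbers are
# hypothetical placeholders, not measured parameters.

capacity = 100.0     # total sorption capacity of the carbon coating (arbitrary units)
sorbed = 0.0         # contaminant mass currently held on the carbon
influx = 5.0         # back-diffusion influx per time step
biodeg_rate = 0.08   # fraction of sorbed mass biodegraded per time step

for step in range(1, 51):
    free_sites = capacity - sorbed
    captured = min(influx, free_sites)    # sorb what the remaining capacity allows
    escaped = influx - captured           # breakthrough only if the coating saturates
    sorbed = (sorbed + captured) * (1.0 - biodeg_rate)   # degradation regenerates sites
    if step % 10 == 0:
        print(f"step {step:2d}: sorbed={sorbed:5.1f}, escaped this step={escaped:.1f}")
```

With these placeholder rates, the sorbed mass settles at a steady level well below capacity, so the coating keeps capturing the back-diffusing flux indefinitely; if the influx outpaced biodegradation, the coating would eventually saturate, which is the intuition behind matching the dose to the flux.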
Moving on to efficiency-based integration. We looked earlier at a graph of cost against concentration. I want to look instead at efficiency against concentration, and I want to define efficiency as the amount of contaminant removed per amount of money spent. Now, before I go any further, efficiency isn’t everything. Sometimes, in the interest of time or of other factors, it can be the right thing to do something that’s inefficient, and to use a technology that’s not working at its best but might provide advantages in other ways. So, efficiency is contamination removed for money spent. That doesn’t necessarily mean the most efficient technology is the best approach at a given time, so there is flexibility here.
But the argument goes like this: physical systems tend to show their best efficiency at high concentrations, in terms of the amount of mass you can remove for the money spent: free product recovery, good initial mass removal. If you’ve underestimated the product volume, it doesn’t really matter; you just pump more out, and out it comes quickly. But at the bottom end of the scale, it’s plagued with diminishing returns, flatlining performance, rebound from back diffusion, and so on. Bio, on the other hand, can be simple, can have low treatment cost, can eliminate rebound, can have minimal disturbance, and can have broad matrix applicability down at the lower end of the concentration range. But it can struggle up at the higher concentrations, with excessive reagent costs and excessive timeframes.
So, these simple Tube map lines cross. And they tend to cross, again, pretty much where there’s smear. But what you’ll notice is there’s a dip in the middle. At a certain point, when the concentration is being reduced by the physical system, say an extraction system, there comes a point where it’s not quite pulling mass out as fast as it was before, but it hasn’t completely failed. And at the same time, at the top end of the scale, bio can still be working, but maybe not quite as efficiently as it might if there were no smear or free product present.
So, going back to the concentration slide that we looked at before, the way this can be addressed is by simply putting another line on here. Chemical oxidation tends to sit somewhere in the middle. Physical systems at the top end will always beat ISCO, pulling free product out of the ground, for example. Chemical oxidation is generally faster at higher concentrations because of the greater collision frequency between oxidant and contaminant. And bio, down at the bottom end, can be more efficient than ISCO at lower contaminant concentrations and can be a means of reducing rebound, depending on the approach that’s used.
So, putting it all together, those lines come together something like this: we get chemical oxidation in the middle, comfortably plugging the gap but really outflanked on either side in efficiency by the bio and physical approaches. This gives us a very logical way of segueing from one technology to the other, where we can go from physical, as it starts to lose its efficiency, into chemical oxidation and then into bio. So, there’s a very simple Tube map, which I’d like to explore through some of the following case studies.
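Expressed as code, the road map amounts to a simple selector over concentration bands. This is a minimal sketch with hypothetical thresholds; in practice, the switch points fall wherever the site-specific efficiency curves cross.

```python
# Minimal sketch of the road-map selector: pick the technology sector that the
# efficiency curves favor for a given concentration. The band boundaries below
# are hypothetical placeholders; real switch points are the site-specific
# efficiency crossovers.

FREE_PRODUCT_BAND = 40.0    # mg/L and above: smear/free product (hypothetical)
CHEM_TO_BIO_SWITCH = 10.0   # mg/L: chem-to-bio changeover trigger (hypothetical)

def select_technology(conc_mg_l: float) -> str:
    if conc_mg_l >= FREE_PRODUCT_BAND:
        return "physical (e.g., extraction)"        # best mass removal per dollar up high
    if conc_mg_l >= CHEM_TO_BIO_SWITCH:
        return "chemical oxidation (ISCO)"          # plugs the mid-range efficiency dip
    return "bio / enhanced natural attenuation"     # cheapest at low concentrations

for c in (120.0, 25.0, 2.0):
    print(f"{c:6.1f} mg/L -> {select_technology(c)}")
```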
Now, Jim described two or three different types of remediation integration. He talked about Type 1, Type 2, and Type 3, which was the package deals. One of the types, I think it was Type 2, was where one technology had sort of failed, or got as far as it could, and then really just got replaced by another technology, but not through advance planning. This is an example of that, but for reasons that are actually respectable.
It’s a solvent plume, it’s TCE, it’s 51 acres, it’s in Wisconsin. The consultant on the job was Symbiont. By way of background, it had been an active metalworking site with a bunch of historic spills. A plume of TCE up to about 100 milligrams per liter. I’ve mentioned the size already, about 51 acres. It’s in a sugar sand overlying compacted silt, low after [inaudible 00:11:29]; I don’t really believe that means an awful lot. Groundwater is at about 20 feet. A hydraulic containment system was put in in about 2000, extracting about 70 gallons per minute and putting the liquid through an air stripper and activated carbon. All very familiar stuff. O&M was about $200,000 a year; energy cost, back at the time, about $10,000 a year.
After the first seven years, so we’re now at about 2007, it had run efficiently at more than 90% capacity, hydraulic containment was successful, and about a ton, about a thousand kilograms, of VOC had been removed. A performance review at seven years identified that it probably wouldn’t reach the target within the next 30 years, and “within the next 30 years” can quite possibly mean within the next 100 years. So, essentially, what we’re getting is this: we’ve had a good initial clean-up and we’re flatlining before the clean-up standard. We’ve got a physical system that was working well but is struggling now as the concentration gets lower.
The strategy shift was to move to an integrated program. The existing physical recovery was gonna segue into ISCO, which was gonna segue into enhanced natural attenuation and then monitored natural attenuation. So, pretty much the sequence that we can see in the Tube map here. The chemical oxidation used a catalyzed sodium percarbonate reagent. This has to do with lining up the next shot, as we heard in Jim’s talk. Here, we’ve got an ISCO reagent that isn’t gonna leave a bunch of manganese in the ground and isn’t gonna leave a bunch of sulfate in the ground. There’s nothing necessarily wrong with those, other than that if you’re gonna go for bio afterwards with the chlorinated solvents, then the sulfate and the manganese are going to mess with your electron donor supply, because they’re gonna behave as competing electron acceptors.
RegenOx is percarbonate-based and doesn’t leave such residues. This was applied through 43 core wells. The injection campaign was followed by an equilibration period, and the program then shifted to enhanced natural attenuation using some high-strength, highly dispersive, slow-release electron donors through the same wells and then 100 more wells through the plume core.
In terms of performance, the physical system had removed about 1,000 kilograms. The chemical phase secured about 78% to 100% concentration reductions, achieving the switch points in all of the wells by the end of the equilibration period. The bio phase was applied in stages between 2008 and 2010. It’s still ongoing, because the donors are petering out now at the end of their supply, and it’s gonna run for a period before it can be considered closed. But we achieved something like 93% concentration reductions in the first two years, which allowed the physical treatment system to be left on standby initially and then decommissioned. So, this is the phys-chem-bio change.
The time saving, then, based on the 30-year projection of 2007, would be more than 25 years, based on a project completion forecast for this year. The cost savings, based on the 2011 rates, would be something like $6 million, the waste savings would be $2 million, and the energy savings would be $300,000 from not running the pump-and-treat system, factoring out the other reagents. These give some indication of the benefit. It’s not that the first technology was wrong; in 2000, that was much more of a standard technology to use, and the frame shifted as time went on and new technologies became proven and used.
The next one that I’d like to drill into is a project in Sweden. This is an interesting site, and I’d like to look at it more deeply. This is an example of remediation integration that was designed from the outset. And in this one, we’ve got an example where we can actually look at how the different technology costs stack up, combined versus used as standalones, and this was done as part of the design.
By way of background, the site is a military fuel storage facility. There have been a number of historic spills over its history. It’s actually a secret facility, although everybody in Sweden knows where it is; that’s Sweden. The largest spill followed an explosion in 1958: 14 million liters, that’s 3.7 million gallons, of fuel spilled out of the facility. Now, to put that in perspective, that’s about 500 of those. That’s enough to drive your car around the circumference of the Earth 3,000 times, or 5,000 times if you’re in a Swedish car. About 6 million liters, 1.6 million gallons, escaped the facility, that’s 200 of those, and flowed down a forested hillside and into a lake. A lot of jet fuel, a lot of gasoline, a bit of diesel.
So, we’re in the 1950s; let’s clean it up. Well, the 1950s remediation was really quite straightforward: they set the lake on fire. Then there was a lot of other hydrocarbon that hadn’t quite made it to the lake and was forming a lake of its own on the shore. For that, there was a simple answer: they covered it with soil and turned it into a children’s play area. I’m not gonna say anything about Love Canal. It sat like that for a long time.
In 2012, a new strategy was tendered, specifying excavation, bio-sparging, and hydrogen peroxide. It was won by a contractor on those grounds, but after the award, it was adjusted on the grounds of safety. If you think about it, a million liters of hydrogen peroxide, that’s 264,000 gallons, delivered to a secret location in the woods in Sweden is going to attract attention, beyond the health and safety issues; these are the days of heightened security, etc. Also, working on a boulder field on a forested mountainside made delivery fairly challenging: preferential pathways, impacts on vegetation.
The revised strategy was basically to use controlled-release alternatives built on the same fundamental technologies. So, we had enhanced multi-phase extraction. We had ligand-stabilized hydrogen peroxide, which is the reagent we had before, RegenOx, the catalyzed sodium percarbonate; at its core, it’s basically hydrogen peroxide with slow release. We had an injectable slow-release oxygen source, ORC-Advanced, to use instead of the bio-sparging. And an integrated strategy, as we’ve seen before.
The site was broken up into multiple zones, which looked something like this. The MPE zone, where the concentrations were worst, was about 4,000 cubic yards. ISCO then followed that, but also had another 13,000 cubic yards of its own. Bio then followed the previous MPE zone and the previous ISCO zone, and had another 43,000 cubic yards of its own.
These were the switch points that were proposed. The physical extraction was to remove the free product. Chem then followed, to go from the sheen down from an average of about 40 ppm to 10 ppm. The fat end of the bio started at about 10 ppm, going down to a target of about 500 micrograms per liter from an average of about 4,000 micrograms per liter. Broken down, there were actually different concentration bands with different doses, so there was a bit more to it than the basics that I’ve shown you.
So, let’s look at the technology costs applied to each of those zones: MPE, ISCO, and bio. Let’s look at what those costs would be if each technology were used alone, and then what the cost would be simply to get from where you start to the next switch point, the next concentration band. So, let’s look first at the one-size-fits-all approach that might come from, say, an options-matrix approval, where just one technology was selected for the site.
If we look at MPE in the MPE zone, the unit cost was gonna be something like $88 per cubic yard over about 4,000 cubic yards; there’s the price. Very generously, if we assumed that rather than running for the six months here, it was gonna run for nine months in the ISCO zone and go from, I think it was, 40 ppm down to 10 ppm, from smear, in nine months, that’s the price you’d get. And if it was run for another nine months after that, to go from an average of 10 ppm down to half a milligram per liter with no rebound, that’s what the cost would be. These are very generous assumptions for MPE. In its own zone, of course, it’s the cheapest.
ISCO takes a similar approach. If we look at the ISCO requirements for the different concentration bands, we see something like this. In its own zone, it comes out equal cheapest. The same dose would be used in the bio zone, but the actual total mass destruction achieved there, through diminishing returns, would be much less. Bio, well, at the lower concentrations it’s naturally gonna be the cheaper approach, because of the smaller amount of oxygen, and hence the smaller amount of reagent, that needs to be supplied. As the concentrations go up and the mass increases, the cost of bio ramps up accordingly.
So, what we can see here is that each technology is the cheapest option in its own silo, as you would expect: high concentration, medium concentration, low concentration. But the costs as standalones in each silo remain considerable; we’re looking at around 10 million, 7 million, or 12 million if any of them were used as standalones. So, logically, what’s gonna happen if we combine them and use each of them in its sweet spot? This is what it would look like. These are the costs of each of the technologies used solo, and these are the costs of each technology used just in its power band. These are the numbers that we just saw on the previous screens.
What we can see here is that the combined-remedy total, the total cost of clean-up using the technologies in synergy, is cheaper than the cost of using any one of the technologies as a magic bullet, and by quite a significant amount. The scale of the saving? Well, if you run the numbers, it looks something like this: we would be saving about 63% over a very generous multi-phase extraction assumption, about 50% over ISCO, and about 68% over bio alone, although I think that last one is grossly understated, because bio simply would probably not have worked at the top level. The treatment-train approach comes in at about $4 million, 3 million or so euros at the time.
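As a rough cross-check, here is the arithmetic behind those percentages, using the rounded standalone totals just quoted against the roughly $4 million combined-remedy total.

```python
# Back-of-envelope check of the standalone-versus-combined comparison, using the
# rounded figures quoted in the talk (all values in $M, approximate).

standalone = {"MPE": 10.0, "ISCO": 7.0, "bio": 12.0}  # each technology used site-wide
combined = 4.0                                         # each technology in its power band

for tech, cost in standalone.items():
    saving = 100.0 * (1.0 - combined / cost)
    print(f"Combined remedy vs {tech} alone: about {saving:.0f}% saving")
```

With these rounded inputs, the arithmetic gives roughly 60%, 43%, and 67%; the quoted ~63%, ~50%, and ~68% come from the exact slide values, so the small gaps simply reflect rounding.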
“MPE should be used, as free product is present,” if we were choosing one technology; I’ve heard things like that said. “ISCO should be used, as it presents the lowest cost,” there it is right there; I’ve heard that said before. “Bio is too expensive and too slow.” I’ve heard that said. Each of these misses the point that the combination can save money.
So, how does efficiency compare in each of these zones? Well, here are the costs in the MPE zone, where we look at how the costs compare for the same actual treatment: ISCO is about three times the cost, and bio is about 12 times the cost, of MPE. In the ISCO zone, where we’ve got an average of 40 ppm and we’re trying to get down to 10, ISCO has the home-field advantage and would cost $87 per cubic yard compared to its bedfellows. Again, to get from 4 ppm down to about half a ppm, these would be the costs projected for MPE and ISCO; bio is the cheapest. So, putting them together, we have something like this. And, essentially, those are the worked numbers based on this road map.
Now, I’m tempted, because I’m the chairman, to carry on talking, but I should really stop at this stage and leave some generous time before segueing into the rest of the presentations. This is a good point to slow down. So, with that, I’d like to thank you for your attention, and I’m happy to open up to any questions that you may have. Thank you very much.