The Weekly Carboholic: Gas industry's own fracking studies don't support industry claims

Posted on July 15, 2009


“Fracking” is the slang term for hydraulic fracturing, a process in which the gas industry injects a slurry of largely undisclosed composition into a gas well in order to break up the rock and release the natural gas contained within. At present, fracking is exempt from regulation under the Safe Drinking Water Act (SDWA), but Representative Diana DeGette of Colorado has introduced legislation in the House (H.R. 2766) to bring it under EPA regulation. In response, the gas industry has pushed back with studies that purport to show that regulation is both unnecessary and costly.

A new article by ProPublica, an “independent, non-profit newsroom that produces investigative journalism in the public interest,” shows that the very studies the industry is using to oppose fracking regulation actually undercut the industry’s own arguments.

The gas industry claims that there is already sufficient regulation and oversight of fracking at the state level. The ProPublica article contests this claim, pointing out the following:

In fact, the report calls for some of the same measures found in the congressional bill the industry is so hotly contesting.

Regarding fracturing in areas close to the surface or near shallow aquifers, the report reads: “States should consider requiring companies to submit a list of additives used in formation fracturing and their concentration.” It also says that shallow fracturing very close to certain drinking water aquifers “should either be stopped, or restricted to the use of materials that do not pose a risk of endangering ground water and do not have the potential to cause human health effects.”

The additives issue is specifically addressed in HR2766, just as the ProPublica article says:

In subparagraph (C) of paragraph (1) insert before the semicolon ‘, including a requirement that any person using hydraulic fracturing disclose to the State (or the Administrator if the Administrator has primary enforcement responsibility in the State) the chemical constituents (but not the proprietary chemical formulas) used in the fracturing process’. (Section 2(b)(1))

The bigger problem is that, according to the article, “21 of the 31 states listed do not have any specific regulation addressing hydraulic fracturing; 17 states do not require companies to list the chemicals they put in the ground; and no state requires companies to track how much drilling fluid they pump into or remove from the earth — crucial data for determining what portion of chemicals has been discarded underground.”

So much for “the states do a great job regulating fracking already.”

As to the cost question, the study that supposedly shows that complying with the SDWA would cost about $100,000 per gas well has a number of major flaws. For example, the study relies on data that is 10 years old, it estimates costs for tests that aren’t required by the SDWA, and the vice president of the group that conducted the study (who was interviewed for the ProPublica article) believes “that many of the processes listed in the report are already being practiced to a greater degree than they were in 1999, meaning that even if they were required they may not be additional burdens at all.”

An estimate produced by Deutsche Bank analysts found something radically different from the industry’s preferred studies:

If all the testing that Godec includes is factored out, the regulations would cost the industry just $4,500 per well, according to his report, or just six hundredths of a percent of the cost of establishing a typical new well.
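
Since the two cost estimates differ by more than an order of magnitude, it's worth checking that the quoted numbers hang together. Here's a quick back-of-the-envelope sketch in Python; the typical-well cost is inferred from the quote, not taken from either study:

```python
# Back-of-the-envelope check on the per-well cost figures quoted above. The
# typical-well cost isn't stated directly, so it's inferred here: $4,500 is
# said to be "six hundredths of a percent" of the cost of a new well.
compliance_cost = 4_500          # dollars per well, extra testing factored out
fraction_of_well = 0.06 / 100    # six hundredths of a percent

implied_well_cost = compliance_cost / fraction_of_well
print(f"Implied cost of a typical new well: ${implied_well_cost:,.0f}")
# -> $7,500,000, a plausible order of magnitude for a deep gas well

industry_estimate = 100_000      # dollars per well, per the industry-backed study
print(f"Industry estimate is {industry_estimate / compliance_cost:.0f}x higher")
# -> 22x higher
```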

The jury’s still out on whether fracking is a threat to water supplies (anecdotes are not data), but one thing is abundantly clear: the industry didn’t do itself any favors by misrepresenting and/or cherry-picking study data and findings in order to oppose federal fracking legislation.

———-

Can concentrating photovoltaics compete with solar thermal and standard photovoltaics?

Photovoltaic (PV) electricity is notoriously inefficient. The theoretical maximum efficiency for a simple PV cell under unconcentrated sunlight (one sun of irradiance) is 31%, well below the efficiency of the best coal-fired generation. More complex PV cells achieve greater efficiencies by absorbing multiple bands of the solar spectrum or by concentrating the incoming sunlight. While there have been some interesting recent developments in solar power, such as so-called combined-cycle solar, those developments aren’t intended for utility-scale electricity generation. A new technology reported by Greenwire has the potential to provide gigawatts of electricity – concentrating photovoltaics (CPV).
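
For readers curious why concentration improves a cell's efficiency at all: in the ideal-diode picture, the short-circuit current scales linearly with concentration while the open-circuit voltage grows with its logarithm, so the conversion efficiency inches upward. A minimal sketch of that textbook relationship (not a model of any particular CPV product):

```python
import math

# Ideal-diode sketch: concentrating sunlight C times raises a solar cell's
# open-circuit voltage by (kT/q) * ln(C), one source of CPV's efficiency edge.
KT_OVER_Q = 0.0259  # thermal voltage at ~300 K, in volts

def voc_gain(concentration):
    """Open-circuit voltage gain (volts) from concentrating sunlight C times."""
    return KT_OVER_Q * math.log(concentration)

for suns in (1, 10, 100, 500):
    print(f"{suns:>4} suns: Voc gain ~ {voc_gain(suns):.3f} V")
# At 500 suns the gain is roughly 0.16 V per junction, which is why
# concentrator cells post higher efficiencies than flat-plate cells.
```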

The point of CPV is to make solar electricity cheaper. Compared to solar thermal (which concentrates sunlight on a tower to boil water and drive a turbine), CPV uses much less water and has a more distributed footprint. Given that the regions best suited to solar power also tend to be short on water, cutting water consumption by over 99% is a huge deal. In addition, environmentalists are already getting concerned about large swaths of desert being converted into solar thermal and standard PV farms, with the accompanying environmental degradation and loss of wild space. CPV installations, on the other hand, are more like wind turbines: they can be spread out, and the area between and underneath the structures can still be used for other purposes.

As for the energy economics of the technology, one company mentioned in the article did a “cradle-to-grave” energy analysis and found that its CPV systems take only six months to produce more energy than was consumed building them in the first place.
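
The payback arithmetic itself is simple: divide the energy it took to build the system by the energy it produces per year. A minimal sketch with placeholder numbers (the article doesn't give the company's actual figures):

```python
# Energy payback sketch. Both numbers below are illustrative assumptions,
# chosen only to reproduce the six-month payback claimed in the article.
embodied_energy_kwh = 12_000   # energy to build and install one unit (assumed)
annual_output_kwh = 24_000     # energy the unit generates per year (assumed)

payback_years = embodied_energy_kwh / annual_output_kwh
print(f"Energy payback time: {payback_years:.1f} years")  # -> 0.5 years
```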

But as good as CPV appears to be, the Greenwire article points out that it suffers from the same problem all solar does right now – it needs government support in order to survive long enough to become cost-competitive with other sources like natural gas and coal (although it’s on track to reach parity with other solar technologies in the next year or two).

CPV sounds like a great technology to me because it appears to be far more environmentally friendly than solar thermal. But in the energy sector, as with commodity products generally, the best technology doesn’t always win in the end. Marketing, financing, and political influence are better predictors of success than low water consumption, a small carbon footprint, and a smaller physical footprint.

———-

Wind turbines may affect weather

If you’ve ever lain down on the ground during a windstorm, you probably noticed that wind at ground level is much slower than wind at head height or a couple of hundred feet in the air. This is the main reason wind turbines are raised up on massive towers – the wind blows stronger and more consistently high above the ground, making the turbine more efficient. Similarly, wind doesn’t blow through a forest as fast as it blows through open clearings. And faster or slower wind speeds have an effect on the weather downwind.
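
That height dependence is commonly approximated with a power law, u(z) = u_ref * (z/z_ref)^alpha, where the shear exponent alpha depends on terrain roughness. A quick sketch (the 0.14 exponent is a standard open-terrain assumption, not a figure from the article):

```python
def wind_speed(z_m, u_ref=5.0, z_ref=10.0, alpha=0.14):
    """Estimated wind speed (m/s) at height z_m from a 10 m reference speed,
    using the engineering power-law profile u(z) = u_ref * (z/z_ref)**alpha."""
    return u_ref * (z_m / z_ref) ** alpha

for height_m in (2, 10, 50, 100):
    print(f"{height_m:>4} m: {wind_speed(height_m):.1f} m/s")
# Power in the wind scales with the cube of speed, so the modest speed gain
# at hub height translates into a much larger gain in energy captured.
```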

But what happens when you cover large swaths of land with extra-tall steel trees with spinning branches (aka wind turbines)? The Bright Green Blog at the Christian Science Monitor has an article devoted to answering this very question.

According to the article, wind farms on the scale of North American storm systems have an appreciable effect, in this case defined as “larger than typical weather-forecast uncertainties,” with the effects felt not just in North America but also across the North Atlantic and on into Europe. Individual storm systems are often tens of miles across and can span hundreds of miles, but if you covered the Midwest with turbines, that would certainly be large enough to qualify.

What does this mean? Well, the scientists interviewed for the article said that there would be impacts on wind speed, cloudiness, and temperature, but that those impacts were small compared to the benefits of the carbon dioxide (CO2) emissions the turbines would avoid. Beyond that, though, the scientists weren’t comfortable speculating.

———-

Geoengineering doesn’t help acidification

According to a new study in the journal Geophysical Research Letters, reported by Stanford University News, some forms of geoengineering may cool the planet but do nothing to reverse the effects of ocean acidification.

In the immortal words of Obviousman: No Duh!

Ocean acidification is a result of the burning of fossil fuels. In essence, CO2 is emitted into the air and then absorbed by the ocean, where it forms carbonic acid and lowers the ocean’s pH. Geoengineering schemes like placing a sunshield in space, injecting large amounts of sulfur dioxide into the stratosphere, or seeding more clouds with a fleet of automated seawater-spraying ships all work on the same basic principle – reduce the amount of solar radiation reaching the Earth’s surface. None of them, however, pulls the extra CO2 out of the atmosphere, which is what would be required to stop further ocean acidification.
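
To see why shading the planet can't help, here's a toy version of the chemistry in Python. It deliberately ignores seawater's carbonate buffering (so the absolute pH values are for pure water, not the ocean), but it shows that pH depends on dissolved CO2 and on nothing the sunshade schemes touch; the constants are rounded textbook values:

```python
import math

K_HENRY = 3.3e-2   # CO2 solubility in water, mol/(L*atm), ~25 C
K_A1 = 4.5e-7      # first dissociation constant of carbonic acid

def toy_ph(pco2_ppm):
    """pH of pure water equilibrated with CO2 at the given partial pressure."""
    co2_aq = K_HENRY * pco2_ppm * 1e-6     # dissolved CO2, mol/L
    h_plus = math.sqrt(K_A1 * co2_aq)      # [H+] from CO2(aq) + H2O <-> H+ + HCO3-
    return -math.log10(h_plus)

for ppm in (280, 387):  # pre-industrial vs. roughly present-day CO2
    print(f"{ppm} ppm CO2 -> pH {toy_ph(ppm):.2f}")
# More CO2 means more carbonic acid and a lower pH; blocking sunlight
# changes none of the terms in this calculation.
```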

Perhaps I’m being a little too harsh on the study authors. They did run the geoengineering methods through climate models in order to better understand how ocean acidification would be affected, and that’s valuable information to have. But the overall conclusion – that reducing insolation via geoengineering does nothing to stop ocean acidification – well, duh.


———-

New study on the PETM raises questions, but no answers

Some 55 million years ago, the Palaeocene-Eocene Thermal Maximum (PETM) produced between five and nine degrees Celsius of warming globally, and that warming lasted for tens of thousands of years. A new study published in Nature Geoscience investigated the PETM using a single climate model and claims that CO2 alone was insufficient to have caused it.

However, once you read the actual paper, it’s not quite that clear-cut. First off, the PETM happened during a geological era when the Pacific was much larger than it is today, the Atlantic was much smaller, and the Earth was already much warmer than it is now. The authors of the study acknowledge all these points:

“Undoubtedly, the climatic boundary conditions before the PETM were different from today’s – including different continental configuration, absence of continental ice and a different base climate, which limits the PETM’s suitability as the perfect future analogue.” (emphasis mine)

Second, the study investigates a single climate model, rather than the many different climate models that are available. Even so, the study does raise a couple of important questions that really should be answered.

The first question is whether, as the authors claim, this study represents “a fundamental gap in our understanding” of climate that “needs to be filled to confidently predict future climate change.” It certainly suggests that we don’t understand enough, but is the problem our understanding of the PETM, our understanding of recent climate change, or both? At this point, there’s not enough information to know the answer to that question. After all, some scientists have suggested that climate models are insufficient to predict the long-term changes to the Earth’s climate resulting from anthropogenic CO2, and that over the next thousand years, CO2 will actually drive far more heating than it does over the next century.

The second question is whether this study supports the contention that climate models are underestimating the effects of anthropogenic climate disruption. The authors found that their climate model only accounted for approximately 3.5 degrees of the five to nine degrees of warming that actually occurred during the PETM. If this is accurate, then this study could mean that the models to date have underestimated the effect of CO2 emissions by 43% to 157%, and that climate disruption during the next century or two could be much, much worse than it is already expected to be.
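
The 43% and 157% figures are just the gap between the modeled and estimated warming, expressed as a fraction of the modeled value:

```python
modeled_c = 3.5                 # warming the climate model reproduced, degrees C
for actual_c in (5.0, 9.0):     # low and high ends of the PETM estimate
    shortfall = (actual_c - modeled_c) / modeled_c
    print(f"{actual_c} C actual vs {modeled_c} C modeled: "
          f"underestimate of {shortfall:.0%}")
# -> 43% at the low end, 157% at the high end
```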

Which question you focus on probably depends on whether you’re a climate disruption “skeptic” or denier, or whether you accept the science of anthropogenic climate disruption.

———-

New climate idea might break US-China emissions stalemate

The U.S. won’t cut emissions until China and India are on board, but China and India won’t cut emissions unless the U.S. and Europe cut even more. This Catch-22 of blame justifying a refusal to act has dominated post-Kyoto Protocol climate politics for years now, and recent news suggests that it’s not going to get better any time soon. Into this stalemate step some of the same Princeton researchers who developed the climate wedge visualization aid, with a possible new approach that is agnostic about the source of the CO2.

The idea is to require the roughly one billion people who are the world’s highest CO2 emitters to cut their emissions, no matter where on the globe those emitters live. The U.S. would still have a huge number of people who needed to cut their emissions somehow (people like me, for example), but so would a large number of Chinese, most of the EU, Russia, and even a few countries in Africa and the Middle East. The scheme would automatically exempt the poorest countries, at least to start with, and even permit them to increase their emissions. It would also rope in developing nations as they approached the per-capita emissions cap, so a country like India, which is presently mostly under the cap, would automatically find itself having to start paying as its economy improves over the next several decades.
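
To make the mechanics concrete, here's a toy sketch of how such a cap could be set. All of the population and per-capita figures below are rough illustrative placeholders, not the Princeton group's data, and the real proposal works with emissions distributions within countries rather than national averages:

```python
populations_m = {        # population in millions (rough, illustrative)
    "United States": 307, "EU": 500, "China": 1330,
    "India": 1160, "Russia": 142, "Rest of world": 3360,
}
per_capita_t = {         # tonnes CO2 per person per year (illustrative)
    "United States": 18.0, "EU": 8.5, "China": 5.0,
    "India": 1.4, "Russia": 11.0, "Rest of world": 2.5,
}

def millions_above(cap_t):
    """Millions of people whose (national-average) emissions exceed the cap."""
    return sum(pop for region, pop in populations_m.items()
               if per_capita_t[region] > cap_t)

# Scan candidate caps; the policy would pick one that puts roughly a
# billion of the world's highest emitters above the line.
for cap in (4.0, 6.0, 8.0, 10.0):
    print(f"cap {cap:>4} t/person: {millions_above(cap):>5} million above")
# With these toy numbers, a cap in the 6-8 t/person range captures
# roughly 950 million people, close to the "top billion" target.
```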

It’s an interesting idea, and given that it might enable real action on climate disruption and CO2 emissions, it’s certainly worth considering. I look forward to hearing more about it in the coming months, especially if it starts to get traction among climate policy wonks.

Image credits:
AAPG.org
Goodcleantech.com
Stanford.edu
PNAS
