The U.S. Department of Agriculture's National Institute of Food and Agriculture (NIFA) is investing $20 million in each of three major studies examining the effects of climate change on agriculture and forest production.
1. Dr. Lois Wright Morton of Iowa State University will lead a research team estimating the carbon, nitrogen and water footprints of corn production in the Midwest. The team will evaluate the effects of different crop management practices under a range of climate models. The Iowa State project, which includes researchers from 11 institutions in nine states, will integrate education and outreach components across all aspects of the project, specifically focusing on a place-based education and outreach program called “I-FARM.” This interactive tool will help the team analyze the economic, agronomic and social acceptability of using various crop management practices to adapt to and mitigate the effects of climate change.
2. Dr. Tim Martin, of the University of Florida, will lead a team looking at climate change mitigation and adaptation as it relates to southern pines, particularly loblolly pine, which comprises 80 percent of the planted forestland in the Southeast. The team of 12 institutions will establish a regional network to monitor the effects of climate and management on forest carbon sequestration. Research in the project will provide information that can be used to guide planting of pine in future climates, and to develop management systems that enable forests to sequester more carbon and to remain robust in the face of changing climate.
3. Dr. Sanford Eigenbrode, of the University of Idaho, will lead a team monitoring changes in soil carbon and nitrogen levels and greenhouse gas emissions related to the mitigation of and adaptation to climate change in the region’s agriculture, which produces 13 percent of the nation’s wheat supply and 80 percent of its specialty soft white wheat for export. The research team will look at the effects of current and potential alternative cropping systems on greenhouse gas emissions, carbon, nitrogen and water levels, and how those, in turn, affect the local and regional farm economy.
“Climate change has already had an impact on agriculture production," said NIFA Director Roger Beachy. “These projects ensure we have the best available tools to accurately measure the effects of climate change on agriculture, develop effective methods to sustain productivity in a changing environment and pass these resources on to the farmers and industry professionals who can put the research into practice.”
For further details, see the full press release here.
On the Environment
There are many valuable lessons to be drawn from the Regional Greenhouse Gas Initiative (RGGI), the nation's only operational, and mandatory, cap-and-trade program. One worth dwelling on is the effectiveness of RGGI's CO2 emissions cap. Recent analysis suggests this cap is much too forgiving -- not just now, but, more importantly, also over the next two decades.
The whole point of the RGGI emissions cap is to create a market for CO2 emissions from power plants that will ultimately drive down those emissions over time in the most economically efficient way possible. A relatively harder cap - one set below actual CO2 emissions, for example - should make RGGI's tradeable CO2 pollution allowances more scarce and thus more valuable to polluters, resulting in higher prices per allowance than a cap set above actual emissions would. The key idea here is that RGGI's cap on CO2 emissions from its regulated entities - electric utilities basically - creates a new market that has the potential to push those utilities towards low- or no-carbon generation. Where policy makers set the cap can therefore matter a great deal; a relatively tough one pushes harder than a relatively lenient one. This chart, produced on behalf of RGGI, strongly suggests the RGGI cap is not hard enough now, nor will it be hard enough in the future:
The important lines to look at for our purposes are the dashed one - that's the RGGI cap as set by agreement of the RGGI members - and the solid black line - that's both historical and projected total CO2 emissions from RGGI's regulated entities. You can see that presently, the cap is simply way too high (and to be fair, some of that is on purpose). The factors behind the recent massive drop in actual CO2 emissions are several (more on that later). The recession undoubtedly plays a huge part. Nevertheless, the cap just does not appear to be exerting real pressure on utilities right now. Maybe that's not a problem. There's an argument that a soft cap is just fine early on, as we refine and tweak RGGI. That argument might be even stronger in the current economic climate. No need to clamp down on utilities in the midst of the recession.
So perhaps the short-term performance issues of the cap can be set aside for the moment. That is not at all true for the long-term issues. Here's the major problem, and one policy makers should make an urgent focus of their thinking: according to these projections, the cap doesn't appear to really bite until perhaps 2030 or later, and that's just too late in the scheme of things. Climate science tells us we need meaningful CO2 reductions much, much sooner than that to avoid catastrophic harms. So what's the point of an emissions cap if it doesn't drive change when we need it? It's time to give serious thought to how best to tighten the RGGI cap so that it better corresponds with the scientific reality we find ourselves in.
As oil prices increase and energy security becomes a concern in the US, more is being done to explore cleaner burning fuels such as natural gas. Natural gas has seen big increases in the number of wells and total production as shale gas extraction, in particular, intensifies. The EPA projects that 20% of US gas supply will come from shale gas by 2020.
An EPA report in 2004 found that "there was little to no risk of fracturing fluid contaminating underground sources of drinking water during hydraulic fracturing of coalbed methane production wells." But public concern over hydraulic fracturing, or "fracking," the process by which shale gas is extracted, has escalated with the growing number of wells. Each well requires the pumping of tremendous amounts of fracking fluid into the earth and, according to the EPA's 2004 report, "[t]here is very little documented research on the environmental impacts that result from the injection and migration of these fluids into subsurface formations, soils, and USDWs." Until last year (when the EPA called for the voluntary reporting of chemicals used in fracking fluids), many of the chemicals used in fracking were unknown. Chemicals now known to sometimes be involved in the process include: diesel fuel (which contains benzene and other toxic chemicals), polycyclic aromatic hydrocarbons, methanol, formaldehyde, ethylene glycol, hydrochloric acid, and sodium hydroxide. Given this situation, the EPA has announced another study to examine the effects of hydraulic fracturing on drinking water and groundwater. The EPA aims to issue preliminary findings in 2012 and a full report in 2014. The draft study plan is available here.
The Chinese Ministry of Environmental Protection has been drafting new proposals (see rough Google translation in English here) to amend former daily reporting of the Air Pollution Index (API), which has been used the last 20 years to communicate air quality and health hazards posed by air pollution on any given day. In 2000 the MEP (then the State Environmental Protection Agency, or SEPA) began reporting a daily API for 42 cities; now, data for 113 cities are available from the China National Environmental Monitoring Center.
Although these new specifications are still in draft form, it’s interesting to take a look to see what the MEP is considering, particularly in light of the fact that China’s annual National People’s Congress parliamentary meetings just concluded and approved the 12th Five-Year Plan (full text in Chinese here). As I’ve written with my colleague Deborah Seligsohn of the World Resources Institute, the new Plan includes an ambitious range of energy and environmental targets, including those for air pollutants like SO2 and for the first time nitrogen oxides (NOx). While the Plan included a blueprint for major reduction goals for these criteria pollutants, the specifics as to how targets will be allocated and policies implemented still remain to be developed over the coming months by the individual Ministries, provinces, and cities.
These draft AQI guidelines provide some insight as to how monitoring of criteria air pollutants might change as a result of the Plan. I’ve taken a look at the second draft, which the MEP posted on their website during the last week of February, prior to the release of the 12th FYP. Most notably, the new specifications appear to reflect a greater attempt by the Chinese MEP to make the former API more consistent with the United States’ Air Quality Index. This effort is reflected through:
- Renaming the API the “Air Quality Index,” consistent with the U.S. nomenclature.
- Adopting a color classification system identical to the U.S. AQI color scheme.
- Describing the health effects of AQI scores in language similar to that used by the U.S. AQI.
- Including new pollutants, carbon monoxide (CO) and ozone (O3), which were previously absent from the API.
- Changing the calculation methodology to mirror the U.S. AQI.
I’ll spend the rest of this post highlighting some of these proposed changes.
Even though the API was originally based on the United States’ AQI, there are differences in the scale and the corresponding health-hazard categories, as well as in the color classification schemes. The inconsistent color classifications within China stem from the fact that, unlike in the United States, where the AQI colors are standardized, local environmental protection bureaus (EPBs) in China have been allowed to set their own color schemes. As Figure 1 depicts, the MEP is proposing a color-coding scheme that is entirely consistent with the U.S. AQI, unlike the example Andrews (2009) shows of conflicting colors between Beijing and Guangzhou that could confuse travelers between the two cities. Making the color classifications the same as the United States’ will also provide more transparency and clarity for those familiar with the U.S. AQI system.
Further, while the descriptions of the AQI classes (i.e. ‘Excellent,’ ‘Good,’ etc.) haven’t changed from the previous API, the descriptions of the Health Effects are similar to those provided by the U.S. EPA.
How is the new AQI calculated?
Remember that the API was determined using only three pollutants: SO2, NO2, and PM10. The concentration of each pollutant is measured at various monitoring stations throughout a city over a 24-hour period (noon to noon). The average daily concentration of each pollutant is then converted to a normalized index, meaning each pollutant receives its own API score. The daily API then reflects only the pollutant with the highest score (see Vance Wagner’s concise explanation of how the API is calculated).
As shown in Figure 2, the AQI now includes three additional measures: carbon monoxide (CO), ozone (O3) 1-hour average, and ozone (O3) 8-hour average. The concentrations of each of these six pollutants are normalized according to the table in Figure 2. The (1), (2), and (3) annotations indicate that the MEP is still debating what the concentration thresholds should be. Several options under consideration are found in the “Instruction Manual” document of the draft proposals (in Chinese only).
What is notably missing, however, is a measure of PM 2.5 (air particulates with a diameter of 2.5 microns or less, which can penetrate deep into human lungs and are linked to serious health problems such as asthma, lung cancer, and cardiovascular disease). Given that many major countries report PM 2.5 concentration data, some view its absence from the new AQI as a major disappointment.
Ma Jun, Director of the Institute of Public and Environmental Affairs (IPE), the Beijing-based NGO that released the Air Quality Transparency Index I wrote about earlier this year, said in a Global Times article that leaving it out was a mistake.
He goes on to say:
“Technology for measuring PM2.5 is not a problem for China any more, as cities in developing countries like New Delhi and Mexico City have already made the index public,” said Ma. He said a reluctance to include the crucial index has to do with concerns about local economies.
“Government agencies feel the index may hurt the image of many cities that want to attract investment or that they may not be able to improve PM2.5 pollution in a short time,” Ma said.
However, despite Ma Jun's assessment that technology is no longer the obstacle, I wonder whether the decision to leave PM 2.5 out of the new AQI in fact reflects uneven local availability of PM2.5 monitoring technology. A sentence on page 4 of the draft proposal (Section 6.1, which translates roughly as: "local environmental protection authorities at all levels may, based on actual local conditions and the needs of environmental protection work, and with reference to this standard's requirements, add air pollutant evaluation items, such as fine particulates (PM2.5)") says that local EPBs can expand the range of pollutants they monitor, specifically mentioning PM2.5.
On the other hand, one notable improvement in the AQI’s calculation is the change in methodology proposed. While the U.S.’s AQI is based on the highest reading in a city and thus represents the “worst” air quality case a person could encounter, the Chinese API represents an average. The draft proposals improve upon the API’s methodology, adopting a similar calculation method to that of the U.S.
First, “individual AQIs” are calculated by linear interpolation between index breakpoints:

IAQIp = (IAQIhi − IAQIlo) / (BPhi − BPlo) × (Cp − BPlo) + IAQIlo

where:

IAQIp is the individual AQI for pollutant p;

Cp is the concentration of one of the six pollutants (SO2, NO2, PM10, CO, and the O3 1-hour and 8-hour averages; if a city has more than one monitoring station, the average of the stations' concentrations is used, i.e., for urban areas with multiple monitoring points, the daily mean concentration applies);

BP(hi) is the breakpoint concentration greater than or equal to Cp;

BP(lo) is the breakpoint concentration less than or equal to Cp;

IAQI(hi) is the index value corresponding to BP(hi) (on the 0–500 IAQI scale);

IAQI(lo) is the index value corresponding to BP(lo) (on the 0–500 IAQI scale).
The max of these IAQIs (Figure 4) is then used to determine the AQI.
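To make the interpolate-then-take-the-max procedure concrete, here is a minimal Python sketch. Note that the breakpoint table below is purely illustrative (the MEP draft left several concentration thresholds undecided), so treat the numbers as placeholders, not official Chinese or U.S. values.

```python
# Sketch of the piecewise-linear AQI calculation described above.
# Breakpoint segments are (bp_lo, bp_hi, iaqi_lo, iaqi_hi), keyed by
# pollutant; concentrations in ug/m3. Values here are HYPOTHETICAL.
BREAKPOINTS = {
    "PM10": [(0, 50, 0, 50), (50, 150, 50, 100), (150, 250, 100, 150)],
    "SO2":  [(0, 50, 0, 50), (50, 150, 50, 100), (150, 475, 100, 150)],
}

def individual_aqi(pollutant, concentration):
    """Interpolate one pollutant's concentration onto the 0-500 index."""
    for bp_lo, bp_hi, iaqi_lo, iaqi_hi in BREAKPOINTS[pollutant]:
        if bp_lo <= concentration <= bp_hi:
            return ((iaqi_hi - iaqi_lo) / (bp_hi - bp_lo)
                    * (concentration - bp_lo) + iaqi_lo)
    raise ValueError("concentration outside breakpoint table")

def overall_aqi(daily_concentrations):
    """The reported AQI is the max of the individual pollutant indices."""
    return max(individual_aqi(p, c) for p, c in daily_concentrations.items())

# A PM10 daily mean of 100 ug/m3 maps to IAQI 75; SO2 at 40 maps to 40;
# the day's AQI is therefore 75, driven by PM10.
print(overall_aqi({"PM10": 100, "SO2": 40}))  # 75.0
```

The same structure applies to all six pollutants once the final breakpoint tables are settled; only the `BREAKPOINTS` data would change.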
I will spend some more time going through the longer instruction guidelines for the proposals and will update this post if I find more details. In the meantime, I welcome any comments or alternative interpretations.
Andrews, S.Q. 2009. “Seeing through the Smog: Understanding the Limits of Chinese Air Pollution Reporting.” China Environment Forum, Vol. 10. http://www.wilsoncenter.org/topics/pubs/andrews_feature_ces10.pdf
Ministry of Environmental Protection. 2011. Technical Regulation on Ambient Air Quality Index Daily Report. Second Draft. Available here: http://www.mep.gov.cn/pv_obj_cache/pv_obj_id_47B37A70B7A7F94EBAE2DC9709456678C1210400/filename/W020110301385498176520.pdf
Thanks to Chris Haagen for providing some translation assistance.
A report yesterday from Inside EPA offered a fascinating overview of the agency’s struggle to update the way it assigns dollar values to the suffering and premature death that its regulations prevent. Seriously, as far as economic esoterica goes, this stuff is riveting. What’s more, your life may depend on it.
Currently, EPA values each statistical human life saved by its rules at $7.9 million. This number is derived from so-called “wage-risk premium” studies that examine large data sets on employment and occupational risk. The idea is that, if you control for education, job sector, geographic region, and other relevant factors, then you should be able to come up with a number representing the portion of an employee’s wage that compensates for higher on-the-job health or safety risks. Depending on how a worker values health and safety compared to other goods, he – and he is an important distinction here since the value-of-life studies tend to only look at male-dominated blue collar jobs – might be willing to take a higher wage in exchange for accepting higher levels of occupational risk. In theory, then, the studies can pull out the amount at which workers themselves value risk exposure, which can then be converted into a uniform “value of a statistical life” (VSL) for policy analysis. By using the VSL number to value the health and safety benefits of regulations, EPA can avoid the messy task of government deciding on its own how much protection is worth investing in.
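The arithmetic behind the conversion from a wage-risk premium to a VSL is simple, and a toy example makes the logic transparent. The numbers below are invented for clarity; EPA's actual $7.9 million figure is derived from meta-analyses of many wage-risk studies, not from any single calculation like this.

```python
# Hypothetical illustration of how a wage-risk premium becomes a VSL.
# Suppose a job carries an extra 1-in-10,000 annual fatality risk, and
# workers accept an extra $790 per year in wages for bearing it.
workers_per_expected_death = 10_000  # i.e., annual fatality risk of 1/10,000
wage_premium = 790.0                 # extra annual pay accepted for that risk

# Collectively, 10,000 such workers accept $7.9M in exchange for one
# expected death -- that pooled sum is the "value of a statistical life."
vsl = wage_premium * workers_per_expected_death
print(vsl)  # 7900000.0
```

The fragility of the method is visible even in this sketch: the result scales one-for-one with the estimated premium, so any bargaining-power or measurement distortion in the wage data flows straight into the VSL.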
According to the Inside EPA report, staff experts are recommending a new, updated methodology, but the agency’s Environmental Economics Advisory Committee (EEAC) cautioned that the new method might be “too complicated for non-specialists to understand.” This claim is a real howler as it seems to imply that the current methodology is accessible to non-specialists. It is not. Deep and controversial value judgments are embedded within the current methodology, ones that lay persons can scarcely glean. For instance, studies show that union workers receive much higher wage-risk premiums than non-union workers – a finding that suggests bargaining power has a lot to do with the market outcomes that are supposedly capturing individuals’ true “preference” for life preservation. Should EPA use the higher union VSL, rather than the lower non-union VSL that economists tend to favor? This is not a matter of expertise. It is a value judgment that should include a full range of democratic inputs, but its import instead is buried deep within the technicalities of economic regression models.
Apparently the EEAC wants to push EPA even deeper into the weeds by asking the agency to compile a unique VSL figure for each regulatory context that the agency addresses. For instance, if a mercury emissions regulation would disproportionately benefit Native Americans (who eat far more contaminated fish than the general population), then the monetary value of reducing mercury exposure would be calculated using studies that find out how much Native Americans in particular are willing to invest in health and safety. In theory, this would bring the agency closer to the economists’ ideal world in which all values are assessed by the affected individuals themselves, rather than by collective democratic processes. In practice, however, it would involve the government intimately in the perpetuation of discrimination.
The VSL is affected not only by an individual or group’s willingness to invest in health or safety, but also by the ability to do so. This is made clear by the difference between union and non-union VSL data. It is also made clear by studies that show certain minority groups, especially African-Americans, actually receive significantly lower wage-risk premium than should be expected based on their occupational hazard exposure. We might say this represents a weaker “preference” for staying alive among those groups, so that if EPA’s cost-benefit calculations weigh benefits to them at a lower rate than non-minorities, then, well, that’s just giving the people what they “want.” Alternatively, we might say that the picture is messier than this, and that past injustices continue to impact deeply the social and economic opportunities available to individuals and groups today. Treating current market outcomes as somehow neutral and objective does not wash the government’s hands of this history.
The VSL debate is a gripping saga, one with more than a little fiction in it, but with all too real consequences. And it is anything but accessible to non-specialists. For an attempt to break it down in more detail, and for supporting citations, see Chapter 4 of my book, Regulating from Nowhere: Environmental Law and the Search for Objectivity.
Cargill, the international agriculture giant, is installing a 320-square-meter kite on one of its chartered shipping vessels in the hopes of improving fuel efficiency and reducing greenhouse gas emissions.
The kite, made by Hamburg, Germany-based SkySails, is designed to cut fuel consumption by up to 35 percent under ideal sailing conditions. It flies ahead of the ship at heights between 100 and 420 meters to generate propulsion; a computer-controlled automatic pod steers it to maximize wind benefit, and it requires only minimal handling by the crew.
"For some time, we have been searching for a project that can help drive environmental best practice within the shipping industry and see this as a meaningful first step," said G.J. van den Akker, head of Cargill's ocean transportation business. "The shipping industry currently supports 90 percent of the world's international physical trade. In a world of finite resources, environmental stewardship makes good business sense. As one of the world's largest charterers of dry bulk freight, we take this commitment extremely seriously."
Cargill transports more than 185 million tons of commodities annually.
The disaster in Japan has focused new attention on nuclear power in the United States. Here are the basic contours: At present, the U.S. has 104 nuclear plants in 31 states - producing 20% of the nation's electricity. Of the pending proposals to build 30 new units, it is likely that fewer than seven will be built before 2020. No new nuclear plants have been built in the U.S. since the partial meltdown at Three Mile Island in Pennsylvania in 1979. The Obama Administration wants to ramp up nuclear power in the U.S. as part of a plan to increase domestic energy security and meet clean energy targets. In practical terms, that means an investment of $54 billion in U.S. loan guarantees for nuclear energy - loan guarantees are often used to help investors since nuclear power plants are extremely costly to set up, have uncertainty around permit approvals, and often take many years to realize a profit. Read more here.
The House Energy and Power Subcommittee approved a bill on Thursday by Fred Upton (R-Mich.), Chairman of the Committee on Energy and Commerce, to halt the EPA’s plans to regulate greenhouse gas emissions. Upton claims that cap-and-trade legislation and other “needless EPA regulations stifle growth, kill jobs, and raise energy costs.” In December 2010, the EPA announced that it would regulate greenhouse gas emissions from power plants and oil refineries, the nation's two biggest sources of carbon dioxide (accounting for almost 40% of U.S. greenhouse gas emissions), beginning in 2011. The Energy Policy Act of 1992 called for the voluntary reporting of greenhouse gas emissions and carbon sequestration activities, but the EPA is now looking to take the next step by actively regulating these emissions. Read more here.
Follow us on Twitter @YaleEnviro
Human activity and strange weather
UN Panel to streamline carbon offsets approvals
China's electric car program
Lighting roundup: Ebay, hazardous materials, GE
Palm Oil Giant agrees to protect forests
DOE Solar Initiative
Experts say food-price volatility expanding
S. Korea to start emission trading in 2013-2015
Costs of Inaction: the economics of high-end warming
Forest loss slows as Asian nations plant
Number of investors seeking water data doubles
Why Cap and Trade is good for environmental marketing