
Additionality criteria back on the Board’s agenda (Newsletter #10)

The Board might again tackle the impossible task of improving project-by-project additionality testing during this week’s meeting. Numerous discussions in the past about the stringency of the additionality criteria in the so-called “first-of-its-kind” and “common practice” tests have not yet resulted in an agreement. This week could see a further instalment in this ongoing struggle as the Secretariat might present new draft guidelines.

While CDM Watch has already presented some ideas in previous newsletters, we would again like to take the opportunity to present a brief overview of the problem at hand.

The current approaches for demonstrating additionality are essentially composed of three elements:

• A barrier analysis to demonstrate that barriers exist which would otherwise prevent the proposed project from going ahead

• An investment analysis to demonstrate that the proposed project activity is economically less attractive than another alternative

• A common practice analysis which requires an assessment of the extent to which the proposed project type has already been deployed.

As seen in the aforementioned cases of large hydro projects, where additionality is doubtful in almost all cases, these approaches have often been criticised as intention-based and highly subjective. The International Emissions Trading Association (IETA) stated in a position paper for COP/MOP1: “Business perception is that in its current form the test for additionality (…) exposes every project to a highly subjective assessment of its CDM eligibility and allows for second-guessing by the EB”. Similarly, a report drawn up by CDM expert Lambert Schneider evaluated 93 CDM projects and found that for a significant number of projects the additionality seems unlikely or questionable. In a Delphi survey, conducted in 2007, 71% of the participants agreed with the statement that “many CDM projects would also be implemented without registration under the CDM” and 86% of the participants went so far as to state that “in many cases, carbon revenues are the icing on the cake, but are not decisive for the investment decision”.

The “first-of-its-kind” barrier

A project activity is assumed to be additional if no similar project has been implemented previously in a certain geographical area. If a project activity is “first-of-its-kind”, no additional assessment steps are undertaken to confirm additionality. Considering that project activities that are deemed to be “first-of-its-kind” pass the additionality test by default, the application of this barrier is highly problematic. Sometimes the project technology was defined so narrowly that the project was declared to be the “first of its kind” even though many similar plants had already been constructed. In many cases, no evidence for the barriers was mentioned or provided in the project design documents.

Action to be taken by the Board: In line with the recommendation in a previous note on the “first-of-its-kind” barrier by the Meth Panel, CDM Watch recommends that in the absence of a specific definition in an approved baseline and monitoring methodology, the “first-of-its-kind” barrier shall only apply if:

> The project technology has not been in commercial operation in the host country; and

> The project technology has not been proposed in another CDM project activity in the host country and published in the
CDM-PDD by a DOE for public comments.

> Other CDM project activities, including both those registered and submitted for validation, should be included in this assessment. The assessment should include all similar project activities in the host country.

The common practice analysis

The common practice analysis is an important credibility check to demonstrate that the project is not common practice in the region or country in which it is being implemented. If a project activity is “first-of-its-kind”, it is clear that implementation of the specific technology is not yet “common practice”. But problems similar to those witnessed in the first-of-its-kind analysis appear in the application of the common practice analysis: the current additionality tools do not clearly define when a project activity should be regarded as common practice and no threshold for common practice is provided. Only a few methodologies specify when a project should be considered common practice. Moreover, the baseline methodologies do not provide a clear definition of what a comparable technology is. In some cases, project participants have defined their technology so narrowly that practically no or only a few other similar projects have been implemented – even though the technology type in question (e.g. biomass cogeneration with high pressure boilers) is quite common in the country. At the same time, they define the comparison group very broadly (e.g. all power generation in the country) such that the project activity will automatically have a low penetration rate – independent of the fact that the project activity is frequently implemented in similar circumstances.

The assessment of the common practice analysis by DOEs is particularly weak. The image below illustrates that in 32% of the projects that use the common practice analysis, the validation report does not provide any information on it at all. In a further 24% of the projects the information in the PDD is essentially repeated without a clear statement that the information was checked and deemed plausible and credible by the DOE. A detailed assessment of the information provided in the PDD is only provided in 10% of the projects. This means that in more than half of the projects examined that applied the common practice analysis, the projects were registered even though independent information to support the analysis had not been presented or it was not clear whether the information presented had been checked by the DOE.

Action to be taken by the Board:

Until project-by-project additionality testing is replaced, quantitative thresholds and a clear definition on similar technologies must be introduced to improve the environmental integrity of the CDM and to prevent gaming by arbitrary definitions of technologies. CDM Watch makes the following recommendations:

> The EB should work on quantitative thresholds to define common practice for each methodology. For example, a project could be regarded as common practice if it has been implemented X times AND in Y% of the relevant cases.

> If more than five projects using the same technology are operational in the host country without receiving CDM support, the project should be regarded as common practice.

> Baseline and monitoring methodologies should clearly define the project technology and what is regarded as a similar technology.

> If a project activity began commercial operation after the PDD was submitted to the DOE for validation, the situation at the time the CDM-PDD is published by the DOE for public comments should apply for the assessment.

> If a project activity began commercial operation before the PDD was submitted to the DOE for validation, the situation at the start of the project activity's commercial operation should apply for the assessment.

> In the absence of a clear definition of the project technology in the baseline and monitoring methodology, all technologies to which the methodology is applicable should be regarded as similar technologies.
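The threshold logic in the first two recommendations above can be sketched in code. This is a purely illustrative sketch: the function name, the placeholder values for X and Y, and the input counts are all assumptions for demonstration, not an official CDM rule.

```python
def is_common_practice(similar_projects: int,
                       relevant_cases: int,
                       non_cdm_operational: int,
                       min_count: int = 10,       # "X times" (illustrative value)
                       min_share: float = 0.20):  # "Y% of relevant cases" (illustrative)
    """Return True if the project type would count as common practice
    under the two proposed quantitative rules."""
    # Rule 1: implemented at least X times AND in at least Y% of relevant cases
    share = similar_projects / relevant_cases if relevant_cases else 0.0
    if similar_projects >= min_count and share >= min_share:
        return True
    # Rule 2: more than five similar projects operating without CDM support
    if non_cdm_operational > 5:
        return True
    return False

print(is_common_practice(12, 40, 2))  # meets both the count and share thresholds
print(is_common_practice(3, 40, 6))   # more than five non-CDM projects operational
print(is_common_practice(3, 40, 4))   # neither rule met
```

The point of combining an absolute count with a percentage share is the gaming problem described earlier: a broad comparison group deflates the share, while a narrow technology definition deflates the count, so requiring both makes either manipulation alone insufficient.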


