A dual-level framework for evaluating integrated coastal management beyond labels


Ocean & Coastal Management 50 (2007) 796–807 www.elsevier.com/locate/ocecoaman

Raphaël Billé
Institute of Sustainable Development and International Relations (IDDRI), 6 rue du Général Clergerie, 75116 Paris, France
Available online 21 March 2007. doi:10.1016/j.ocecoaman.2007.01.002

Abstract

Despite many efforts over the past 10 years, Integrated Coastal Management (ICM) evaluation remains a major challenge. Building on theoretical resources provided by three bodies of work (strategic analysis of environmental management, integrated coastal management, and evaluation of public policies), this paper first underlines some of the most critical weaknesses and "black boxes" in current ICM evaluation. It explains why a single-purpose program evaluation model is unsatisfactory, and how it hinders evaluation in practice. It then suggests ways to rethink ICM evaluation, proposing a dual-level framework that combines evaluation of ICM initiatives with evaluation of coastal management systems. © 2007 Elsevier Ltd. All rights reserved.

1. Introduction: three challenges to Integrated Coastal Management (ICM)

The adoption of ICM as a key paradigm for the sustainable development of coastal zones around the world is no longer in question. Born within the scientific community in the 1970s and developed through the 1980s [1–3] up to the 1989 Charleston workshop [4], ICM entered the political sphere with the 1992 Rio Summit. Since then, considerable efforts have been made on all continents to deepen common knowledge and experience of ICM. Three main directions were taken:



- Implementation: Hundreds of scientific papers and practical guidelines (see for example [5–9]) focus on what ICM means, how to do it, who should do it, with what funding, etc. In other words, this abundant literature, often based on field experience, deals with the three key questions raised by Underdal 25 years ago: "What? Why? How?" [3].
- Reporting: Reporting and making comparisons between coastal management cases soon became a priority. Not only were comparisons important to improve the body of knowledge on ICM implementation, but reporting appeared critical to estimating trends at larger scales [10]. In addition, the adoption of ICM in dozens of conventions, agreements and treaties, some of them legally binding, made it necessary to develop clear and shared analysis frameworks that would allow regions and countries to report on the status of ICM implementation in their own jurisdictions, and relevant authorities to assess the progress being made (a good and recent example of such a commitment is the European Recommendation concerning the implementation of Integrated Coastal Zone Management in Europe [11]).
- Evaluation: Consistent and effective evaluation frameworks are crucial to improve both ICM implementation and reporting.

Despite several valuable efforts over the past 10 years (see for example [12], and special issue 46 of Ocean & Coastal Management on the role of indicators in ICM), ICM evaluation remains largely referred to as an important issue rather than being directly addressed. Hence, this paper focuses on this third direction. The views developed here draw lessons from our various experiences in fundamental research as well as in project design, monitoring and evaluation, in developed and developing countries alike. They are part of a wider research endeavor undertaken since 1998 at ENGREF (the National Institute of Forestry, Agricultural and Environmental Management, Paris, France) by a team working on the strategic analysis of environmental management. One of its most fruitful focuses is the conceptualization of the difference between management actions on systems and the way systems are actually managed.

In this paper, we seek to establish a fertile dialog between theoretical resources taken from this collective work, from the body of literature on integrated coastal management, and from research on the evaluation of public policies. By underlining some of the most critical weaknesses of current ICM evaluation theory, and showing how they hamper evaluations in practice, we suggest ways to rethink ICM evaluation. The challenge of evaluating the achievement of outcomes rather than the delivery of outputs will be of particular concern to us. In this respect, we will explain why a single-purpose project evaluation model is unsatisfactory, just as it is in other public policy areas [13]. We will then propose a new evaluation framework that combines evaluation of ICM initiatives with evaluation of coastal management systems, and explain how this favors sound and meaningful evaluations, thus leading to better design of coastal management initiatives.

2. ICM evaluation: state of the art and critical issues

2.1. Drawing from theory and experience in program evaluation

Unsurprisingly, ICM evaluation draws heavily on theory and experience in program evaluation in general. Although we shall not elaborate too much on this very broad topic, we want to underline three specific issues that tend to dominate the debates on ICM evaluation and to raise overwhelming difficulties for evaluators.


2.1.1. Indicators

Much progress has been made recently on this issue, which will not be the focus of this article. Again, the 2003 Ocean & Coastal Management special issue, or the EUCC (The Coastal Union) ICZM progress indicator set, are significant steps forward. But having good indicators may not be what is most critical to evaluation: one must first have a clear understanding of what to measure in order to make wise use of a complex set of indicators. We believe this is not yet necessarily the case for ICM.

2.1.2. Attribution

Attributing effects to factors is one of the pillars of program and policy evaluation. As stated by the French Scientific Council on Evaluation, "what is expected from an evaluation is not only to check if objectives have been reached, but to try and identify all the effects of a given policy and to distinguish between policy-based effects and the influence of exogenous factors" [14]. In most cases, an ICM initiative is only one among a wide range of activities taking place in a given area. Sectoral economic activities are of course abundant. What is more, the ICM initiative to be evaluated is usually not the only effort towards a more sustainable development, and frequently not even the only ICM endeavor in the area. This makes attribution particularly challenging. An ICM initiative also often tends to be quite complex and heterogeneous, having to deal with numerous issues and stakeholders. Again, this hampers a straightforward use of program evaluation theories that are based on more traditional, single-sector programs: "the preferred field of evaluation appears to be sectoral policies" [15].

2.1.3. Output delivery versus outcome achievement

Facing such difficulties with the attribution issue, many evaluations of ICM initiatives limit their scope to the delivery of planned outputs, for which attribution is not a problem. As Sorensen already noted in 1993, "over the last 28 years many products (or outputs) have been generated by the many ICM efforts, such as management plans, laws, regulations, inventories, and geographic information systems. However, the extent to which the mature ICZM efforts have achieved their objectives (such as the conservation of wetlands and mangroves) is generally not known. It is relatively simple to assess a program or project in respect to its output, such as the number of coastal programs prepared by local governments. It is far more difficult to measure how the outputs influence the outcomes, such as reductions in the levels of coastal water pollution or achieving a long-term sustained yield from a coastal fishery" [4]. Although precisely diagnosed, this problem has not been solved yet. In most cases, delivering outputs (the means) first becomes a day-to-day objective for program managers (which is understandable in practice), and is then surreptitiously turned into a program objective (the end).

These last two issues, attribution and the output/outcome dilemma, are not just technical ones. When taken seriously, they usually lead to pessimism, and sometimes over-pessimism, about ICM initiatives. Conducting a rapid ICM meta-evaluation at the Paris 2001 Conference on "Oceans and Coasts at Rio+10", Olsen affirmed that "promises of ICM were not being achieved." He was referring to the "orders of results" he contributed to develop [16,17], classifying ICM results in four helpful categories (see Fig. 1).

Fig. 1. Meta-evaluation and classification of ICM results at the international level. Note: 1st order: formalized institutional structures and constituencies; 2nd order: correction, mitigation of selected behaviors and/or development actions implemented; 3rd order: improvements in some social and environmental indicators; 4th order: sustainable environmental quality and quality of life.

His point was that the ICM community could report (1) a massive delivery of project outputs, (2) some changes in behaviors and action design, (3) few actual improvements in social and environmental indicators around the globe, and (4) a negligible number of coastal areas managed in a truly sustainable way. Evaluating three of the most emblematic ICM programs in France and Madagascar [18], we reached very similar conclusions.

So, is ICM as a concept not achieving its promises? Or is it its implementation that fails? And what if the evaluation standards that lead to such a diagnosis were inappropriate? Because ICM initiatives are supposed to bring change in the way coastal areas are managed (to make it more integrated), does it really mean they should be evaluated with respect to their impact, i.e. to their achievements in changing the way coastal areas are actually managed?

2.2. Evaluating ICM initiatives with respect to their outcomes: fair and legitimate?

Focusing an evaluation on outcomes rather than outputs entails limits and drawbacks that have been described extensively by social scientists. Bako-Arifari and Le Meur [19] affirm that "institutional odd jobs, fuzzy management and confrontation of diverging logics are found in any phase of any project. This empirical acknowledgement can only lead to questioning evaluation principles that are limited to the achievement of initially set objectives. Selections and rerouting, rereading and interpretations should rather be understood as signs of project appropriation, as demonstrations of a social normality more than of a pathology." Muller and Surel [20], among others, argue that "despite upsetting common understanding, one has to admit the idea that public policies don't aim at solving problems. (…) Implementing a public policy does not mean solving a problem, but building a new representation of problems, one that sets up socio-economic conditions under which the society can solve them."

Considering all this, how can an evaluation based on outcomes and impacts be legitimate? Is it fair to all the people who have been involved in an initiative, to the energy they spent, and to the invisible progress that has probably been made? These questions are all the more essential as a coastal management initiative cohabits and has to deal with forces that tend to further degrade coastal ecosystems. In many cases these degrading forces, driven by powerful economic interests, are stronger in the social and political game than any ICM effort will ever be. Therefore, why should an initiative be evaluated with respect to achievements that clearly depend on parameters that are not controlled by the initiative under consideration? Just because it has intersecting ambitions, can it be held responsible and accountable for other stakeholders' actions?

Answers to such questions are necessarily complex. One dimension that should not be neglected is that ICM initiatives, just like others, are funded on the basis of their explicit objectives, such as reversing the trend towards ecosystem degradation in a specific area.


No donor will ever fund an ICM program with the simple hope that stakeholders will sit around the table for a couple of workshops, or even come up with a strategy paper, knowing that this will have no impact at all in the field. Evaluation is therefore a matter of accountability and transparency, a dimension that cannot be ignored. However, our main answer to the questions above is of a different nature.

2.3. Evaluations of ICM initiatives do not report on ICM implementation

Beyond the challenges faced when evaluating ICM initiatives, an important assertion has to be made on a different level: evaluating ICM initiatives does not match our needs when it comes to reporting on ICM implementation. If comparisons between areas and countries are to be made, if reporting is crucial to the monitoring of agreements, another approach to ICM evaluation is necessary, one which shifts the emphasis from coastal management initiatives (the effort) to coastal management systems (the object of the effort).

Evaluating ICM initiatives is undoubtedly useful to their managers (e.g. to improve their own action) as well as to citizens who want to know how their taxes are being spent. It can also be of great interest to the stakeholders involved in consultation and participation processes. Nevertheless, evaluating ICM initiatives (including attributing results to initiatives' inputs and outputs) does not report on whether a coastal area is or is not "integratedly" managed. In other words, it does not tell to what extent the actual management of a coastal area is integrated, which "dimensions of integration" [21] are taken good care of, what the trends are, etc., although these are crucial questions for sustainable development.

In a way, Sorensen was opening this discussion when mentioning the "availability of alternative national programs to achieve ICZM objectives" as a reason why some coastal countries were not considering ICZM [4]. But taking this reason seriously challenges the rest of his paper on the "international proliferation of ICZM efforts", since he acknowledges that such efforts are not the only way to make progress in ICZM. It also questions, for example, the recent "Evaluation of Integrated Coastal Zone Management (ICZM) in Europe" [22]: although the European ICZM Expert Group has set up a comprehensive set of indicators, both process- and results-based, the evaluation is strictly limited to process. Concretely, it does not say a word on whether coastal management is or is not improving in Europe following the 2002 European Recommendation [11]. This brings us back to our initial remark on the difference between having good indicators and defining what we want to evaluate.

Again, an ICM initiative does not manage the coast by itself [23]; it is only one among many interventions. Some of these also contribute to a more sustainable development, but some obviously pursue objectives that may conflict with sustainable development. Furthermore, among these many stakeholders and interventions, those directly involved in ICM initiatives, in their design, management and monitoring, are hardly ever positioned "higher" than other players in the hierarchy of the social-political game being played locally. Despite often-favorable organization charts, in reality they are usually "lower", meaning less powerful than many sectoral stakeholders. Therefore, no rigorous comparison between coastal cases will be made without expanding the scope of evaluations beyond ICM initiatives.
They need to be reframed within broader processes that determine the actual state of coastal areas.


2.4. Two options

To summarize, evaluating ICM initiatives with respect to their outcomes leads to two types of problems. From a practical perspective, evaluations tend to be negative in most cases, despite the money and energy spent and the expertise mobilized. From a methodological perspective, some contradictions cannot be ignored, especially those related to the limited role that an ICM initiative actually plays in the day-to-day management of a coastal area. At this point, two options are available:

- Evaluating program outcomes is declared counterproductive and needlessly rigid. It does not take into account the complexity of the socio-political processes around an ICM intervention. These processes and the intervention's outputs have their own value, independently of their impact on the state of the environment.
- Another approach to ICM evaluation is developed which, without neglecting the above-mentioned complexity of socio-economic processes along program implementation, allows a clarified discussion of the achievement of concrete objectives.

We believe that the first option is the one de facto dominating the community of ICM practitioners and researchers, even though it is never made explicit. We consider it unacceptable, since it typically leads to absurdities such as this sentence, found in an important report on one of the biggest environmental programs in the world, the Malagasy National Environmental Action Program [24,25]: "the program did achieve its objectives, however without influencing trends towards ecosystems degradation" [26]. As written by Mermet [27], "pluralism, careful listening to all stakeholders, are fundamental components of any evaluation (…). But if they lead to neglecting or overshadowing analysis and measurement of policy results, then they replace one evil by another. From a policy with no operator nor partner, we shift to a policy with no goal nor marker."

Therefore, the objective of this paper is to contribute to the second option. It implies getting out of the "ICM program box" to expand the scope of evaluation to other interventions. This means working on questions such as: Beyond ICM initiatives, are there other sources of change towards a more integrated management of coastal areas? What are they? What are the role, importance and performance of ICM initiatives in the overall management system [23]? How do they synergize or compete? What consequences does and should all this have on the design of ICM initiatives? This has been the focus of our research over the last few years, and we suggest a few directions in the remainder of this paper.

3. ICM implementation beyond ICM initiatives

3.1. "Management" or "management"? A fundamental ambiguity

It is necessary at this stage to return to the distinction introduced in Section 1. "Management", or in our case "coastal management", is an ambiguous term with two distinct meanings: the action of managing the coast (with a management intention), and the way the coast is managed, de facto, as a result of the various interventions and activities taking place (some with a management intention, most without) and constituting the management system.

Let us revisit one of the most quoted articles about ICM evaluation, Burbridge's 1997 paper [12]. The author wants to build "a generic framework for measuring success in integrated coastal management," but the summary introduces "a framework for measuring the success of integrated coastal management initiatives" and the paper "suggests that we need to develop a broadly based framework for evaluating the effectiveness of ICM initiatives" (p. 176). He then returns to his aim of "assessing progress towards integrated coastal management," before again questioning "the progress that is made in the development of an ICM initiative" (p. 181). Burbridge's paper is valuable in several respects, but it perfectly exemplifies an extremely common bias in the ICM literature: forgetting everything about the above-mentioned second dimension of management. The essence of ICM is to increase the degree of integration in coastal areas' de facto management, not to develop or even improve the efficiency of "ICM-labelled" initiatives. The difference should not be underestimated.

3.2. Thinking beyond labels

The label perspective is excessively pervasive within the community of ICM practitioners and researchers. Here again, two influential examples may be enlightening. During the 2001 "Global conference on oceans and coasts at Rio+10" in Paris, Thia-Eng Chua, a leading reference in our community, suggested that we "call on governments to (…) take decisive actions to ensure effective management measures for the coastal areas of each nation by committing to the following targets:

- 20% of national coastlines under management by 2012;
- 60% of national coastlines under management by 2022; and
- 100% of national coastlines under management by 2032."

This suggestion was adopted and reproduced in the conference ICM working group report [28]. But what does it actually mean for a coastline to be "under management"? Is a vast majority of coastal areas in the world not already subject to protection interventions and sectoral activities? And is any coastline not at least under de facto management, by definition? The only explanation is that behind such a vision of coastal management lies the "label bias". What was implicit in 2001 became very explicit in 2003, with Chua wishing to "be able to certify ICM sites," as stated during the "East Asian Seas Congress" in Kuala Lumpur [29].

Our goal is not to underline a misunderstanding, even less a "mistake", in this or that paper, but to highlight the excess and consequences of a common tropism. To reaffirm this position, another example is useful. A study carried out in 2000 by the CSMP (University of Delaware), and presented on various occasions (see for example [30,31]), followed up on Sorensen's 1993 survey of ICM efforts [4]. Based on an inventory of ICM efforts, the authors generated a map in two colors, one for "coastal countries with ICM", one for "coastal countries without ICM". Their statistics showed, for example, that in 2000, 91% of European countries were "doing ICM", whereas only 31% were in 1993 according to Sorensen. To a certain extent, the authors are aware of the limits of such an inventory, admitting that these are process indicators and that, of course, results indicators would be necessary. On the one hand, this relegates the crucial question of results indicators to a "too hard box" [32] that everyone agrees should be opened. On the other hand, and more critically, such indicators do not actually inform on processes, but more narrowly on the existence of something called an "ICM program" in a given country. They are just label indicators. What they assess may well be no more than the diffusion of a given certified instrument.

4. From initiatives to management systems

4.1. Shifting the focus from coastal management initiatives to coastal management systems

As policy analysts say [20], policies can exist without dedicated structures (institutions, organizations or arrangements). And when such a structure exists, it is hardly ever the only one participating in the policy. The ICM policy in a certain area is not defined by the ICM-labelled initiative that may exist there. Within a broader framework, it becomes possible to distinguish between the formal existence of an ad hoc ICM initiative on the one hand, and analysis and evaluation of the "emerging ICM policy" on the other. This emerging policy does not exist in the strict sense, yet it may be analyzed: it is the way the coast is managed, on which we want to build "system-driven evaluation", the only meaningful evaluation for measuring, reporting and making comparisons with respect to the integration process. Such an evaluation looks first at the coast, second at the way it is managed, and third at the ICM initiative, which is then properly contextualized.

4.2. A foundational experience: the French wetlands policy evaluation

The perspective we propose draws on lessons learned from the French wetlands policy evaluation, initiated in the early 1990s by the Interministerial Committee on Public Policy Evaluation [33]. It was innovative in the field of evaluation because it dealt with a policy that did not exist as such. As explained by Bazin and Mermet, "the wetlands policy does not look like a real, purposely designed policy, defined by specific texts, with identified resources, aiming at explicit objectives in respect to which results could be unambiguously assessed" [34]. This shift in evaluation paradigm faced many obstacles and oppositions [35], but eventually demonstrated its efficiency both for evaluation purposes and for action. (Since then, more evaluations have focused on "emerging" and non-existing policies, such as the evaluation of French foreign policy [36].)

The question of attribution then needs to be re-examined. Evaluating an emerging, non-formal policy cannot be based on the identification of its own effects, but on the identification of the effects of each intervention constituting the policy. The tricky question of the own effects of ICM initiatives is therefore reframed in a broader context that helps address it: what matters is the responsibility of each intervention in the state and trends observed for a specific coastal area, including interventions that lead to further degradation of coastal areas.

Lastly, the objectives of the evaluated policy need to be redefined. Since we are dealing with an informal policy, it has no explicit objectives. Its set of objectives has to be built from collective commitments made at various levels: relevant international conventions and agreements (such as the Ramsar Convention or the Convention on Biological Diversity), national political commitments, local formal agreements between stakeholders, etc. A consistent and sufficiently precise set of objectives can be formulated in this way, against which to evaluate the informal policy [37].
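To make the shift from initiative-driven to system-driven evaluation concrete, a minimal sketch follows. It is purely illustrative and not part of the methodology discussed here, which is qualitative: it assumes, for the sake of the example, that the effect of each intervention on a given integration dimension can be scored, and all names and numbers are hypothetical. The structure is the point: the de facto policy is the set of all interventions, ICM-labelled or not, and objectives come from collective commitments rather than from any single program.

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    """Any activity shaping the coast, with or without a management intention."""
    name: str
    icm_labelled: bool           # whether it carries the "ICM" label
    effects: dict[str, float]    # signed, hypothetical score per integration dimension

@dataclass
class Objective:
    """A target derived from a collective commitment, not from any single program."""
    dimension: str
    target: float
    source: str                  # e.g. "national commitment", "Ramsar Convention"

def evaluate_system(interventions: list[Intervention],
                    objectives: list[Objective]) -> dict[str, float]:
    """Aggregate the effects of every intervention constituting the de facto
    policy, then report the gap to each collectively set objective."""
    state: dict[str, float] = {}
    for itv in interventions:
        for dim, effect in itv.effects.items():
            state[dim] = state.get(dim, 0.0) + effect
    return {obj.dimension: state.get(obj.dimension, 0.0) - obj.target
            for obj in objectives}

# Hypothetical coastal area: the ICM project is only one intervention among
# many, and degrading forces belong to the same management system.
system = [
    Intervention("ICM pilot project", True, {"water_quality": +0.2}),
    Intervention("port expansion", False, {"water_quality": -0.5}),
    Intervention("sectoral fisheries regulation", False, {"water_quality": +0.1}),
]
goals = [Objective("water_quality", target=0.0, source="national commitment")]

print(evaluate_system(system, goals))  # {'water_quality': -0.2}: system falls short
```

Read this way, the ICM pilot's own effect is positive while the system-level gap remains negative, which is precisely why evaluating the initiative and evaluating the management system answer two different questions.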


4.3. A dual level framework for evaluating ICM initiatives and coastal management systems This methodology needs to be adapted to each case by the analysts who implement it. We only propose a generic analysis framework, on which one can build an evaluation. Interestingly, it may also be used as interview guidelines when carrying out the study and meeting stakeholders. We do not pretend it to be immensely innovative by itself, but we believe its mobilization for ICM evaluation is. It is articulated around six groups of questions:

    



Issues: What are the integration issues between sectoral activities? What regulation needs can be diagnosed for managing conflicts over natural resources and space? What are the ‘‘environmental problems?’’ Instruments: What regulation instruments already exist (whatever their nature, be they laws, projects, plans, negotiated agreements, rules, decrees, etc., dealing with natural resources management, landscape, etc.)? Retrospective: how did these instruments accumulate over time? To tackle which issue(s)? Implementation: How are these instruments implemented? Through what kind of process? With what impact on the studied coastal area? Integration diagnosis: What is the level of integration in the way the coast is managed? Which integration dimensions are satisfactorily dealt with (e.g. integration between science and management)? Where are the weaknesses and inconsistencies (e.g. impact of agriculture on shellfish farming)? What is the current trend (towards more or less integration?)? Evaluation of ICM initiatives: Assessing their results against their objectives in a contextualized way.
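Since the framework may also serve as interview guidelines, here is a minimal illustrative sketch (in Python, with hypothetical names and abridged question wording) of how the six groups of questions could be carried into a case study as a structured checklist, with a trivial coverage check so that no dimension is left unexamined. It is an aid for organizing fieldwork, not an implementation of the evaluation itself.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the six groups of questions; wording abridged
# from the framework above, structure illustrative only.
FRAMEWORK: dict[str, list[str]] = {
    "issues": [
        "What are the integration issues between sectoral activities?",
        "What regulation needs exist for conflicts over resources and space?",
        "What are the 'environmental problems'?",
    ],
    "instruments": [
        "What regulation instruments already exist (laws, plans, decrees, ...)?",
    ],
    "retrospective": [
        "How did these instruments accumulate over time, to tackle which issues?",
    ],
    "implementation": [
        "How are the instruments implemented, through what process, with what impact?",
    ],
    "integration_diagnosis": [
        "What is the level of integration in the way the coast is managed?",
        "Which dimensions are satisfactorily dealt with, where are the weaknesses?",
        "What is the current trend (towards more or less integration)?",
    ],
    "icm_initiatives": [
        "What are the initiative's results against its objectives, in context?",
    ],
}

@dataclass
class Finding:
    """One answer collected for one framework question during a case study."""
    group: str      # a key of FRAMEWORK
    question: str
    answer: str
    source: str     # interviewee, document, field observation, ...

@dataclass
class CaseStudy:
    """All findings for one coastal area."""
    area: str
    findings: list[Finding] = field(default_factory=list)

    def coverage(self) -> dict[str, int]:
        """Number of findings per group: a quick check that every
        dimension of the framework has been examined at least once."""
        counts = {group: 0 for group in FRAMEWORK}
        for f in self.findings:
            counts[f.group] += 1
        return counts

# Usage sketch with a hypothetical finding:
case = CaseStudy(area="Bay of Brest")
case.findings.append(Finding(
    group="issues",
    question=FRAMEWORK["issues"][0],
    answer="Tension between shellfish farming and agricultural runoff.",
    source="stakeholder interview",
))
print(case.coverage())  # every group except 'issues' still at 0
```

The coverage output merely makes gaps visible at a glance; the analytical work of answering the questions of course remains qualitative.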

Answering such questions looks challenging in many ways, and it is. However, we have had the opportunity to implement the methodology on several occasions, and the results always proved quite robust. Its replicability was also tested, with promising results, by master's students on contrasting coastal areas in France [38] and India [39]. Such a framework is not isolated: it is in synergy with other recent research in environmental management that aims at reframing evaluations so as to strengthen their reporting and comparison potential. In addition, we have noticed that even when used too quickly and imperfectly because of various constraints, it greatly helps in thinking and debating more rigorously about the actual progress being made towards integrated coastal management. Used in a more systematic and methodical way, the framework also shows potential for rapid appraisals. We are optimistic that further refinement and elaboration will come with experience, and we invite the ICM community to adopt, test and discuss this framework.

5. Conclusion

In this paper, we addressed the evaluation challenge as formulated by Olsen, Lowry and Tobey, according to whom "there is an urgent need for an accepted ICM evaluation methodology" [40]. We built our approach on the fundamental dialectic between evaluating integrated coastal management initiatives and evaluating the level of integration of coastal management. We proposed an analysis framework for coastal management systems that, compared with objectives collectively set for these systems, makes it possible to really evaluate the actual implementation of ICM. Evaluating ICM initiatives then becomes less problematic and traumatizing, since one no longer implicitly assumes that an ICM program is the only one in charge of ICM implementation, and thus accountable for it. In other words, one realizes that a coastal management program is not the only, or even the main, "coastal manager". The objectives and results of every initiative and activity remain a central reference for evaluation, but their label does not.

As we invite researchers and practitioners to question, test, and refine or reject this approach, we are aware that a significant effort is still to be made, notably as to how these new evaluations feed back into the design and implementation of ICM programs. While we believe our framework is of real help to evaluation practice, more thinking is definitely needed on the kind of impact it may have on new initiatives, and on how to set their objectives so that, ultimately, they can achieve them and ICM can keep its promises.

Last, we want to acknowledge and anticipate that this paper may sound very critical, perhaps too critical, towards a community of researchers and practitioners who have been working on ICM issues for decades now, taking impressive steps forward against heavy odds. Indeed, developing a critical approach to the existing body of work and proposing new approaches is possible only because such resources exist and are valuable. Moreover, we believe in the intrinsic importance of true critical debate over the theoretical and methodological choices underpinning scientific approaches to integrated coastal management, and we hope that this paper will raise some debate. We also hope to have demonstrated the fruitfulness of confronting three spheres that seldom dialog: integrated coastal management, environmental management and the evaluation of public policies. Beyond that, the thorny shift in evaluation paradigm that we suggest is not different in nature from the one most policy domains are currently confronted with. It has become central on the agenda of evaluation specialists, with several authors suggesting in one way or another to "span the boundaries [of evaluation] from single-organization focus towards a partnership network", and to reframe traditional evaluations within "multi-organizational spheres of performance" [13].

References

[1] Cicin-Sain B, Knecht RW. The problem of governance of US ocean resources and the new exclusive economic zone. Ocean Development and International Law 1985;15(3–4):289.
[2] Sorensen JC, McCreary ST. Institutional arrangements for managing coastal resources and environments; 1984.
[3] Underdal A. Integrated marine policy: what? why? how? Marine Policy 1980;July:159–69.
[4] Sorensen J. The international proliferation of integrated coastal management efforts. Ocean and Coastal Management 1993;21(1–3):45–80.
[5] Clark JR. Integrated management of coastal zones. Rome: FAO; 1992.
[6] Denis J, Hénocque Y. Methodological guide to integrated coastal zone management. Steps and tools towards integrated coastal area management. Paris: UNESCO/IOC; 2001.
[7] OECD. Coastal zone management: integrated policies. Paris: OECD; 1993.
[8] World Bank. Noordwijk guidelines for integrated coastal zone management. Noordwijk, Netherlands: World Coast Conference; 1993.
[9] UNEP. Guidelines for integrated management of coastal and marine areas with special reference to the Mediterranean Basin. Nairobi, Kenya: United Nations Environment Programme; 1995.
[10] Olsen S. Increasing the efficiency of integrated coastal management. Montréal, Canada: The World Conservation Union (IUCN) annual meeting; 1996.


[11] European Parliament and Council. Recommendation concerning the implementation of Integrated Coastal Zone Management in Europe, 2002/413/EC.
[12] Burbridge PR. A generic framework for measuring success in integrated coastal management. Ocean and Coastal Management 1997;37(2):175–89.
[13] Uusikylä P, Valovirta V. Three spheres of performance governance. Spanning the boundaries from single-organisation focus towards a partnership network. Ljubljana, Slovenia: European Group of Public Administration Annual Conference; 2004.
[14] Conseil Scientifique de l'Evaluation. Petit guide de l'évaluation des politiques publiques. Paris: La Documentation Française; 1996.
[15] Nioche JP, Poinsard R, editors. L'évaluation des politiques publiques. Paris: Economica; 1984.
[16] Olsen S, Tobey J, Kerr M. A common framework for learning from ICM experience. Ocean and Coastal Management 1997;37(2):155–74.
[17] USEPA. Measuring progress of estuary programs. Washington, DC: US Environmental Protection Agency, Office of Water; 1994.
[18] Billé R. La Gestion Intégrée du Littoral se décrète-t-elle? Une analyse stratégique de la mise en œuvre, entre approche programme et cadre normatif. PhD dissertation, ENGREF-CIRED/RGTE, Paris, 2004.
[19] Bako-Arifari N, Le Meur PY. Une anthropologie sociale des dispositifs du développement. In: Baré JF, editor. L'évaluation des politiques de développement. Approches pluridisciplinaires. Paris: L'Harmattan; 2001. p. 121–73.
[20] Muller P, Surel Y. L'analyse des politiques publiques. Clefs. Paris: Montchrestien; 1998.
[21] Cicin-Sain B, Knecht RW. Integrated coastal and ocean management. Concepts and practices. Washington, DC: Island Press; 1998.
[22] Rupprecht Consult, International Ocean Institute. Evaluation of Integrated Coastal Zone Management in Europe. Final report, 1 December 2006.
[23] Billé R. How necessary are ICM-labelled organisational tools in addressing integration issues? Case study on the Bay of Brest (France). International conference People and the Sea: Maritime Research in the Social Sciences. An Agenda for the 21st Century. Amsterdam: Center for Maritime Research, Netherlands Institute for the Social Sciences; 2001.
[24] Billé R, Mermet L. Integrated coastal management at the regional level: lessons from Toliary, Madagascar. Ocean and Coastal Management 2002;45(1):41–58.
[25] Billé R, Mermet L. Sectoralization of an integrated coastal management program. A case study in Madagascar. Journal of Environmental Planning and Management 2002;45(6):913–26.
[26] Anonymous. Evaluation externe du PNAE et du PE2. Rapport de synthèse, 2000.
[27] Mermet L. L'analyse stratégique de la gestion environnementale, illustrée par les tribulations d'un noyau relictuel de population d'ours brun dans les Pyrénées occidentales françaises. Paris: Document de travail interne RGTE; 1998.
[28] Pullen S, et al. Towards the 2002 World Summit on Sustainable Development, Johannesburg. Reports of the conference working groups. Paris: UNESCO (IOC), University of Delaware (CSMP); 2001.
[29] Thia-Eng C, et al. Measuring the performance of Integrated Coastal Management Programmes. Putrajaya, Malaysia: The East Asian Seas Congress; 2003.
[30] Cicin-Sain B. Research needs for ICZM: an agenda for the 21st century. Keynote address to the international conference People and the Sea: Maritime Research in the Social Sciences. An Agenda for the 21st Century. Amsterdam: Center for Maritime Research, Netherlands Institute for the Social Sciences; 2001.
[31] Cicin-Sain B, et al. Status and prospects for integrated coastal management: a global perspective. Mediterranean Integrated Coastal Area Management. Nice: UNESCO/University of Nice; 2000.
[32] Coastal Zone Canada Association. Baseline to the Coastal Zone Canada 2000 conference. Saint John, New Brunswick: Coastal Zone Canada Association; 2000.
[33] Comité Interministériel de l'Evaluation des Politiques Publiques. Les zones humides: rapport d'évaluation. Paris: Commissariat Général du Plan, La Documentation Française; 1994.
[34] Bazin P, Mermet L. L'évaluation des politiques "zones humides" de 1994: son origine, son déroulement, ses résultats. Annales des Mines 1999:79–89.
[35] Lascoumes P, Setbon M. L'évaluation pluraliste des politiques publiques. Enjeux, pratiques, perspectives. Paris: GAPP-CNRS, Commissariat Général du Plan; 1996.
[36] Kessler MC. L'évaluation de la politique étrangère. L'exemple français au crible de la science politique. In: Baré JF, editor. L'évaluation des politiques de développement. Approches pluridisciplinaires. Paris: L'Harmattan; 2001. p. 23–51.


[37] Leroy M. Gestion stratégique des écosystèmes dans un contexte d'aide internationale: engagements environnementaux et dispositifs de gestion dans la vallée du fleuve Sénégal. PhD dissertation, ENGREF-CIRED/RGTE, Paris, 2004.
[38] Rérolle T. Diagnostic de l'intégration de la gestion du littoral dans le Bassin d'Arcachon. Paris: Ecole Polytechnique/ENGREF; 2001.
[39] Teoh SC. Analyse de la mise en oeuvre des objectifs d'intégration de l'environnement et du développement. Le cas de l'Inde: de la coopération internationale à l'action locale. Paris: CIRED-RGTE, ENGREF-EHESS; 2001.
[40] Olsen S, Lowry K, Tobey J. A manual for assessing progress in coastal management. Narragansett, RI: USAID/SIDA; 1999.