
Abandoned Oil Wells – End of Field Life


As petroleum production continues to decline in many parts of the globe, more operators are facing well abandonment as a reality. Drilled wells are plugged and abandoned for different reasons, the typical operational reason being that the well has reached its economic limit, or that it was found on drilling to be a dry hole (refer). According to Ide, T., et al. (2006), a wellbore is taken to be a high-transmissivity pathway for fluids. Even with current sealing and abandonment procedures, individual wells have a tendency to lose their integrity due to various factors, which include, but are not limited to, poor cementation, poor or ineffective plugging, an increase in formation pressure after abandonment, and corrosion of the casing (refer).

Safe and economical well abandonment is important to the industry from both environmental and financial standpoints. Improper abandonment can require re-abandonment procedures to mitigate environmental contamination or to comply with updated regulations, placing an increased financial burden on the operator.

1. Introduction


All drilled wells have a distinct life cycle with respect to their cost, duration, recovery, and value. Although these characteristics and attributes are specific to an individual well, all producing wells pass through the same initial and final states, beginning with completion and ending with abandonment. Once the drilling stage is over and target depth is reached, a decision to complete the well is made based on the reservoir attributes: is the well dry, or is the hydrocarbon in place of economic value? Ultimately, every well becomes dormant because of reduced economic returns or technical problems. When a well stops producing, it may either be shut in (SI), temporarily abandoned (TA), or permanently abandoned (PA).

With ageing fields fast approaching their economic limit, abandonment is becoming increasingly frequent. Many operators have to modify their abandonment procedures to fit the wellbore condition and to make certain that abandoned wells remain permanently sealed and prevent commingling, while balancing the environmental objectives of abandonment against its actual cost. Wells that are not abandoned appropriately can become a major hazard to underground sources of drinking water and possibly to the aquatic environment [8].

Shut-in status (SI)

When a well is shut in, it is still capable of flowing, but the Christmas tree, safety valve (SV), and wing valves are all closed. Usually a well is shut in because of a technical or operational problem that is believed to be temporary. There is no maximum time for a well to remain in shut-in status as long as it is regularly maintained according to regulatory requirements and procedures.

Temporary abandonment status (TA)

A well is said to be temporarily abandoned when the wellhead is removed, the producing interval is isolated with a plug, and the casing is plugged below the mud line.


There are various reasons why a well is abandoned:


Drilled wells must at some point be abandoned. Before a well reaches the point at which it has to be abandoned, it passes through various stages in its life cycle. It begins with the survey and exploration of an area for signs of hydrocarbons [1]. This may lead to the rewarding and exciting discovery of an accumulated hydrocarbon deposit, which is followed by the processing of the acquired data and finally the drilling process. During drilling, the well is created by use of a drill bit and cased off at specific depths as drilling progresses.

Another fulfilling target is reached when the first hydrocarbon is produced, a milestone that is unfortunately followed eventually by the decline period, in which the rate of hydrocarbon production decreases. However, successful enhanced oil recovery techniques more often than not make this stage financially rewarding, as they extend the life of the well [1].

When all enhanced oil recovery techniques have been employed and producing the well is no longer economically viable, the next step is abandonment, a stage not welcomed by most operators as it means the cessation of production.

Dry hole abandonment

A drilled well is also abandoned when, after drilling, the hole is found to be dry.

Though abandonment is meant to be a permanent termination, its effects are felt for many more years than the comparatively short producing life of the well.

The main goal of any plugging and abandonment is to provide permanent and effective isolation of fluids along the subsurface formation, in the different layers where they were enclosed prior to plugging, thereby preventing fluid migration, reducing the environmental risk of contamination, and avoiding costly remedial jobs [1]. To achieve this, several significant intervals of the well must be filled and tightly closed with a sealant material from bottom hole to surface, with special attention paid to the production interval [4] and to zones of high differential pressure and temperature. The material used for plugging differs depending on the type of well being abandoned: for oil and gas wells the material used is normally cement based, while for water wells cement-based materials as well as bentonite can be used to isolate the different intervals [4]. The integrity of an abandoned well can fail for many reasons, such as plug failure and poor slurry design. A cement plug can fail to set at the desired location, as cement slurry often has a tendency to fall through the lighter drilling fluid below it [9]. Failure can also result from downhole changes which occur after the well has been abandoned [8].

Over the years, techniques for drilling and completing hydrocarbon wells have continued to evolve. This drive for new technology is due to the need to maximize hydrocarbon recovery while protecting the environment [4]. The evolution of well abandonment techniques has been much slower than that of drilling and completion, because abandonment is considered a sunk cost [4].

Project Objective

The objective of this thesis is to review the factors which contribute to the overall integrity of abandoned wells. These include well parameters, cement placement techniques, and casing integrity, all of which play an important role in the design, construction and actual execution of an abandonment project. The factors that contribute to integrity differ from well to well, because each well is a unique entity and hence requires an independent abandonment design.



For instance, in a well where a fish is lost in hole, the abandonment design has to take into consideration remedial action or alternative ways to set the cement plug, as there may be no access to the bottom of the hole to set a bottom plug in the sump.

Abandoned wells can be a cause for concern due to their potential to act as flow paths between formations which under normal circumstances are isolated, including underground sources of drinking water. Of greatest concern are those abandonments with faulty plugs, compromised casing, or cracks in the cement [7].

This work is aimed at highlighting the different factors which contribute to the integrity of an abandoned well.

Chapter 2

Literature review

Well abandonment has come a long way since the first discovery of oil and gas. With increasing awareness of the importance of environmental protection, the need to improve abandonment processes has become a major concern for many operators, as abandoned wells are considered a possible conduit for fluid flow between different formations. According to C. H. Kelm et al., the abandonment of a well must be carried out in a best-practices manner by addressing the following fundamental aims of any abandonment process:

  • The need to protect any hydrocarbon left in the pay zone of the formation drilled.
  • The need to preserve and prevent contamination of freshwater zones (for onshore rigs) penetrated during the course of drilling the well.
  • Avert any contamination of the surface environment, for instance of vegetation, air, and the marine environment.
  • The need to abide by all regulatory requirements stated for the abandonment.

In past years many papers have been published in areas ranging from alternative plugging techniques, self-healing and expandable cements, and improved cement slurry design to placement techniques, with the aim of reducing the cost of abandonment and improving its quality. Abandoned wells in an oil field are sealed using a plugging material according to regulatory requirements. An ideal plugging material, according to D. G. Calvert et al. (1994), is one which can be pumped down the drilled hole, has the ability to harden in a reasonable time, and bonds with the walls of the drilled formation and the casing in order to prevent fluid flow from one formation to another. While regulations vary from place to place, the general practice involves plugging the wellbore with a Portland cement material specifically designed for the isolation purpose. In his review of plugging and abandonment techniques, D. G. Calvert et al. stated that the cement mixtures used in oil and gas wells vary depending on the type of hole to be isolated.

Very few papers have been published that focus on the integrity of the actual well after abandonment. Liversidge, D. et al., in their work on permanent plug and abandonment solutions for the North Sea, presented case histories of the Brent South field abandonment project, carried out using both a class G cement with an expandable-agent system and a flexible cement, in accordance with current stringent regulations.

Preserving cement integrity during well completion and production, as well as during abandonment, is of critical importance for long-term protection. In past years numerous papers and texts on cement sheath failure, improved flexible and expanding cements, and related topics have been published, indicating the increasing need to improve well abandonment and reduce cost. Examples of published works include, but are not limited to, Bosma et al. (2000), Ravi et al. (2002), Glessner et al. (2005), Mainguy et al. (2007), D. G. Calvert et al. (1994), Locolier et al. (2006), and Liversidge et al. (2006). Although many papers have been written, very little work has been done to investigate cement plug integrity after abandonment. The cause ascribed to this may be that permanent abandonment is considered a non-profit venture.

Mainguy, M. et al. (2007) carried out an analysis of the probability of failure of cement plugs subjected to varying compressive and tensile loads, using an idealised reservoir model designed to capture changes in downhole conditions. The study identified a greater tendency for the sealing material to fail in wells where the pressure, temperature and stress state are unstable due to downhole changes. It concluded that when the plug is subjected to maximum tensile stress it fails because of the low tensile strength of conventional class G cement. Though the authors suggested the use of pre-stressed cements, as they adapt better to downhole changes, their work did not cover the problem of rock-cement de-bonding, which greatly reduces the sealing capacity of cement. In a study on a successful method of setting cement plugs, R. C. Smith et al. (1984) investigated the recurring failures of cement plugs due to the instability caused by the density difference between the cement and the drilling mud. They suggested thickening the mud with bentonite before spotting the cement, so as to allow a greater density difference, and, to control the direction of flow of the cement slurry, placing a diverter at the end of the tubing to redirect the flow and improve stability. Drilling fluid can also be used as a plugging material by adding a cementitious additive; the additive can be either fly ash or blast-furnace slag, both of which have the characteristics of a cement in that they harden when mixed with water.

Cement is not naturally occurring but man-made, and like any other man-made material it is expected to age, wear, and degrade over time under subsurface conditions that may differ from those at the time it was initially set [W. Zhou et al. 2005].

Plugging oil wells is a common operation, and one that is increasing as mature fields reach the end of their producing lives. In general, plugging and abandoning a well involves filling a certain length of casing or open hole with a volume of purpose-designed cement mixture in order to provide adequate sealing against upward migration of formation fluid. After the cement plug is placed at the desired location, it is left to harden over time. The placement of the cement plug is a major part of abandonment, as its failure will cause commingling of fluids from different formations. The setting and spotting of cement plugs can be done in various ways depending on the wellbore condition and regulatory requirements.

A review of worldwide accepted plugging procedures shows that a minimum of three cement plugs is required. The first plug is put in place by squeezing cement through the perforations into the former producing zone in order to seal off any further influx of reservoir fluid into the wellbore [2]. The second plug is usually set towards the middle of the wellbore or near a protective casing shoe. Finally, the third plug is set about 200 to 300 ft below the mud line. In general, the length of a plug ranges from 100 to 200 ft, depending on the regulatory requirements. Any additional plugs are dependent on the wellbore condition.
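The generic plug program described above can be checked mechanically. The sketch below uses only the figures quoted in the text (minimum three plugs, 100 to 200 ft plug lengths, surface plug 200 to 300 ft below the mud line); actual limits come from the applicable regulator, and the function name and structure are illustrative assumptions.

```python
# Minimal sanity check of a plug program against the generic limits
# quoted in the text. Real acceptance criteria are regulator-specific.

MIN_PLUG_LEN_FT = 100
SURFACE_PLUG_MIN_FT = 200   # top of surface plug, depth below mud line
SURFACE_PLUG_MAX_FT = 300

def check_plug_program(plugs):
    """plugs: list of (top_depth_ft, length_ft), shallowest first.

    Returns a list of human-readable warnings (empty list = OK)."""
    warnings = []
    if len(plugs) < 3:
        warnings.append("fewer than the minimum three plugs")
    for top, length in plugs:
        if length < MIN_PLUG_LEN_FT:
            warnings.append(f"plug at {top} ft is only {length} ft long")
    surface_top = plugs[0][0]
    if not (SURFACE_PLUG_MIN_FT <= surface_top <= SURFACE_PLUG_MAX_FT):
        warnings.append(f"surface plug top at {surface_top} ft below mud "
                        "line is outside the 200-300 ft window")
    return warnings

# Example program: surface plug, plug near a casing shoe, and a plug
# squeezed across the former producing zone (depths are illustrative).
program = [(250, 150), (5200, 150), (9800, 200)]
print(check_plug_program(program))   # -> [] (no warnings)
```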

Although observations and studies show that cement plugs have the ability to perform as expected for up to several decades, uncertainty exists as to whether the material can maintain its isolation integrity over several thousand years. Recent studies show that abandoned wells in which CO2 was used for enhanced oil recovery prior to abandonment have the potential to leak and allow CO2 migration, notwithstanding that the well was properly abandoned [Scherer, G. W. et al. 2005]. This is mainly due to corrosion and degradation of the casing and cement, which occur when carbonic acid, formed from the dissolution of CO2 in brine, attacks the cement and casing [Scherer, G. W. et al. 2005]; the process depends on the formation temperature, the cement composition, the brine, and the rock mechanics and composition. Potential leakage of reservoir fluids through degraded cement plugs is hence a primary concern.

Various works on inter-formational flow show that there is still the possibility of flow between formations even after successful plugging of the different intervals. This case can arise when the abandoned well is near an active well. Javandel et al. developed the first analytical model; it showed the possibility of flow into an upper formation in response to injection pressure build-up in a lower formation. Striz and Wiggins carried out further work by developing a coupled model to predict flow, using a steady-state approach to approximate transient flow. This model can be used to estimate fluid flow in abandoned wells using available field data.

Recent statistics show that in the US one in every three wells drilled for hydrocarbons is dry and has to be plugged and abandoned [D. G. Calvert et al. 1994]. Wells are drilled for various reasons, ranging from industrial and oil and gas to municipal uses, but in the end these wells have to be abandoned [D. G. Calvert et al. 1994]. Some wells were abandoned before any regulations and guidelines were defined; these wells may have been plugged improperly or not plugged at all, and they now pose a threat to groundwater quality. For the aim of the regulating bodies (underground water protection and hence environmental protection) to be achieved, operating companies must understand that following the various regulatory requirements alone is not sufficient to guarantee lasting protection of the environment [4].

It is sometimes difficult for operators both to abide by the regulatory requirements and to develop a plan which will seal off the reservoir and provide long-term protection of the environment while justifying the overall cost [4].

Currently there is a sharp rise in the abandonment of ageing and mature fields which have either reached their economic limit or are no longer producing (refer).


The initial stage of a decommissioning process is the plugging and abandonment of the wells. During this stage, the tubing, casing strings and conductors are cut below the mud line and removed, and zones are sealed with cement plugs to isolate the flow paths between the reservoir fluids, other zones and the surface. Zones not sealed with a cement plug are filled with fluid of the proper weight and consistency to prevent movement of other fluids into the wellbore.
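The "proper weight" requirement above can be made concrete with the standard oilfield hydrostatic relation, P [psi] = 0.052 x mud weight [ppg] x true vertical depth [ft]. The sketch below is illustrative only; the 200 psi overbalance margin is an assumption, not a regulatory figure.

```python
# Hedged sketch: minimum fluid density needed to balance a zone so
# that formation fluids cannot flow into the wellbore. Uses the
# standard relation P_hydrostatic [psi] = 0.052 * MW [ppg] * TVD [ft].

def required_mud_weight(reservoir_pressure_psi, tvd_ft, overbalance_psi=200.0):
    """Minimum mud weight (ppg) to balance a zone plus a safety margin.

    The 200 psi default overbalance is an illustrative assumption."""
    return (reservoir_pressure_psi + overbalance_psi) / (0.052 * tvd_ft)

# Example: a depleted zone at 3,046 psi and 9,000 ft TVD.
mw = required_mud_weight(3046, 9000)
print(f"{mw:.2f} ppg")   # ~6.94 ppg for this illustrative case
```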

Most abandonments follow a general methodology that is adjusted to meet individual well requirements. As procedures can and do change significantly for each well, the cement plug design should frequently be tuned to reach minimum wait-on-cement (WOC) times under varying downhole conditions. Near-wellbore geology should be assessed, and the wellbore and annuli properly cleaned, to avoid microannuli and poor cement bonding. Traditional techniques include cement squeezes, gel squeezes, and mechanical plugs such as bridge plugs and packers. Cement and gel technologies are mainly used for behind-casing repair, while mechanical options are usually confined to plugging the casing.

In the general process of abandonment there are basic steps which are followed to ensure a successful plug and abandonment program. These include planning, wellbore equipment testing, design, and well geometry assessment.


The most essential decision after when to abandon a well [11] is how. Preparation is a key ingredient in the plug and abandonment of a well. Careful planning and an effective plugging and abandonment procedure are crucial to prevent gas or fluids from migrating to the surface or to other subsurface formations. In addition to the environmental risks that come with poor seals, corrective plugging may become necessary, increasing the cost and difficulty of abandoning a well. However, operators and service companies have several options for obtaining complete, permanent abandonment.

Every P&A varies, as each well is unique. The techniques used are generally based on industry practice, research, and conformance with the relevant regulatory compliance requirements. The synthesis of practical knowledge, current technology and regulatory requirements results in the most effective wellbore plugging and abandonment possible.

Wellbore equipment testing

A preliminary inspection and survey of the wellhead and wellbore condition is carried out to determine whether the valves on the wellhead are operable; if they are not, they are hot-tapped. The wellbore is surveyed using a slickline unit to check for any obstructions in the well, to confirm measured depth, and to gauge the internal diameter of the tubing. After the survey and removal of the slickline, the annuli and tubing are filled with fluid, and a pump installed at the wellhead is used to ascertain an injection rate into the perforations. The tubing and casing are also pressured up to check for integrity. Casing annuli are likewise pressure tested to check for communication problems between casing strings, with the test pressure recorded over a period of time. The integrity and reliability of the primary cement is assessed to ensure that the cement sheath is still providing isolation across the reservoir and the cap rock.
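The timed pressure tests described above reduce to a simple acceptance check on the recorded gauge readings. In the sketch below, the 10 % allowable pressure drop is an assumed criterion for illustration only; the actual figure comes from the applicable regulation or operator standard.

```python
# Illustrative acceptance check for a timed casing or annulus
# pressure test: record gauge pressure over the test period and pass
# the test if the decline stays within a tolerance (assumed 10 %).

def pressure_test_passes(readings_psi, max_drop_fraction=0.10):
    """readings_psi: chronological gauge readings during the test."""
    initial = readings_psi[0]
    drop = initial - min(readings_psi)
    return drop <= max_drop_fraction * initial

print(pressure_test_passes([1000, 995, 990, 988]))  # small bleed-off -> True
print(pressure_test_passes([1000, 900, 780, 650]))  # leaking string -> False
```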

A well control plan is designed to establish reservoir condition and subsequently the contingency responses to any event which may occur during the abandonment process.


Prior to plugging and abandoning a well, the operator reviews the existing well design, records of past work, previous well performance, and the geologic and reservoir conditions. An investigation of everything that may relate to health and safety issues, as well as regulatory requirements, is also performed, after which the design of the abandonment program begins. The design is based on the existing wellbore and reservoir conditions, as determined from the review and investigation. This allows the operator to plan an abandonment program that will leave the well safe into the future. P&A design needs to be integrated in the planning of the well and should be considered in a single budget. Many factors must be considered in order to design an effective abandonment program, such as the reservoir status, the integrity of the primary cement, hole cleaning and cement placement technique, the temperature and pressure of the well, the type of fluid in the well, the age of the well, and the status of the cap rock.

  • Fluid Type

Drilled wells produce fluids in liquid and gaseous form. Wells containing sour (sulphur-rich) fluids can be expected to exhibit accelerated corrosion rates and stress cracking, which, depending on the age and construction of the wellbore, may impair the capacity to perform plug and abandonment. To mitigate this, corrosion-resistant components can be used.

  • Reservoir status

In the design of a P&A, it is necessary to consider the reservoir status: its stability, the current pressure and temperature, the pressure at the initial stages of well development, and the horizontal and vertical permeability of the reservoir. With this information, the plug and abandonment is designed to withstand the pressure of the well after it finally reaches equilibrium.

  • Cap rock Status

It is also necessary to take into consideration the cap rock status: is it still impermeable, have production activities induced fractures, or has weathering taken effect?
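The reservoir-status consideration above implies a simple design load: once the shut-in reservoir returns to equilibrium pressure, the bottom plug must hold the difference between that pressure and the hydrostatic head of the abandonment fluid left above it. The sketch below is a rough illustration under assumed figures, not a design method; it uses the standard 0.052 psi per ppg per ft conversion.

```python
# Rough sketch of the differential pressure a bottom plug must hold
# at reservoir equilibrium, given the abandonment fluid left above it.
# All numeric inputs in the example are illustrative assumptions.

def plug_differential_psi(equilibrium_pressure_psi, plug_depth_ft, fluid_ppg):
    """Net pressure across the plug; zero if the fluid column balances."""
    hydrostatic = 0.052 * fluid_ppg * plug_depth_ft
    return max(0.0, equilibrium_pressure_psi - hydrostatic)

# Example: 4,500 psi equilibrium pressure, plug at 9,000 ft, with
# 8.6 ppg brine left in the hole above the plug.
print(f"{plug_differential_psi(4500, 9000, 8.6):.0f} psi")  # ~475 psi
```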

Placing the Plugs

After the design and planning of the abandonment program, calculations must be made to determine the amount of cement required and the number of wiper plugs needed to separate the cement plugs from the rest of the fluids. The use of wiper plugs provides a stable platform on which the cement can be set. A wiper plug is placed in the wellbore, and then a predetermined quantity of cement slurry is pumped on top of it. Because of its weight, the slurry becomes a driving force: it falls to the bottom of the hole, pushing the wiper plug ahead of it and forcing existing air and produced fluids back into the formation. Another plug and perhaps a little more cement finish the job. In most wells, where there is one permeable zone, one plug, one volume of cement and the surface plug are all that is needed. In other wells, additional wiper plugs, additional cement slurry, and possibly spacers of water or drilling fluid are used consecutively until all of the air and fluid has been forced out into the formation, there is no pressure on the pipe, and it is apparent from the returns that the whole wellbore is appropriately sealed. The quantity and kind of spacer fluid that can be used depend on individual state regulations. The remaining casing at the top of the well is cut off 3 ft below ground level.
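The cement-quantity calculation mentioned above can be sketched for the common case of a plug set inside casing, using the standard capacity relation capacity [bbl/ft] = ID^2 / 1029.4 with the inside diameter in inches. The 25 % excess factor and the example casing size are illustrative assumptions; in practice the excess allows for slurry contamination and hole irregularity.

```python
# Sketch of a plug cement-volume estimate inside casing, using the
# standard capacity relation: capacity [bbl/ft] = ID_in**2 / 1029.4.
# The 25 % excess factor is an illustrative assumption.

def plug_cement_bbl(casing_id_in, plug_length_ft, excess=0.25):
    capacity_bbl_per_ft = casing_id_in ** 2 / 1029.4
    return capacity_bbl_per_ft * plug_length_ft * (1.0 + excess)

# Example: a 150 ft plug inside 7-in casing (6.276-in ID assumed).
print(f"{plug_cement_bbl(6.276, 150):.1f} bbl")  # ~7.2 bbl
```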

Along with this general methodology, each region stipulates its own abandonment methods based on field conditions and local regulations as can be seen in the following examples.

The P&A steps in the Los Angeles Basin are as follows [12]:

The abandonment program is prepared with the support of a qualified engineer.

A schematic showing the current mechanical condition of the well is prepared.

The geologic condition of the well, including the structure, faulting, and producing zones is assessed.

The depth and position of cement plug that will cover the producing zones and any potable water zones if applicable is measured and verified.

Choice of whether to use perforating or cavity shots is made.

The casing is pressure tested after setting cement retainers.

The different equipment required for the job is determined and assembled.

Estimate of abandonment/re-abandonment costs is made.

In contrast, the steps followed for the Hutton tension-leg platform (TLP) in the East Shetland Basin of the North Sea involved three phases [13]:

Perform standalone wireline intervention.

Perform drilling unit intervention to set the cement plugs after the first wireline plug has been set.

Cut casing 10 ft below the seabed and recover casing stumps.

Another abandonment performed in the North Sea followed a different procedure [14]:

A permanent cement primary barrier placed immediately above the reservoir.

A secondary barrier placed as a back-up to the first barrier.

A third barrier then placed near the surface to isolate shallow water-bearing sands.

Completion tubing severed and the wellhead recovered.

In Western Canada, the traditional abandonment procedure of wells with casing vent flows included the following:

The source of the casing vent flow is estimated or determined.

If the source zone is shallower than the producing zone, the producing zone is abandoned.

The source zone is perforated. Depending on the feed rate obtained at the estimated source depth, either a bradenhead or a retainer squeeze is performed.

Retrievable tools are used as required.

Typically, Class G cement with calcium chloride and some fluid-loss control is utilized.

The slurry is placed and a static squeeze pressure of 7 MPa is attempted.

As needed, cement is drilled out and perforations are tested for seal.

Often, several attempts are made in order to obtain a static squeeze pressure of 7 MPa at surface or to mitigate the casing vent flow.

Techniques for Abandonment

The techniques used for plugging and abandonment of drilled wells worldwide are generally based on industry practice. These techniques include:

  • Rig
  • Coiled-tubing unit
  • Rigless abandonment


The flexibility of coiled tubing has recently been tailored to develop rigless abandonment [15,16]. This method focuses on harmonizing all well services to accomplish utmost efficiency. Coiled-tubing unit [fig….] abandonment, like any other method, is more effective when appropriate cementing procedures are used from the start of the well's life, from original zonal isolation with the primary cement sheath through to plugging and abandonment. Early prevention of a microannulus can help operators obtain a complete final seal.

Five main criteria are recommended for optimal abandonment performance with coiled tubing:

  • Mobility: all equipment should be mounted on wheels for increased mobility.
  • Self-sufficiency: the service company provides nearly all activities.
  • Dry location: fluids are not drained on or near the wellsite.
  • Single operation: the job is completed in one visit to the wellsite.
  • Low mileage: move time is reduced and transport optimized for maximum efficiency in unit and camp moves.

In this abandonment technique, geological considerations such as the type and condition of the reservoir and cap rock formations are taken into account. Permanent seals typically must be made between producing zones and at impermeable cap rock formations. The condition and configuration of the cement, perforations, tubulars, and downhole equipment are also considered.

In addition to providing complete, permanent seals, the use of coiled-tubing can help increase abandonment efficiency. This method can provide the following advantages:

  • Increased tripping speeds
  • Increased rig-move efficiency
  • Precise placement of cement plugs: exact spotting of plugs at the interval of interest, even in deep wells, as coiled tubing can be reciprocated while pumping.
  • Suitable for use on live wells: it is possible to run a CTU for a remedial cement squeeze in a live well, as the wellbore can be controlled using the BOP and stripper assembly.
  • No need to pull production tubing: existing tubing and wellheads do not have to be removed to access the producing interval.

Success with the coiled-tubing method has been recorded in Oman.


In the early years of the oil and gas industry, many wells were drilled, found to be dry, and subsequently abandoned without much consideration given to the manner of abandonment. Sometimes tree stumps were thrown into the well as a means to plug it [3]. During this era the preservation of groundwater, and of the environment in general, was not a major issue, and there were no defined regulations from the oil states or agencies. Towards the end of the 1930s, different states and agencies in the US started establishing regulations, defining requirements to ensure better well abandonment [D. G. Calvert et al. 1994].

The number of regulations guiding well abandonment has risen along with the growing need to protect the environment in countries around the world. Today most countries have some form of regulation that addresses well abandonment requirements; though these regulations are not uniform and differ from country to country and body to body, they provide a minimum standard for operating companies. For instance, for the state of California in the United States of America, the different governing bodies have their own regulations, as follows:

  • Minerals Management Service (MMS): the basic plugging requirements are found in 30 CFR 250.110 Subpart G.
  • Department of Conservation, Division of Oil, Gas, and Geothermal Resources (DOC): the California Code of Regulations Title 14, Division 2, Chapter 4, beginning with Section 1745, focuses on the fundamental plugging requirements.
  • California State Lands Commission (CSLC): the fundamental plugging requirements are found in the California Code of Regulations Title 2, Section 2128(q).

Abandonment in the North Sea

In the North Sea, as in the US, the regulations differ. The different countries bordering the North Sea have their own governing bodies and consequently different regulations. The law in the UK, Norway, Denmark and Holland holds the last operator of a well accountable and responsible for all the costs incurred in permanently abandoning the well. It also holds them accountable for any leakage and any clean-up that may be required in the event of a leak.

Abandonment programs in the North Sea are designed to meet the guidelines for abandonment issued by the operators' association or government. For the UK sector of the North Sea, abandonment guidelines are issued by the UKOOA; for the Norwegian sector the guidelines are contained in the NORSOK/PTIL D-010 standard; and for the Netherlands they are contained in the Dutch Mining

Environmental Impact Assessment (EIA) Planning Process


“Environmental Impact Assessment (EIA) is a 20-year-old tool for environmental management, not living up to its full potential” (Mudge, 1993).

This chapter describes the Environmental Impact Assessment (EIA) planning process as conventionally depicted in EIA texts and guidelines. EIA characteristics and objectives are presented first, because characterisations of the EIA planning process are interdependent with the assumed EIA characteristics and objectives. The depictions of EIA characteristics, EIA objectives and the EIA planning process vary greatly from source to source. These variations are more the result of the varying perspectives of different authors than of clearly defined schools of thought. Although there has been a process of evolution over the past two decades, there are also many instances where elements suggested in earlier works have not been incorporated into more recent portrayals.

This overview of the conventional EIA planning process is a point of departure for the modifications and refinements discussed in later chapters of this research. The conventional portrayals of EIA characteristics, EIA objectives and the EIA planning process will also be revisited in later chapters, taking account of their combined implications. The following are EIA characteristics as commonly depicted in introductory EIA literature and guidelines:

As a field of study, EIA draws upon many social and natural science disciplines (Jain, Urban and Stacey, 1977). Drawing upon diverse disciplines is necessary to understand the significant aspects of the environment and to predict how those environmental attributes may change over time, both with and without a proposed action. EIA has boundaries with, and links to, both traditional disciplines and other transdisciplinary and transprofessional fields such as planning (Lawrence, 1992). EIA must transcend individual disciplines if a holistic image of the environment, with and without a proposed action, is to be presented. Hence, EIA should be viewed as a transdisciplinary field.

EIA consists of structured approaches and sets of procedures designed to ensure that environmental factors are considered in planning and decision making (Clark, 1981a). In this regard EIA is a normative procedure that seeks to identify natural and social environmental norms or ethical standards and to infuse these into planning and decision making.

In the definition of Environmental Impact Assessment, the “impact” element is often prefaced by one or more dimensional distinctions, such as: positive and negative (Mitchell and Turkheim, 1977; Rau and Wooten, 1980); time (short term, long term, frequency, duration); space (on-site, off-site); direct and indirect; quantitative and qualitative; individual and cumulative; and likelihood of occurrence (Rau and Wooten, 1980). The “assessment” component of EIA includes analysis, synthesis and management. Analysis involves data collection and compilation, the identification of likely environmental conditions and of interactions among environmental conditions and systems (Munn, 1979; Munro et al., 1986; Armour, 1990; Erickson, 1994), and the description, measurement and prediction of likely effects and interactions among effects. Synthesis includes the interpretation of the significance of effects and interactions among them (Munn, 1979; CEARC, 1988b) and the aggregation and evaluation of individual and cumulative effects (Cumulative Environmental Assessment, CEA), both with and without mitigation (Westman, 1985; Lang and Armour, 1981; Armour, 1990; Erickson, 1994; Shoanaka, 1994). Management includes mitigation (Jain, Urban and Stacey, 1977), compensation and local benefits (Armour, 1990), the management of residual impacts (CEARC, 1988b), monitoring and contingency measures, and communications and consultation activities (CEARC, 1988b).

In summary, EIA is a process that identifies, predicts, evaluates and manages the potential (or real) impacts of proposed (or existing) human activities on both the human and natural environment. The EIA planning process includes analysis, synthesis, management, communications and consultation activities. The consequences of such activities and their alternatives will result in specific impacts.

Underlying EIA practice are usually implicit application assumptions. Formal or informal institutional mechanisms are, for example, expected to be in place to compel, or at least facilitate, public or private proponents to initiate and complete an EIA planning process and the necessary documentation as a prerequisite to project approval. It is likewise expected that a systematic planning process can be devised or adapted for analysing and synthesising the appropriate data and for involving relevant agencies and the public.

It is further assumed that: there is appropriate expertise to undertake the necessary technical work and to review the outcomes of the planning process; there is a basis for choosing among alternative plans and for deciding whether an undertaking should or should not proceed; the people who make the decisions will rationally use the information provided to guide their actions; the requirements for approvals can be enforced and the impacts managed; and, if unforeseen impacts occur, contingency measures can be instituted. These application assumptions have been increasingly challenged in the EIA literature and in the decisions of courts, hearing panels and boards. The expectation that knowledge and expertise are sufficient may be especially dubious in situations characterised by emerging technologies, poorly understood environments and complex interrelationships within and among proposed actions and components of the environment.

The extension of EIA from the conceptual to the applied presupposes that EIA must also be a transprofessional field of practice: EIA comprises a core body of knowledge, skills and methods. The social and natural sciences provide the initial knowledge base. EIA seeks to integrate, and thereby transcend, the inputs and insights of a range of professions with expertise in the proposed action, the environment and their interactions, within a public policy setting. Frameworks, procedures and methods have been formulated and refined through practice, which over the years has resulted in the emergence of EIA as a recognised area of expertise.

EIA is a planning tool (Bisset, 1983; Clark, 1983a; Smith, 1993). It is a form of applied policy analysis or, more specifically, a form of resource management and environmental planning (Smith, 1993). Consequently, the formulation and application of environmental planning processes is one aspect of EIA. It therefore tends to be assumed that the EIA planning process should be anticipatory (prior to decision-making), systematic or orderly, and rational. The results and conclusions from the EIA planning process should also be documented, generally in the form of an EIA report or statement.

EIA is a generic planning process intended to contribute environmental information to decision-making. It provides a regulatory basis for forcing the explicit consideration of environmental concerns by public and private decision makers. As such, EIA forms a part of the institutional fabric through legislation, public policy or administrative procedures. Institutionalisation requires mechanisms to prepare, review and document the process, to coordinate inter-agency and private/public interactions, to adjudicate disputes, and to monitor and enforce compliance.

This dissertation therefore takes up this theme to investigate the effectiveness of EIA in the Skye Bridge project by considering the planning process and by using literature review as a means of analysis and research.


On July 3, 1988, European Union (EU) Directive 85/337/EEC (Directive) came into force and as a result, Environmental Impact Assessment (EIA) became a part of the EU’s environmental protection plans. The Directive requires that before consent is given for the development of certain “public and private projects that are likely to have significant effects on the environment,” an assessment of those effects must be compiled and considered by the developer and the authority in charge of approving the projects. By asking decision-making authorities to ponder likely environmental harm before the harm occurs, the Directive promotes a policy of preventing environmental harm. The comprehensive effectiveness of mandating pre-consent environmental impact assessment is undercut, however, because the Directive textually exempts national defense projects from its process. This study suggests that the European Union could and should include national defense projects in its EIA law. Part I of this Chapter will provide a summarized, chronological evolution of environmental policy in the European Union. Part II will give a description and history of EIA law, including that of the United States, so as to provide a comparative and contrasting point of reference. Part III will propose a way by which the European Union can more fully live up to the preventative approach that it has espoused for environmental protection by requiring environmental impact assessments for national defense projects. This Chapter concludes that the inclusion of national defense projects in the EU’s EIA law would broaden the scope and effectiveness of EIA law and environmental protection generally.


2.1.1. The Evolution of Environmental Policy in the EU

The 1957 Treaty of Rome (Treaty), which established the European Economic Community, focused on the creation of a common-trade zone. Accordingly, the Treaty failed to make any explicit statements regarding policies for environmental protection. In fact, until 1987, all EU environmental protection legislation was introduced via the general language of one or both of two Treaty articles that only implicitly recognized EU authority over environmental issues in Member States. Article 100 of the Treaty calls for the harmonization of laws affecting the common market in Member States. Article 235 authorizes measures that “prove necessary to attain one of the objectives of the Community” absent a specific delegation of authority by the Treaty. Although the Articles make no explicit reference to environmental issues, they have been used as authority for certain environmental regulations. For example, Article 100’s allusion to issues affecting the common market was used as the authority to develop legislation that regulated product and industry standards across the EU.

On the heels of the increased environmental awareness that swept the globe in the late 1960s, the European Community initiated the European Community Action Programmes on the Environment. The first of these five-year programmes, covering the years from 1973 to 1977, established principles and priorities for future environmental policies. The second five-year programme (1977-1981) established a list of eleven principles and actions to be taken in order to move closer to the goal of environmental protection. The list included the decision-making tool of environmental impact assessment. The first two Action Programmes had a common theme of protecting human health and the environment by controlling pollution problems. The third five-year Programme (1982-1986) solidly shifted the emphasis of environmental policy from one of pollution control to one of prevention and integration of environmental issues into other European Community policies. Not surprisingly, it was during the era of the Second and Third Action Programmes when Directive 85/337/EEC, an inherently preventative and integrating piece of legislation, was first proposed and then accepted. The Fourth Action Programme (1987-1992) continued the trend of prevention but proceeded further beyond its predecessors by stressing the importance of using stringent environmental standards in regulating the activities of Member States.

The evolution of environmental policy in the EU took a crucial step on July 1, 1987 when, in conjunction with the adoption of the Fourth Action Programme, the Community adopted the Single European Act. The Act, which consisted of amendments to the Treaty of Rome, contained articles that specifically affected environmental policy. Article 100A recognized the relationship between promotion of the common market and protection of the environment by authorizing the EU to adopt environmental legislation on the basis that such issues affect the marketplace. Article 130R lays out the objectives of future Community action relating to the environment by formalizing the principles of prevention, subsidiarity, “polluter pays,” and most importantly, integration. Article 130T reconfirms that individual Member States may enact environmental legislation that is more stringent than, but is compatible with, that of the Community.

The evolution of environmental policy in the EU from the 1957 Treaty of Rome through the various Action Programmes and to the Single European Act exemplifies the European Community’s commitment to a preventative approach to environmental protection. EIA law stands as a hallmark of that preventative approach. The EU’s commitment to the comprehensive prevention of environmental degradation is tested, however, by the limitations of its own EIA law.

2.1.2. Environmental Impact Assessment Law: A Description and Comparative Study

EIA: A General Overview

The “essential structure” of EIA law is common to all the nations that use it. Generally, EIA law is a process intended to minimize or prevent environmental damage that is usually associated with the construction and operation of certain development projects. Usually in the form of legislation, regulations and/or administrative processes, EIA law requires that certain development projects, while still in a planning stage, be analyzed in terms of their potential adverse impacts on the environment. Developers and/or governmental bodies, depending on the particularities of the EIA law in question, must conduct an analysis, or assessment, of the environmental effects of certain projects. The public authority responsible for granting or denying consent to the project is asked to take into account the results of the assessment. Again, depending on the particularities of the EIA law in question, provisions are made for public disclosure of the assessments, as well as for public involvement in the authority’s decision-making process.

The EIA process plays four important roles in protecting the environment. First, EIA law gives concrete, practical effect to environmental policy language that is often broad, general and otherwise absent of specific mandates. The U.S. Congress, in formulating its declarations of environmental policy, included EIA so as to “insure that the policies enunciated . . . are implemented.” EIA helps to insure proper implementation of policies by requiring the formulation and submission of written assessment reports, demonstrating an affirmative compliance with the environmental concerns outlined in policy language. A second role for EIA is to provide an analytical decision-making tool that “institutionalizes foresight.” It asks the decision-making authority to look beyond the moment and to incorporate into its decision the possible irreversible future effects a project may have on the environment. Third, to the extent that EIA affirmatively asks developers and decision-makers to account for the social and economic costs resulting from their actions, EIA forces the internalization of those costs and consequences that might otherwise go unaccounted for. The final role that EIA plays is as a public-awareness measure. Most EIA processes allow for public disclosure of development plans, as well as for public participation in the decision-making process. In the words of Professor Nicholas Robinson, “EIA facilitates democratic decision making and consensus building regarding new development.”

For EIA to incorporate environmental norms into decision making, it must address both environmental ethics and values and human ethics, values, perceptions, beliefs and attitudes. It is an objective procedure for identifying, measuring and predicting environmental attributes and changes brought about by existing or proposed actions, but is subjective in the interpretation, aggregation and management of those changes. Although driven by an environmental ethic, the links between EIA and ethical theory in general and environmental ethics in particular, have been tenuous at best. The tendency has been to assume that concepts and methods developed to predict and explain environmental change provide a sufficient knowledge base.

The practice of EIA involves usually implicit assumptions regarding the known environment, environmental impacts and environmental norms. It is, for example, generally assumed that: aspects of the environment and their interrelationships can be identified, described or measured and monitored; changes, with or without a proposed action, can be predicted to the extent that cause-effect relationships can be established; stakeholders’ values can be determined; measures of impact magnitude and importance can be combined; individual and cumulative environmental consequences can be interpreted, aggregated and managed; and issues of probability and uncertainty can be managed sufficiently to decide whether a proposed action should proceed and, if so, in what fashion. These knowledge assumptions are questionable, especially in the subjective realm of conflicting values, perceptions and human behaviour.

The primary focus of EIA was initially on the physical and natural environment and, to a lesser extent, on the socio-economic consequences of physical and natural environmental changes. The “environmental” aspect of EIA now generally embraces both natural (physical, biological and ecological) and human (human health and well-being, social, cultural, economic, built) environmental components and systems (Wiesner, 1995) and their interrelationships (Jain, Urban and Stacey, 1977; Estrin and Swaigen, 1978; CEARC, 1988b). There are many opinions regarding whether social impact assessment (SIA) or socio-economic impact assessment is, or should be, a sub-field of EIA (Morris and Therivel, 1995).

A broad definition of the environment in EIA facilitates a more comprehensive approach to environmental management, but it leaves open the possibility that certain elements of the environment will not receive pertinent attention. The question of how best to integrate social, ecological and economic data and perspectives remains unresolved. Human actions alter the environment (Jain, Urban and Stacey, 1977; Mitchell and Turkheim, 1977). In EIA, the term “impact” generally refers to the expected environmental consequences (Meredith, 1991) of a proposed action or set of actions (Rau and Wooten, 1980) and, less frequently, to the actual consequences of an existing activity. Distinctions are also often drawn between changes or effects (measures of magnitude) and impacts (measures of magnitude in combination with measures of importance); between alterations of existing environmental conditions and the creation of a new set of environmental conditions; and between changes in environmental conditions directly caused or indirectly induced by actions (Rau and Wooten, 1980).

Although the traditional focus of EIA has been capital projects, EIA requirements are increasingly applied to legislative proposals, policies, programs, technologies, regulations and operational procedures (Munn, 1979; Estrin and Swaigen, 1978; CEARC, 1988b). The expectation that the conceptual basis for EIA, largely developed at the project level, can be readily extended and applied to policies, programs and technologies is questionable. At the policy and program level the range of interrelated choices tends to multiply, impacts tend to be more generic and less amenable to precise prediction, and EIA overlaps with policy and program evaluation, planning, and environmental and resource management.

A distinction is sometimes drawn between project-level EIA and the strategic environmental assessment (SEA) of policies, plans and programs (Sadler, 1995). Risk assessment, technology assessment and environmental health impact assessment are viewed either as subfields within EIA (Sadler, 1995) or as distinct fields that partially overlap with EIA. In most cases EIA applies to the actions of both public and private proponents (Meredith, 1991; Mitchell and Turkheim, 1977). Alternative methods of achieving a proposed end, and of managing the impacts associated with a particular choice, are also usually considered in an EIA planning process.

A Comparative Study: The United States’ Experience with EIA

The significant history of EIA law began with the passage in the United States of the National Environmental Policy Act (NEPA) of 1969. NEPA was brought about as an instrument of policy and planning (Roberts, 1984a). Among NEPA’s eloquent but broad declarations of environmental policy is a brief section mandating EIA law for certain projects, thus providing a set of teeth with which to enforce the statute’s policies. Section 102(2) of the Act requires all federal agencies to prepare and include an environmental impact statement (EIS) with every recommendation or proposal for “major Federal actions significantly affecting the quality of the human environment.” The importance and weight of this requirement, as well as the problems inherent in defining its triggering terms, are demonstrated by the fact that the EIS clause has spawned nearly all case law brought under NEPA. Much of NEPA case law has dealt with the issue of whether projects involving national defense and national security are subject to compliance with Section 102(2), and with judicial review of such compliance. The environmental, public-awareness and military interests at stake in these cases are reflected by two questions.
First, will compliance and judicial review compromise the confidentiality of matters regarding national security? Second, will compliance and judicial review compromise the ability of the military to proceed with projects, which while detrimental to the environment, are crucial to the defense of the country? In answering these questions, it is important to note that NEPA calls for EISs from “all agencies of the Federal Government;” the statute does not provide a textual exception for national defense or security projects. Despite the clear language of the statute, however, U.S. courts have struggled with the issue and are currently responding in a manner that runs counter to the language and true intent of NEPA.

Most court decisions find that NEPA-based claims against projects involving national defense interests are justiciable. Early cases, however, were ambiguous in answering the questions of whether such projects must comply with NEPA requirements and whether EISs for such projects are subject to judicial review of their legal sufficiency. For instance, in the early case of McQueary v. Laird, the Tenth Circuit Court of Appeals dealt with a NEPA challenge to a military project by claiming lack of jurisdiction. In another early case, Citizens for Reid State Park v. Laird, the U.S. District Court for the Southern District of Maine found that NEPA applies to all federal agencies, including the Department of Defense. The Court in Citizens for Reid State Park refused to require an EIS for the Navy project in question, however, because it found that the plaintiff citizens group had failed to prove that the Navy plans constituted a major project significantly affecting the environment. Later court decisions often allowed national defense projects to proceed without an EIS or judicial review of an EIS, not because the courts believed that such projects did not have to comply with NEPA, but merely because the courts found that “major” federal action or “significant” effects on the environment, the requirements necessary to trigger NEPA, were absent.

In cases where major federal actions having significant effects on the environment were found to exist, compliance with NEPA was required despite national security interests. In Committee for Nuclear Responsibility, Inc. v. Schlesinger, for example, the Supreme Court refused to issue an injunction for violation of NEPA, but the Court’s rushed decision upheld a Court of Appeals finding that the Atomic Energy Commission did have a “judicially reviewable duty to comply with NEPA requirements in spite of national security considerations.” In Progressive Animal Welfare Society v. Department of Navy, the U.S. District Court for the Western District of Washington found that the Navy’s plan to use dolphins in a military project was a major federal action with significant environmental impact; accordingly, a NEPA EIS was required for the project. Finally, in Concerned About Trident v. Rumsfeld, the Court of Appeals for the District of Columbia found that the Navy’s plans for a submarine support facility required compliance with NEPA “to the fullest extent possible.” The court found that the Navy’s own internal environmental impact statement was insufficient to fulfill the requirements of NEPA. In making its decision, the court, citing judicial precedent as well as NEPA’s lack of a textual military exception, rejected the Navy’s argument that NEPA could “‘not possibly apply’ to strategic military decisions.” The court stated that the Navy’s plans were subject to NEPA requirements despite the project’s “serious national security implications.” In 1981, the Supreme Court again addressed the issue of the military’s compliance with NEPA’s EIA mandate. In Weinberger v. Catholic Action of Hawaii, the Court refused judicial review of the Department of Defense’s compliance with NEPA in a matter of national security. The dispute began with the Navy’s plan to construct a weapons and ammunition holding facility capable of storing nuclear weapons in Oahu, Hawaii.
The Navy’s internal assessment concluded that the facility would not have significant impact on the environment and as such, a NEPA EIS was unnecessary. The Navy’s assessment, however, failed to include an analysis of the facility’s impact on the environment should nuclear weapons actually be stored at the site. The district court that first reviewed the case found that the Navy had complied with NEPA to the fullest extent possible.

The Ninth Circuit Court of Appeals reversed the decision of the district court, arguing that an EIS was necessary and feasible since it would not necessarily release confidential matters. Important to the court was the fact that the Navy had already made the nuclear capabilities of the facility public knowledge. The court went on to suggest a “hypothetical” approach to writing EISs that would protect national security, environmental concerns, and public disclosure interests. Judge Merrill wrote that under this hypothetical approach, the Navy’s EIS must evaluate the hypothetical consequences of storing nuclear weapons at the site but it need not imply that a decision to actually store nuclear weapons had been made. The court argued that since the public was already aware of the capability of the facility to store nuclear weapons, a hypothetical EIS that discussed the impact of such storage, but not whether it would actually occur, would not reveal anything the public did not already know. Further, it would allow the Navy and the decision-making authority to consider the true and potential costs and consequences of proceeding with the project. Finally, the Court stated that a hypothetical EIS would assure the public that the decision-making process had fully accounted for the project’s externalities and consequences.

On review, the Supreme Court reversed the Court of Appeals’ creative approach to balancing the interests at stake. The Court, discrediting the Ninth Circuit’s notion of a hypothetical EIS, refused to mandate a NEPA EIS because it believed that doing so would reveal confidential matters of national security. In the majority opinion, Justice Rehnquist outlined the current status of the law regarding military compliance with EIA law in the United States. He wrote that public policies favoring the protection of confidential information regarding national security ultimately forbid judicial scrutiny of “whether or not the Navy has complied with NEPA ‘to the fullest extent possible.'” Justice Blackmun, who concurred with the judgment of the Court, was joined by Justice Brennan in stressing that although the Defense Department may disseminate EISs in a manner that protects confidential matters, it is still bound by the obligations of NEPA.

A Comparative Study: The European Union’s Experience with EIA

Sixteen years after NEPA took effect in the United States, and after five years of consideration in the European Union, Environmental Impact Assessment law was officially incorporated into the statutory framework of the EU on June 27, 1985. Directive 85/337 mandates EIA for certain projects such as those involving crude-oil refineries, thermal and nuclear power stations, motorway construction and dangerous waste landfills. It also requires EIA to be performed in conjunction with those other projects that Member States find have a significant effect on the environment due to the projects’ particular characteristics. The specific legal authority for the Directive is derived from Articles 100 and 235 of the EEC Treaty. The Directive also cites the first three Action Programmes for their policies of preventing environmental harms at the source rather than trying to counteract environmental degradation once it occurs.

The procedure called for by the Directive identifies, describes and analyzes the effects a development project may have on humans, fauna, flora, soil, water, air, climate, landscape, welfare and cultural heritage. The EIA must contain a description of the project in question, an outline of the main alternatives to the project, the reason for choosing the proposed plans, a description of the significant effects the project will have on the environment, and a description of the measures that must be taken to avoid, reduce or compensate for those effects. Because developers have the best knowledge of the nature of their proposal, they have the responsibility of gathering the information and compiling the EIA. The decision-making authorities who have the power of giving consent to the developer’s plans have the responsibility of setting standards for approval or disapproval and ensuring that the developers’ EIA complies with the law. Further, they are obligated, by statute, to incorporate the EIA into their decision-making process. Also, Article 10 of the Directive states that the authorities must respect existing regulations and practices regarding industrial and commercial secrecy. Finally, the Directive envisions an active role for the public. In addition to supplying the decision-makers with information regarding the impact a project will have on the local environment, the public may have an opportunity to suggest alternatives and to pursue judicial action in order to request a review of consent. Further particularities of public participation and involvement are to be determined by the individual Member States.

The “National Defense Project” Exception to Directive 85/337/EEC

The effectiveness of the Directive in preventing environmental harms is undercut by the exception it gives to national defense projects. It is reasonable to infer that this exception reflects two assumptions. The first assumption, explicitly mentioned in the Directive, is that national legislative processes will ensure that defense projects comply with the Directive. No rationale is provided for this assumption except for the implied reasoning that national legislators share the concerns of the Directive and are able to guide national legislation accordingly. The second assumption appears to be that the confidentiality of Member States’ national security matters would be compromised.

Feasibility Study of Solar Energy in India


Solar energy in its raw form may be pollution-free, but manufacturing the devices that extract energy from light and heat requires metals and other materials, which in turn require mines and smelters, thereby causing pollution. Perhaps the most exciting thing about solar energy today is not only that costs continue to drop and efficiencies continue to rise, but that clean solar energy is arriving at last. New technologies allow new methods of manufacturing which pollute much less and often run on solar energy. Solar heating and solar electric systems can now generate, over their service life, thermal and electric energy of up to 100 times the energy input during their manufacture. This ratio, the energy a system will produce in its lifetime compared to the energy required to manufacture and maintain it, has doubled in the last 20 years for most solar technologies. The ratio of energy out to energy in for solar systems has become so favourable that the economic and ecological viability of solar power is now beyond question. One reason solar energy still cannot compete financially with conventional energy is that the value of future energy output from a photovoltaic system is discounted when calculating, for example, an internal rate of return. Economic models that put a time-value on money, making long-term receipts worth less than near-term receipts, cannot necessarily be applied to energy. In fact, end-use pricing will significantly increase customer penetration, and this will have a correspondingly positive impact on the economics of solar water heating as a stand-alone profit-making business. The business views solar energy as a potential key resource to help India’s energy portfolio become greener, more diversified and more secure, while also creating jobs in the state. Solar energy can play an important role in allowing India to reach its Renewable Portfolio Standard (“RPS”) goals.
As stated by the Commission, “the development of additional renewable energy resources is a long-standing energy policy objective of the State.” The Indian solar energy industry can easily rise to the challenge of bringing solar energy to the forefront to help India address the twin challenges of energy security and of combating global warming and climate change. India is particularly well positioned to reap the advantages of solar power, which is “clean, free, forever and everywhere.”

Chapter 1: Introduction

India is both densely populated and has high solar insolation, an ideal combination for solar power. Much of the country lacks an electrical grid, so one of the first applications of solar power has been water pumping (to begin replacing India’s four to five million diesel-powered water pumps, each consuming about 3.5 kilowatts) and off-grid lighting. Some large projects have been proposed, and a 35,000 km² area of the Thar Desert has been set aside for solar power projects, sufficient to generate 700 to 2,100 gigawatts. In July 2009, India unveiled a $19 billion plan to produce 20 GW of solar power by 2020. Under the plan, solar-powered equipment and applications would be mandatory in all government buildings, including hospitals and hotels. On 18 November 2009, it was reported that India was ready to launch its Solar Mission under the National Action Plan on Climate Change, with plans to generate 1,000 MW of power by 2013. Of the total energy produced in India, just 0.5% is solar. But with the Government of India’s (GOI) target of increasing renewable energy to 10% of total power generation by 2012, solar panels are set to become a more regular feature in communities across India. The GOI has been pushing solar power to households in towns and cities using incentives such as discounts on energy bills if solar is installed. For the hundreds of thousands of people who live in rural areas of the country, however, solar energy is more difficult to access. It may seem surprising that solar energy applied to heating domestic hot water, an idea that has been around for a long time, offers what utilities and their residential customers want most in a new product or service. This document not only explains how and why; it shows how to get into the business and succeed on a commercial scale.
Solar is also easier to sell using end-use pricing because it eliminates customer issues of high first cost and perceived risk that have been major weaknesses in how solar has been marketed in the past.
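As a rough check on the market size implied by the pump figures quoted above (four to five million diesel pumps at about 3.5 kW each), the displaceable pumping load can be computed directly:

```python
# Back-of-envelope: the diesel water pumps that solar pumping could begin
# to replace, using only the ranges quoted in the text.

PUMP_KW = 3.5                 # per-pump consumption from the text
low = 4_000_000 * PUMP_KW     # kW, at four million pumps
high = 5_000_000 * PUMP_KW    # kW, at five million pumps

# 14-17.5 GW of load: a very large addressable market for solar pumping.
print(f"displaceable pumping load: {low/1e6:.1f}-{high/1e6:.1f} GW")
```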

India’s Emerging Solar Industry:

The global solar energy industry is in the early phases of what may be a 30 to 50-year expansion. By the end of 2007, the cumulative installed capacity of solar photovoltaic (PV) systems around the world had reached more than 9,200 MW, up from 1,200 MW at the end of 2000. Installations of PV cells and modules around the world have grown at an average annual rate of more than 35% since 1998 (Solar Generation V Report, EPIA, September 2008). While contributing only a fraction of the world’s energy needs today, by 2060 solar may be the largest single contributor to global energy production. The European Photovoltaic Industry Association (EPIA) estimates that by 2030, PV systems could be generating approximately 2,600 TWh of electricity around the world, enough to satisfy the electricity needs of almost 14% of the world’s population. India has the opportunity to play a major role in this global energy transformation. With significant technical and production resources, India can be a major supplier of PV cells and modules to meet growing world demand. At the current pace of growth, India’s solar industry could emerge as the fourth-largest generator of solar energy in the world after Germany, China, and Japan. As an increasingly significant energy consumer, India can give solar power a significant role in its domestic energy supply. With over 50,000 villages in India without electricity, solar power has enormous potential to meet rural electrical needs, improving the lives of millions of Indians and meeting critical agricultural, educational and industrial needs.
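The growth figures quoted above can be cross-checked with a quick compound-growth calculation using the two capacity data points from the text:

```python
# Check the reported growth of global PV capacity: 1,200 MW (end-2000)
# to 9,200 MW (end-2007), figures taken from the text above.

def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

growth = cagr(1_200, 9_200, 7)
print(f"implied average annual growth: {growth:.1%}")
```

The implied rate is roughly 34% per year, consistent with the "more than 35% since 1998" average the report cites (the 2000-2007 window is slightly slower than the full 1998-2007 period).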

Current Situation in India:

India is already a major contributor to the global technology market. According to an ISA/Frost & Sullivan report, semiconductor and embedded design revenues are expected to grow from $3.2 billion in 2005 to $43 billion by 2015. The Indian semiconductor market is expected to grow from $2.82 billion in 2005 to $36.3 billion in 2015, and electronics manufacturing is estimated to reach $155 billion in 2015, creating a $15.5 billion semiconductor market opportunity. With recent government and industry actions, India can also be expected to join the leaders in the global photovoltaic market, pooling its scientific, technical and managerial talent, along with financial resources, to develop solar energy as a source of abundant energy to power its economy and transform the lives of its people. To accomplish these goals, the Indian government has instituted programs on both the demand and supply sides of the solar industry. On the supply side, last year the Indian cabinet approved incentives to attract foreign investment to the semiconductor sector, including manufacturers of semiconductors, displays and solar technologies. The government announced it will bear 20 percent of capital expenditures in the first 10 years if a unit is located within a Special Economic Zone (SEZ), including the major economic zone in Hyderabad called “Fab City”. The minimum investment was set at 25 billion rupees (~$500 million) for semiconductor manufacturers and 10 billion rupees for other micro- and nanotechnology makers. With these recent announcements, the solar industry has been the chief beneficiary of this incentive-based economic policy. In August, as a follow-up to its semiconductor policy (the Special Incentive Package Scheme, or SIPS), the government of India received 12 proposals amounting to a total investment of Rs. 92,915.38 crore. Ten of these proposals were for solar PV, from: KSurya (Rs. 3,211 crore), Lanco Solar (Rs. 12,938 crore), PV Technologies India (Rs. 6,000 crore), Phoenix Solar India (Rs. 1,200 crore), Reliance Industries (Rs. 11,631 crore), Signet Solar (Rs. 9,672 crore), Solar Semiconductor (Rs. 11,821 crore), TF Solar Power (Rs. 2,348 crore), Tata BP Solar India (Rs. 1,692.80 crore), and Titan Energy System (Rs. 5,880.58 crore).

In late September, there were three further announcements: Vavasi Telegence plans to invest Rs. 39,000 crore in a solar PV and polysilicon unit; EPV Solar will invest Rs. 4,000 crore in a solar PV unit; and Lanco Solar will invest Rs. 12,938 crore in a solar PV and polysilicon unit. Approximately 130 MW of shipments are projected for 2009, compared with approximately 30 MW in 2008. On the demand side, India has a long-term goal of generating 10% of the country’s electricity from renewable sources by 2032. In early 2008, India instituted a feed-in tariff for solar PV and/or thermal electricity generation (~$0.30/kWh for up to 75% of solar PV output) at the national level as a supplement to more modest local incentive programs. The feed-in tariff is subject to annual degression and is slated to be in force for ten years. Regional caps will limit total installations in a given year, but the tariff should drive solid percentage growth in 2008, with accelerating growth through 2010. The new incentive scheme for solar power plants announced in January 2008 could further enable rapid market growth in the coming years. For power producers, a generation-based subsidy of up to Rs. 12/kWh is available from the Ministry of New and Renewable Energy, in addition to the price paid by a state utility, for 10 years. With state utilities mandated to buy energy from solar power plants, several state electricity regulatory boards are setting up preferential tariff structures. Among the states that already have proposals in place are Rajasthan (Rs. 15.6 per kWh proposed), West Bengal (Rs. 12.5 per kWh proposed), and Punjab (Rs. 8.93 per kWh), with several other states exploring such a possibility. Aside from the feed-in tariffs, the Indian Renewable Energy Development Agency (IREDA) provides a revolving fund to financing and leasing companies offering affordable credit for the purchase of solar PV systems in India. Additional incentives include 80% accelerated depreciation, lower import duties on raw materials, and excise duty exemption on certain devices.
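To illustrate the scale of the generation-based subsidy described above, the sketch below computes the yearly output and revenue of a hypothetical 1 MW plant. The plant size, capacity factor and state-utility tariff are illustrative assumptions, not figures from this report; only the Rs. 12/kWh subsidy ceiling comes from the text:

```python
# Sketch: yearly revenue of a hypothetical 1 MW solar plant under a
# generation-based subsidy of Rs. 12/kWh (the MNRE ceiling cited in the
# text) plus an assumed Rs. 3/kWh state utility tariff.

HOURS_PER_YEAR = 8_760

def annual_energy_kwh(capacity_kw, capacity_factor):
    """Energy delivered in one year at a given capacity factor."""
    return capacity_kw * HOURS_PER_YEAR * capacity_factor

def annual_revenue_rs(capacity_kw, capacity_factor, subsidy_rs, utility_rs):
    """Revenue = energy delivered x (subsidy + utility tariff)."""
    return annual_energy_kwh(capacity_kw, capacity_factor) * (subsidy_rs + utility_rs)

energy = annual_energy_kwh(1_000, 0.19)                 # ~1.66 million kWh/year
revenue = annual_revenue_rs(1_000, 0.19, 12.0, 3.0)     # ~Rs. 2.5 crore/year
print(f"energy: {energy:,.0f} kWh, revenue: Rs. {revenue:,.0f}")
```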

The role of SEMI PV Group:

SEMI is the global industry association serving the manufacturing supply chains for the microelectronics, display and photovoltaic industries. Since its inception in 1970, SEMI has been helping members explore and develop new markets for their products and services. SEMI has helped facilitate the creation of new manufacturing regions by providing advice and counsel, facilitating collaborations, organizing trade missions and trade events, and undertaking other activities necessary to integrate market forces, governmental economic policy, education and human capital programs, and financial support. As the semiconductor industry expanded globally and new manufacturing centers were established throughout the world, SEMI successively opened offices in Japan, Europe, Korea, Taiwan, Singapore and China to support introduction to these vital new market regions. In each of these regions, SEMI has organized SEMICON expositions to bring buyers, suppliers and other industry constituents together and facilitate industry growth.

The SEMI PV Group was established in January 2008 to enhance support to members serving the crystalline and thin-film photovoltaic (PV) supply chains. Members of the PV Group provide the essential equipment, materials and services necessary to produce clean, renewable energy from photovoltaic technologies. The PV Group is committed to lowering costs for PV energy and to expanding the growth and profitability of SEMI members serving this essential industry.

With the input and guidance of the SEMI Board of Directors and the Global and Regional PV Advisory Committees in North America, Asia and Europe, the PV Group has prepared a White Paper, “The Perfect Industry: The Race to Excellence in PV Manufacturing,” which describes the ideal characteristics of the high-growth PV industry and both current and potential SEMI policies, programs and initiatives designed to achieve them. By defining and communicating ideal or perfect industry end-states, equipment and materials suppliers, along with cell and module manufacturers, can more effectively prioritize industry-wide initiatives. The White Paper outlines four attributes of the perfect industry: long-term growth; sustained profitability; environmental excellence; and global scope. Each of these attributes is examined to explain its role in the industry’s formation and to describe the actions required to achieve the greatest impact. The SEMI PV Group believes that helping to grow and facilitate the global market for PV is essential to its mission and that India will play a vital role. Following a path that proved successful in the semiconductor and display industries, the SEMI PV Group believes that long-term industry growth will require open markets and a global supply chain supported by global standards. A sustainable industry committed to long-term, profitable growth will also be one with harmonized environmental, health and safety standards and guidelines that yield high-quality, low-cost products from any manufacturing location in the world. Unlike semiconductors (and virtually any other industrial segment), the importance of the PV industry goes beyond the economic well-being of its participants: the production of clean, renewable energy is of vital importance to every human being on the planet.

Renewable Energy sector in India:

India has the world’s largest programme for renewable energy. The government created the Department of Non-conventional Energy Sources (DNES) in 1982, and in 1992 a full-fledged Ministry of Non-conventional Energy Sources was established under the overall charge of the Prime Minister. India is blessed with an abundance of sunlight, water and biomass. Vigorous efforts during the past two decades are now bearing fruit, as people in all walks of life are more aware of the benefits of renewable energy, especially decentralized energy where it is required, in villages and in urban or semi-urban centers.

The range of its activities covers:

  1. Production of biogas units, solar thermal devices, solar photovoltaics, cookstoves, wind energy and small hydropower units;
  2. Promotion of renewable energy technologies;
  3. Creation of an environment conducive to their commercialization;
  4. Renewable energy resource assessment;
  5. Research and development;
  6. Demonstration;
  7. Extension.

Solar Energy:

Solar water heaters have proved the most popular so far, and solar photovoltaics for decentralized power supply are fast becoming popular in rural and remote areas. More than 700,000 PV systems generating 44 MW have been installed all over India. Under the water-pumping programme, more than 3,000 systems have been installed so far, and the market for solar lighting and solar pumping is far from saturated. Solar drying is one area offering very good prospects in food, agricultural and chemical product drying applications.

SPV Systems:

More than 700,000 PV systems with a total capacity of over 44 MW are installed for different applications all over India. The main market segments are home lighting, street lighting, solar lanterns and water pumping for irrigation. Over 17 grid-interactive solar photovoltaic plants generating more than 1,400 kW are in operation in 8 states of India. As the demand for power grows exponentially while conventional fuel-based generating capacity grows arithmetically, SPV-based power generation can help meet the expected shortfall. Especially in rural, far-flung areas where the likelihood of conventional electric lines is remote, SPV power generation is the best alternative.
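A back-of-envelope check on the installation figures above shows how small the average installed system is, which is consistent with the lantern and home-lighting market segments just described:

```python
# Average system size implied by the figures in the text:
# 700,000 installed PV systems totalling 44 MW.

total_capacity_w = 44e6
n_systems = 700_000
avg_w = total_capacity_w / n_systems

# ~63 W per system: solar-lantern / single-home-lighting scale,
# not grid-scale generation.
print(f"average system size: {avg_w:.0f} W")
```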

Wind Power:

India now ranks as a “wind superpower”, with an installed wind power capacity of 1,167 MW; about 5 billion units of electricity have been fed to the national grid so far. A wind resource assessment programme covering wind monitoring and wind mapping at 800 stations in 24 states is in progress, with 193 wind monitoring stations in operation. Altogether, 13 states of India have a net potential of about 45,000 MW.

Solar Cookers:

The government has long promoted box-type solar cookers with subsidies, in the hope of saving fuel and meeting the needs of the rural and urban populace. There are community cookers and large parabolic-reflector-based systems in operation in some places, but solar cookers as a whole have not found the widespread acceptance and popularity hoped for. A lot of educating and promotion will be needed before solar cookers become an indispensable part of each household (at least in rural and semi-urban areas). Some observers and users of box cookers feel that solar cookers using parabolic reflectors or multiple mirrors, which cook food faster, would be more welcome than the single-reflector box design.

Solar Water Heaters:

A conservative estimate puts the solar water heating systems installed in the country at over 475,000 sq. m of conventional flat-plate collectors. Noticeable beneficiaries of the installation programme so far have been cooperative dairies, guest houses, hotels, charitable institutions, chemical and process units, hostels, hospitals, textile mills, process houses and individuals. In fact, solar water heaters are the most popular of all renewable energy devices in India.

Solar Heating and Cooling:

Most solar water heater research currently focuses on reducing costs rather than increasing efficiency. Current work involves replacing standard parts with less expensive polymers; examples include polymer absorbers with selective coatings, UV-resistant polymer glazing, and polymer heat exchangers. The main collector types are glazed and unglazed flat-plate types and evacuated-tube types; about 100 million units are deployed worldwide, with evacuated tubes making up about 25% of the market. Asian growth is predicted to continue.

Forms of Renewable Energy: Solar

Each day more energy reaches the earth from the sun than the globe would consume in 27 years. Solar energy is renewable as long as the sun keeps fusing the massive amount of hydrogen in its core. Even with the sun consuming about 600 million tonnes of hydrogen every second, it is expected to keep burning for another 4.5 billion years. Solar energy is harnessed through processes called solar heating, solar water heating, photovoltaic energy and solar thermal electric power.

Solar Heating – An example of solar heating is the heat that gets trapped inside a closed car on a sunny day. Today, more than 200,000 houses in the United States have been designed with features that take advantage of the sun’s energy. These homes use passive solar designs, which do not normally require pumps, fans or other mechanical equipment to store and distribute the sun’s energy, in contrast to active solar designs, which rely on mechanical components. A passive solar home or building naturally collects the sun’s heat through large south-facing windows, which are just one aspect of passive design. Once the heat is inside, it must be captured and absorbed. A “sun spot” on the floor of a house on a cold day holds the sun’s heat and is perhaps the simplest form of absorber. In solar buildings, ‘sunspaces’ built onto the southern side of the structure act as large absorbers. The floors of these sunspaces are usually made of tiles or bricks that absorb heat during the day and release it slowly. Passive solar homes need to be designed to let the heat in during cold months and keep the sun out in hot months. Planting deciduous trees or bushes in front of the south-facing windows accomplishes this: they lose their leaves in winter and let most of the sun in, while in summer their leaves block much of the sunshine and heat.

Solar Water Heating – The sun can also heat water for bathing and laundry. Most solar water-heating systems have two main parts: the solar collector and the storage tank. The collector heats the water, which then flows to the storage tank. The storage tank can be just a modified water heater, but ideally it should be a large, well-insulated tank. The water stays in the storage tank until it is needed, say for a shower or to run the dishwasher. Like solar-designed buildings, solar water-heating systems can be either active or passive. While a solar water-heating system can work well, it cannot heat water when the sun is not shining, so homes with these systems also have conventional backup systems that use fossil fuels.

Photovoltaic Energy – The sun’s energy can also be converted directly into electricity using photovoltaic (PV) cells, sometimes called ‘solar cells’. PV cells make electricity without noise or pollution. They are used in calculators and watches, and they also provide power to satellites, electric lights and small electrical appliances such as radios. PV cells are now even being used to provide electricity for homes, villages and businesses. Usually, PV systems are used for water pumping, highway lighting, weather stations and other electrical systems located away from power lines. As PV systems can be expensive, they are not typically used in areas that have grid electricity nearby; for those who need electricity in remote places, however, the systems are economical. PV power is “intermittent”: the system cannot make electricity when the sun is not shining, so these systems need batteries to store the electricity.
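The battery requirement just mentioned can be sketched with a simple sizing rule. The load, days of autonomy and depth-of-discharge figures below are illustrative assumptions, not values from this report:

```python
# Sketch: sizing battery storage for an off-grid PV system so loads keep
# running when the sun is not shining. Illustrative numbers only.

def battery_capacity_wh(daily_load_wh, autonomy_days, depth_of_discharge):
    """Nameplate battery capacity needed to ride through sunless days,
    oversized so the battery is never drained below its usable fraction."""
    return daily_load_wh * autonomy_days / depth_of_discharge

capacity = battery_capacity_wh(
    daily_load_wh=600,        # lights, radio, phone charging
    autonomy_days=2,          # two cloudy days of reserve
    depth_of_discharge=0.5,   # typical usable fraction for lead-acid
)
print(f"required battery capacity: {capacity:.0f} Wh")
```

Note how the 50% depth-of-discharge limit doubles the nameplate capacity relative to the energy actually consumed, one reason storage dominates the cost of small off-grid systems.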

Concentrating Solar Power – Solar thermal systems can also convert sunlight into electricity by using a set of mirrors to concentrate the sun’s rays onto a receiver. The heat is used to boil water to make steam, and the steam rotates a turbine attached to a generator that produces electricity. Solar thermal power, however, is intermittent; to work around this, natural gas can be used to heat the water when sunlight is insufficient. Solar thermal systems should ideally be located in areas that receive plenty of sunshine throughout the year.

Global Warming and Climate Change:

The past few decades have seen a host of treaties, conventions, and protocols in the field of environmental protection. Scientists had predicted that human activities would interfere with the way the sun interacts with the earth, resulting in global warming and climate change. That prediction has been borne out, and climate change is disrupting global environmental stability. Land degradation, air and water pollution, sea-level rise, and loss of biodiversity are only a few examples of the now familiar issue of environmental degradation due to climate change. One of the most important characteristics of this environmental degradation is that it affects all mankind on a global scale, without regard to any particular country, race, or region. This makes the whole world a stakeholder and raises issues of how resources can be allocated and responsibilities shared to combat environmental degradation. One of the main human activities releasing huge amounts of carbon dioxide into the atmosphere is the conventional use of fossil fuels to produce energy. Scientists and environmentalists have studied, over the past few years, the impact of conventional energy systems on the global environment. The enhanced greenhouse effect from the use of fossil fuels has resulted in the phenomenon of acid rain and accentuated the problems of ozone depletion and global warming, resulting in climate change. Due to the increased use of technology and mechanization in human activities, delicate ecological and environmental balances are being disturbed. For instance, carbon dioxide is being pumped into the atmosphere faster than the oceans and flora can remove it, and the rate of extinction of animal and plant species far exceeds the rate of their evolution. Global warming and climate change are considered serious global threats because they have very damaging and disastrous consequences. These take the form of:

  • Increased frequency and intensity of storms, hurricanes, floods and droughts;
  • Permanent flooding of vast areas of heavily populated lands and the creation of hundreds of millions of environmental refugees due to the melting glaciers and polar ice that causes rising sea levels;
  • Increased frequency of forest fires;
  • Increased sea temperatures causing coral bleaching and the destruction of coral reefs around the world;
  • Eradication of entire ecosystems

The Intergovernmental Panel on Climate Change (IPCC) was set up by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to assess the scientific, technical, and socioeconomic information needed to understand the risk of human-induced climate change. According to the IPCC assessments, if the present rate of emissions continues, the global mean temperature will increase by 1 °C to 3.5 °C over 1990 levels by the year 2100, with a best estimate of 2 °C. Moreover, the impacts of global warming and climate change could become a source of increased tension between nations and regions. For instance, in many countries, a severe disruption of the world’s food supplies through floods, droughts, crop failures and diseases brought about by climate change would trigger famine, wars and civil disorder. Historically, it is the developed world that is responsible for most of the emissions into the atmosphere, yet it is the underdeveloped parts of the world that will suffer the worst effects. For example, as sea levels rise, a country like Bangladesh will suffer much more from the loss of valuable arable and populated land than North American or European countries, even though its emissions are far lower than theirs.

Chapter 2: Literature Review


The solar energy industry is at an inflection point, with developments in technology driving down costs as fossil fuel prices head northwards. In this changing environment, those who proactively seize opportunities through innovative business models across the solar energy value chain will emerge as winners. The threat to energy security is greater than ever perceived before. With the sub-prime crisis hitting the US and global economies and the dollar depreciating against all major currencies, crude oil prices have crossed the US$140/barrel mark on sustained demand and supply concerns. Not just oil, but other important fuels like coal and gas have also charted the same path. Since 2002, the increase in fuel prices has been remarkable: oil and coal have jumped by more than 500% and gas by more than 300%. Classic demand-supply theory may not provide enough justification for this sudden surge, and it is becoming increasingly difficult to forecast fuel prices in the long term (the EIA forecasts US$70/Bbl for oil and US$6.6/MMBTU for gas by 2030 in its 2008 Annual Energy Outlook report). While fossil fuel prices are skyrocketing, alternative energy sources like solar and wind look more attractive by the day. The solar industry stands at the crossroads of technological developments and operational improvements that are bringing down its costs, and of market forces that shape its demand potential.

Solar energy economics:

Solar PV (photovoltaic) and CSP (concentrated solar power) electricity generation currently costs around 15-30 US cents per kWh (depending on geographical location), against grid prices of 5-20 US cents across the world for different users. So far, governments across the world have supported solar power with subsidies and feed-in tariff incentives, which will be phased out gradually. The delivered cost per unit is a function of three important parameters: solar system capex and its financing cost; the solar insolation received by the system; and PV cell efficiency. Solar module cost forms about 60% of the total solar system capex. Solar module costs have dropped significantly, from about US$25/W in the early 1980s to US$3.5/W now, a year-on-year drop of 7%. Constraints in silicon supply have restricted this trend to some extent over the last 2-3 years. If module costs drop below US$2/W, ‘grid parity’ could be achieved. Silicon production capacity is expected to double in the next 2-3 years, as more than US$6 billion will be invested by major firms through 2010. This could lead to a potentially oversupplied market, putting pressure on silicon prices; economies of scale will also lead to cost savings. The Cambridge Energy Research Institute reports that a doubling of capacity would reduce production costs by 20%. Cell efficiency is expected to improve from about 15% to 20%, which will further reduce the capex per watt. Thin-film and CSP technologies are reducing silicon usage in solar systems. With the combined effect of process improvements and technology developments, the cost of solar modules could reach the threshold of US$2/W in the next four to five years, ahead of the 2015 target for grid-parity solar power set by India. A leading solar company in India is confident of bringing total solar capex below US$2.5/W.
If we consider the cost of carbon emissions from fossil fuels, grid power will become more costly (about 3 US cents/unit additional cost for coal based generation). Sustained high fuel prices, accompanied by carbon emission costs, will further accelerate grid-parity time for solar power. While solar power is approaching grid parity, the solar energy industry is witnessing a changing competitive scenario. Structural changes in the industry are visible, along with shifts across the value chain by companies to capture the future value.
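The grid-parity timeline can be sketched from the figures quoted above (US$3.5/W module cost falling about 7% a year, with US$2/W as the threshold). This sketch considers the historical price trend alone; the process and technology gains discussed above are what would compress the timeline toward the four-to-five-year estimate:

```python
# Sketch: years until module prices cross the 'grid parity' threshold,
# extrapolating only the 7%/yr historical decline cited in the text.

def years_to_threshold(price, threshold, annual_decline):
    """Count the years of compound decline needed to fall below threshold."""
    years = 0
    while price > threshold:
        price *= (1 - annual_decline)
        years += 1
    return years

years = years_to_threshold(price=3.5, threshold=2.0, annual_decline=0.07)
print(f"years to US$2/W at a 7%/yr decline: {years}")
```

The trend alone gives about 8 years; reaching US$2/W in 4-5 years, as the text projects, therefore depends on the additional efficiency and thin-film gains it describes.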

Solar industry’s changing dynamics:

The solar PV industry value chain consists of several segments, which fall into two clear groupings:

  • Silicon to module manufacturing group; and
  • Product and system integration.

Silicon manufacturing (solar grade) is close to a US$1 billion industry, while the installation industry is about US$6 billion. The silicon-to-module segment is capital intensive and technology driven, and it captures most of the value in the solar value chain, as only a handful of large companies are present in this segment. Fragmentation increases further along the value chain: silicon and wafer manufacturing companies enjoy about 40% profit margins, while installers typically work with about 10-15% margins. Recent activities in the solar PV value chain indicate major shifts in the industry structure:

  • Companies aiming to create an integrated presence across the value chain: SunPower, a US-based solar cell and module manufacturer, recently acquired PowerLight, a system integrator present in the US and Europe.
  • Companies developing alternate technology options: Applied Materials, a semiconductor company, acquired Applied Films, a producer of thin film deposition equipment.
  • Module manufacturers tying up the silicon end: Moser Baer, an Indian solar company, recently completed a series of strategic tie-ups in the silicon-cell segment to secure silicon supply and technology access.

On the application side, as more and more off-grid solutions emerge, customer interface management will become crucial. Concentrated solar power (CSP) also holds promise, with the ability to generate electricity on a large scale (10 to 80 MW).

Optimal Extraction Paths of Coal

Chapter 1: Introduction

1.1. Motivation

According to the World Energy Outlook (WEO 2007)[1], global carbon dioxide (CO2) emissions will increase by 1.8% per year from 2005 to 2030, and by 2% per year for the period 2030-2050.[2] From 12,446 Mt of CO2 equivalent in 2002, emissions will reach 15,833 Mt in 2030 for OECD countries, an average increase of 1.1% per year. CO2 is the most important anthropogenic greenhouse gas (GHG) contributing to global warming. The increased atmospheric concentration of CO2 since the pre-industrial period results primarily from fossil fuel use, with land-use change providing another significant but smaller contribution.[3] Continued greenhouse gas emissions at or above current rates would cause further warming and induce many changes in the global climate system during the 21st century.[4]

According to the Nuclear Energy Agency and the International Energy Agency, the power generation sector will contribute almost half of the increase in global emissions between 2002 and 2030 and will remain the single biggest CO2-emitting sector in 2030. In OECD countries, its emissions will rise from 4,793 Mt of CO2 in 2002 to 6,191 Mt in 2030, but its share will remain constant.[5]
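The OECD power-sector figures above imply a modest annual growth rate, which the following consistency check confirms:

```python
# Consistency check on the OECD power-sector projection cited above:
# 4,793 Mt CO2 in 2002 growing to 6,191 Mt in 2030 (28 years).

implied_rate = (6_191 / 4_793) ** (1 / 28) - 1
print(f"implied annual growth: {implied_rate:.2%}")
```

The implied rate is just under 1% per year, consistent with the sector's share of total OECD emissions remaining roughly constant while overall emissions grow at about 1.1% per year.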

Today, power generation accounts for 65% of industrial CO2 emissions in OECD countries and is likely to become instrumental in countries’ strategies to reduce greenhouse gas emissions.[6] One such instrument is the Kyoto Protocol.

Under the United Nations Framework Convention on Climate Change (UNFCCC), more than 180 countries have recognised the need to stabilise the concentration of GHGs in the atmosphere, which are causing climate change. The Kyoto Protocol to the UNFCCC was adopted at the third session of the Conference of the Parties in 1997 in Kyoto, Japan. It entered into force on 16 February 2005 and has been ratified by 184 Parties to the Convention to date.[7]

The major feature of the Kyoto Protocol is that it sets binding targets for 37 industrialized countries (including Germany) and the European Community for reducing GHG emissions. These amount to an average reduction of five percent against 1990 levels over the five-year period 2008-2012.[8]

The Kyoto Protocol includes specific “flexible mechanisms” such as Emissions Trading, the Clean Development Mechanism (CDM) and Joint Implementation (JI) for the countries to be able to reach their mandatory emission limits.

Emissions trading, as set out in Article 17 of the Kyoto Protocol, allows countries that have emission units to spare – emissions permitted to them but not “used” – to sell this excess capacity to countries that exceed their targets. Thus, a new commodity was created in the form of emission reduction or removal assets. Since CO2 is the principal greenhouse gas, people speak simply of trading in carbon. Carbon is now tracked and traded like any other commodity; this is known as the “carbon market”.[9] In Europe the emissions trading system is the European Union Emissions Trading Scheme (EU ETS), currently the largest such system.

The CDM, defined in Article 12 of the Protocol, allows a country with an emission reduction or emission limitation commitment under the Kyoto Protocol (Annex B Party) to implement emission reduction projects in developing countries. Such projects can earn saleable certified emission reduction credits, each equivalent to one ton of CO2, which can be counted towards meeting the Kyoto targets.

A CDM project activity might involve, for example, a rural electrification project using solar panels or the installation of more energy-efficient boilers.[10]

The JI mechanism, defined in Article 6 of the Kyoto Protocol, allows a country with an emission reduction or limitation commitment under the Kyoto Protocol (Annex B Party) to earn emission reduction units from an emission-reduction or emission removal project in another Annex B Party, each equivalent to one ton of CO2, which can be counted towards meeting its Kyoto target.

JI offers Parties a flexible and cost-efficient means of fulfilling a part of their Kyoto commitments, while the host Party benefits from foreign investment and technology transfer.[11]

Germany is one of the world’s largest energy consumers and ranks third in total CO2 emissions within the G-7, after the USA and Japan.[12] Annually, Germany produces around 850 million tonnes of CO2-equivalent gases, approximately 2.8% of the world’s CO2 emissions.[13] Germany ratified the Kyoto Protocol on 31 May 2002 and, since its entry into force, has played an active role in the European and world carbon markets.

Electricity production in Germany is largely based on burning exhaustible resources, causing high CO2 emissions. That makes the issue of CO2 trading crucial for German power plants and the economy as a whole.

In 2008, the total gross electricity supplied in Germany was around 639.1 TWh[14], slightly higher than in the previous year. Over recent years there has been a tendency for electricity supply to increase (see Table 1).

The electricity supply in Germany is based on several technologies and fuels. The distribution of net electricity supply in Germany in recent years is shown in Table 1. Electricity production in 2008, as in previous years, was based mainly on coal-fired (hard coal and lignite) steam turbine (43.6%) and nuclear (22.3%) power plants.[15]

[Table 1: Distribution of net electricity supply in Germany by energy source]

Since the share of coal-based power plants in Germany is large and the amount of electricity produced is still growing, the impact of CO2 emissions trading on the economics of these plants is very significant.

According to data provided by the Nuclear Energy Agency and the International Energy Agency, the price of coal rises over the economic lifetime of coal-fired plants.[16] This rise may partly be caused by additional CO2 costs.

The largest impact of emissions trading on electricity generation cost is felt by lignite-fired power plants, followed by hard coal-fired power plants, since burning lignite produces more emissions than burning hard coal.[17] With an assumed emission price of 20 €/tCO2, the power generation costs of a lignite-fired plant would increase by 63%, from 25.4 €/MWh to 41.4 €/MWh, whereas the generation costs of a hard coal-fired plant would rise by 48%, from 30.2 €/MWh to 44.8 €/MWh.[18]
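The CO2 intensities implied by these figures can be recovered with a short back-of-the-envelope calculation (the cost figures are the ones quoted above; the check itself is only illustrative and is not part of the cited source):

```python
# Back out the implied CO2 intensity from the quoted generation costs:
# cost_with - cost_without = intensity * co2_price
co2_price = 20.0  # €/tCO2, the assumed emission price from the text

plants = {
    # name: (cost without CO2 [€/MWh], cost with CO2 [€/MWh])
    "lignite":   (25.4, 41.4),
    "hard coal": (30.2, 44.8),
}

for name, (base, with_co2) in plants.items():
    intensity = (with_co2 - base) / co2_price      # implied tCO2 per MWh
    increase = (with_co2 - base) / base * 100.0    # percentage cost increase
    print(f"{name}: ~{intensity:.2f} tCO2/MWh, +{increase:.0f}% generation cost")
```

The implied intensities, about 0.80 tCO2/MWh for lignite and 0.73 tCO2/MWh for hard coal, reproduce the 63% and 48% cost increases cited above.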

The competitiveness of coal-fired plants is also influenced by including CO2 prices in the costs. Figure 1 represents the marginal cost curve based on total installed capacity and facilities’ operating costs for Europe.[19] As can be seen, adding the CO2 price to production costs can make coal power plants less competitive. The merit order of most electricity plants stays the same after adding 20 €/tCO2 to the costs, but coal-based power plants move towards the less competitive end of the curve.

These facts and evident changes raise many questions, such as: how long will electricity from fossil fuels stay competitive, and how is the extraction of fossil fuels influenced by CO2 prices?

1.2. Problem definition

From all of the above it can clearly be seen that the CO2 price influences the value of coal and its extraction path.[20] The questions this thesis deals with are how the extraction path is affected by the CO2 price, and what the optimal path of using coal is. For many companies, e.g. in coal mining and coal utilization, this question is essential, since they already face significant changes in profitability. The thesis aims to describe the optimal extraction path of an exhaustible resource (coal), first without and then with CO2 considerations. This allows the two paths to be compared and the changes between them to be seen. Coal-related industries are discussed here, but the approaches can be applied similarly to other exhaustible fossil fuels.

Since coal is an exhaustible resource, we will use the economic theory of exhaustible resources, more precisely Hotelling’s theory, to describe its optimal extraction path. Hotelling’s rule is one of the necessary conditions for the optimality of the extraction path. The optimal extraction path is the one along which the miner maximises his profit.

Beyond that, we widen the scope of the work, change the condition of maximising profit, and look at the case in which a miner aims to prolong the lifetime of the mine as much as possible. We also consider different market types: competitive and monopoly. For modelling all the scenarios under the mentioned conditions, a single mine situated in Germany will be used, and we assume that all coal is burned for electricity production at a power plant belonging to the same company as the mine.

1.3. Relevance

We aim to determine how the EU ETS influences the extraction path of coal and its value. This question is very important for the mine owner, as it allows him to choose the right production and exploitation strategy under the new market conditions with CO2 costs. That is essential for the economic survival of the miner. Our task is therefore to determine the influence of the CO2 price on the extraction path of a coal mine. First, we construct the model without consideration of the CO2 price under two different market conditions, and afterwards we include CO2 price considerations. As mentioned before, we also discuss the case in which a miner wants to maximize the lifetime of the mine; the reasons for that might be saving jobs or governmental directives. This case, too, is studied under different market types.

1.4. Goals

The goal of the work is to construct simplified models, on the basis of Hotelling’s theory, which determine the optimal extraction paths of coal, and the extraction paths that maximize lifetime, for a single mine situated in Germany under different market conditions, without and with CO2 price consideration. Afterwards, by feeding numerical data into the models, we aim to show the scale of the CO2 price’s effect on the extraction path.

1.5. Structure

The current chapter, chapter one, gives an introduction to the topic, determines the goals of the paper, explains the motivation of the research, and supports it with topical data.

The second chapter contains the theoretical basis for the further research. It describes Hotelling’s rule for the extraction of exhaustible resources, discusses the crucial points of the theory, and gives the basic model of optimal extraction of an exhaustible resource.

In the third chapter, models of optimal extraction of coal under different conditions are developed. The models first represent the optimal extraction path in a competitive market and then in a monopoly market. Next, cases are discussed in which the company maximises the lifetime of the mine, again for both market types. Afterwards, the CO2 price is integrated into the models, and the change in extraction paths is described. At the end, two numerical examples are given and calculated to find the optimal extraction paths, first without the CO2 price and then with it.

The last chapter, chapter four, summarises the whole master thesis and its results.

Chapter 2: The theory of exhaustible resources

2.1. Overview

This chapter is dedicated to Hotelling’s theory itself, since we use it to determine the extraction paths of coal. It contains the theoretical background for the subsequent model construction and allows a deeper understanding of the theory. First, Hotelling’s rule is discussed. Afterwards, we discuss different parameters which can influence the rule, since these considerations are necessary for constructing the models and making appropriate assumptions for them. At the end of this chapter the basic model of optimal extraction of an exhaustible resource is given. On the basis of this model, in the following chapter, we build models that take different market conditions and the CO2 price into consideration.

The main questions of the economics of exhaustible resources are: what is the optimal rate of exploitation of the resource by a company, what is the price path of the exhaustible resource, and how does it change through time? These are the questions we are interested in. And since coal is an exhaustible resource, this theory is applicable to our case.

Exhaustible resources are those that are available in fixed quantities; they do not exhibit significant growth or renewal over time. Coal is an exhaustible resource: its amount in deposits is fixed and does not grow over time. Pindyck distinguishes between exhaustible and non-renewable resources[21] by noting that, while the latter do not exhibit growth or regeneration, new reserves can be acquired through exploratory effort and discovery.[22] Since the former term is more widespread, in this work the term exhaustible resources will be used for this type of resource.

In 1914, L. C. Gray dealt with questions of natural resource economics. He examined the supply behaviour over time of an individual extractor who anticipates a sequence of real prices and attempts to maximize discounted profits.[23] Harold Hotelling extended Gray’s theory, predicting the sequence of market prices that Gray had taken as given, in his 1931 work “The Economics of Exhaustible Resources”, which became the seminal paper on the economics of exhaustible resources.[24]

2.1.1. Hotelling’s rule

Hotelling’s rule, as described in his paper “The Economics of Exhaustible Resources”, is an economic theory pointing out how prices should behave under a specified (and very restrictive) set of conditions.[25]

It states that competitive mine owners, maximizing the present value of their initial reserves, should extract a quantity such that the price of the exhaustible resource rises at the rate of interest.[26] In other words, if we assume that P0 is the initial price of the resource, Pt is the price of the resource at time t, and i is the interest rate, then:[27]

Pt = P0 e^(i·t) (1)

Hotelling’s rule is based on the following assumptions:[28]

§ the mine owner’s objective is to maximize the present value of his current and future profits. This requires that extraction takes place along an efficient path in a competitive industry equilibrium, which implies that all mines are identical in terms of costs and that they are all price takers in a perfect and instantaneous market of information.

§ the mine is perfectly competitive and has no control over the price it receives for its production.

§ mine production is not constrained by existing capacity; it may produce as much or as little as it likes at any time during the life of the mine.

§ the ore deposit has a capitalized value. That is, a copper or gold deposit in the ground is a capital asset to its owner (and society) in the same way as any other production facility. Furthermore, he assumed that the richest and most accessible deposits would be mined first, and that increasing scarcity (after exhaustion of the best mines) would confer capitalized value on inferior deposits, which could then be mined.

§ the resource stock is homogeneous and consequently there is no uncertainty about the size, grade and tonnage of the ore deposit. Current and future prices and extraction costs are known. This implies that an ore body has uniform quality or grade throughout and that there is no change in the grade of the ore as mining proceeds. Miners and grade control officers, who endeavour to supply the mill only with ore above a certain grade, recognize this fifth assumption to be a major departure from reality. The topic of uncertain reserves is discussed in more detail in section 2.1.5 of the thesis.

§ The sixth assumption is that the costs of mining or extraction do not change as the orebody is depleted. Again, this assumption does not recognize that all mines face increasing costs as the ores are depleted. Underground mining costs increase as the mining face becomes longer and deeper and moves further away from the shaft system, while in open pit operations haul roads become longer and pits become progressively larger and deeper. A rider to Hotelling’s assumption that the marginal unit (standard mining unit) is accessible at the same constant cost is the assumption that the marginal cost of extraction in this particular case is zero. In addition, it implies that the market price and the rate of extraction are connected by a stable, downward-sloping demand curve for the resource.[29] In this constrained model the size of the remaining stock declines without ever being augmented by exploration discoveries. Section 2.1.4 of the thesis is also dedicated to the cost of extraction.

§ The final assumption is that there is no technological improvement during the life of the mine and that no new additions to the resource stock are contributed by exploration. Sections 2.1.7 and 2.1.8 discuss technological progress and “backstop” resources, which are also connected to technological progress.
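As a minimal numerical sketch of equation (1), the net price path can be tabulated for an illustrative initial price and interest rate (both values are assumptions chosen only for illustration, not data from the thesis):

```python
import math

P0 = 10.0  # initial net price in €/t (illustrative assumption)
i = 0.05   # interest rate (illustrative assumption)

# Under Hotelling's rule the net price grows at the rate of interest:
# Pt = P0 * e^(i*t)
for t in range(0, 21, 5):
    Pt = P0 * math.exp(i * t)
    print(f"t = {t:2d}: Pt = {Pt:6.2f} €/t")
```

Whatever the starting point, the discounted net price Pt·e^(-i·t) stays constant along this path, which is exactly why the owner is indifferent about when to extract.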

Hotelling’s model predicts a general rise in commodity prices over time. The model has been used by numerous authors as a useful reference point in discussions on the various dimensions of mineral supply and availability. Among the factors that the model helps introduce are that:[30]

§ Prices are a useful indicator of scarcity, if markets are functioning well (section 2.1.3 discusses the question of resource scarcity)

§ The effects of exploration and technological innovation significantly and importantly influence mineral availability over time

§ Market structure matters (competition versus monopoly)

§ Mineral resources are not homogeneous

§ Backstop technologies limit the degree to which prices can increase

§ Substitution is an important response to increased scarcity

§ Changes in demand influence price and availability.

In other words, the model provides a vehicle for introducing the various dimensions of mineral supply and scarcity.[31]

But since Hotelling’s rule relies on a number of assumptions, it might not coincide with reality completely. The next part discusses the empirical validation of Hotelling’s rule.

2.1.2. Empirical validation of Hotelling’s rule

All the assumptions of the model mentioned above diminish the potential value of applying the model for the miner in the real world. In an attempt to validate Hotelling’s rule, much research effort has been directed at empirical testing of the theory. Unfortunately, no consensus has yet emerged from the empirical analyses.[32]

One way of testing Hotelling’s rule seems clear: collect time-series data on the price of a resource, and see if the proportionate growth rate of the price is equal to the interest rate i. This was done by Barnett and Morse. They found that resource prices, including iron, copper, silver and timber, fell over time, which was a most disconcerting result for proponents of the standard theory.[33] Other research came up with quite different results, which could not establish whether the theory is right or wrong.
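That test can be sketched in a few lines: regress the logarithm of the price on time and compare the slope with the interest rate. The price series below is synthetic, generated only to show the mechanics of the procedure; real studies use observed price data.

```python
import math

# Synthetic real-price series growing at a true rate of 3% per year.
true_rate = 0.03
prices = [100.0 * math.exp(true_rate * t) for t in range(30)]

# The proportionate growth rate is the OLS slope of log(price) on time.
n = len(prices)
ts = list(range(n))
logs = [math.log(p) for p in prices]
t_mean = sum(ts) / n
log_mean = sum(logs) / n
slope = (sum((t - t_mean) * (l - log_mean) for t, l in zip(ts, logs))
         / sum((t - t_mean) ** 2 for t in ts))

print(f"estimated proportionate growth rate: {slope:.4f}")
```

If the estimated slope were systematically equal to the interest rate, the data would be consistent with Hotelling’s rule; Barnett and Morse’s falling price series amount to a failure of exactly this comparison.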

But the problem is far more difficult to settle than this, and a direct examination of resource prices is not a reasonable way to proceed. The variable Pt in Hotelling’s rule is the net price (or rent, or royalty) of the resource, not its market price. Roughly speaking, these are related as follows:

pt = Pt + b (2)

where pt is the gross (or market) price of the extracted resource, Pt is the net price of the resource (unextracted), and b is the marginal extraction cost. According to equation (2), if the marginal cost of extraction is falling, pt might be falling even though Pt is rising. So, evidence of falling market prices cannot, in itself, be regarded as invalidating the Hotelling principle.[34]
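Equation (2) can be illustrated with a small numerical sketch: if the net price Pt rises at the rate of interest while the marginal extraction cost b falls with technical progress, the observable market price pt can decline even though Hotelling’s rule holds. All figures below are assumptions chosen only to make the point visible:

```python
import math

i = 0.05        # interest rate (assumed)
P0 = 10.0       # initial net price, €/t (assumed)
b0 = 40.0       # initial marginal extraction cost, €/t (assumed)
decline = 0.06  # annual rate of decline in extraction cost (assumed)

for t in (0, 5, 10):
    net = P0 * math.exp(i * t)           # Pt rises at the rate of interest
    cost = b0 * math.exp(-decline * t)   # b falls with technical progress
    market = net + cost                  # pt = Pt + b, equation (2)
    print(f"t = {t:2d}: net = {net:5.2f}, cost = {cost:5.2f}, market = {market:5.2f}")
```

Here the market price falls from 50 €/t at t = 0 to roughly 38 €/t at t = 10 while the net price rises throughout, which is exactly why falling market prices do not refute the rule.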

This suggests that the right data to use is the resource net price, but this is an unobservable variable, as is i. It is, however, possible to construct a proxy for it by subtracting marginal costs from the gross market price to arrive at the net price. This difficult approach was pursued by a number of researchers. Slade made one of the earliest studies of this type. She concluded that some resources have U-shaped quadratic price paths, having fallen in the past due to changes in demand or costs of extraction, but later rising.[35] Another study of this type, by Stollery, generally supported the Hotelling hypothesis with an example of the nickel market, calculating the resource rent per ton of nickel.[36] Thirdly, Halvorsen and Smith tested the theory and concluded that, “using data for the Canadian metal mining industry, the empirical implications of the theory of exhaustible resources are strongly rejected”.[37]

If it can be shown that prices of an exhaustible resource did not rise at the rate i, it does not necessarily mean that Hotelling’s rule is wrong. There are several circumstances in which resource prices may fall over time even though Hotelling’s rule is being followed. For example, a sequence of new mineral discoveries could lead to a downward-sloping path of the resource’s net price; Pindyck first demonstrated this in his seminal paper. If resource extraction takes place in non-competitive markets, the net price will also rise less quickly than the discount rate. And in the presence of technical progress continually reducing extraction costs, the market price may fall over time, thereby contradicting a simple Hotelling rule.[38]

The facts named above show the numerous contradictions which researchers face when dealing with Hotelling’s rule. But in spite of all these problems, the theory remains appealing. In their conclusion, Devarajan and Fisher state that Hotelling’s article is “the sole source of work in a vigorously growing branch of economics”.[39] Solow stated that “good theory is usually trying to tell you something, even if it is not the literal truth”.[40] So although the economics of exhaustible resources does not cover the real world of mining and mineral extraction to any large extent, it is still worthwhile to re-examine the theory. Also, many studies have relaxed Hotelling’s assumptions, which introduced flexibility and widened the scope of the model’s applications.[41]

Next, some of the most important factors influencing the Hotelling model will be discussed.

As can clearly be seen from formula (1), the main variable is the price of the resource. On what does it depend? Of which parameters is it a function? Since the thesis considers a single-mine case, the discussion mainly takes single-mine factors into consideration, which are:

§ scarcity rent (see section 2.1.3)

§ cost of extraction (see section 2.1.4)

§ uncertain reserves – the amount of the resource left in the mine, discovery of new reserves (see section 2.1.5)

§ demand in the market (see section 2.1.6)

§ technological progress (see section 2.1.7)

§ “backstop” technologies (see section 2.1.8)

§ market structure: competitive (see section 3.3.1) or monopoly (see section 3.3.2)

We now take a closer look at these parameters, since the later description of the scenarios in different markets may require taking some of these facts into consideration.

2.1.3. Resource Scarcity

Hotelling’s rule determines the price of an exhaustible resource and its extraction path. This price, along with other costs, covers resource scarcity, and a large part of Hotelling’s theory is dedicated to resource scarcity. Since it may influence the price of the resource and the extraction path, we discuss it in more detail.

Worries about resource scarcity can be traced back to medieval times in Britain, and have surfaced periodically ever since. The scarcity of land was central to the theories of Malthus and other classical economists.

What do we mean by resource scarcity? One use of the term – to be called absolute scarcity – holds that all resources are scarce, as the availability of resources is fixed and finite at any point in time, while the wants which resource use can satisfy are not limited.[42]

But this is not the usual meaning of the term in general discussions about natural resource scarcity. In these cases, scarcity tends to be used to indicate that the natural resource is becoming harder to obtain, and requires more of other resources to obtain it. The relevant costs to include in measures of scarcity are both private and external costs. It is important to recognize that, if private extraction costs are not rising over time, social costs may rise if negative externalities such as environmental degradation or depletion of common property resources are increasing as a consequence of extraction of the natural resource. Thus, a rising opportunity cost of obtaining the resource is an indicator of scarcity – let us call this use of the term relative scarcity.[43]

There are several indicators that one might use to assess the degree of scarcity of particular natural resources, and of natural resources in general, including physical indicators (such as reserve quantities or reserve-to-consumption ratios), marginal resource extraction cost, marginal exploration and discovery costs, market prices, and resource rents.

Scarcity is concerned with the real opportunity cost of acquiring additional quantities of the resource. This suggests that the marginal extraction cost of obtaining the resource from existing reserves would be an appropriate indicator of scarcity. Unfortunately, no clear inference about scarcity can be drawn from extraction cost data alone. Barnett and Morse, studying marginal resource extraction costs, found no evidence of increasing scarcity, except for forestry.[44]

The most commonly used scarcity indicator is time-series data on real (that is, inflation-adjusted) market prices. It is here that the affinity between tests of scarcity and tests of the Hotelling principle is most apparent. Market price data are readily available, easy to use and, like all asset prices, are forward-looking, to some extent at least. Use of price data has three main problems. First, prices are often distorted as a consequence of taxes, subsidies, exchange controls and other governmental interventions. Reliable measures need to be corrected for such distortions. Secondly, the real price index tends to be very sensitive to the choice of deflator. Should nominal prices be deflated by a retail or wholesale price index (and for which basket of goods), by the GDP deflator, or by some input price index such as manufacturing wages?[45]

The third major problem with resource price data is that market prices do not in general measure the right thing. An ideal price measure would reflect the net price of the resource. Hotelling’s rule shows that it rises through time as the resource becomes progressively scarcer. But net resource prices are not directly observed variables, and so it is rather difficult to use them as a basis for empirical analysis.[46]

Stern distinguishes two major concepts of scarcity: exchange scarcity and use scarcity. Rents and prices measure the private exchange scarcity of stocks and commodities, respectively, for those wishing to purchase them. They are not necessarily good measures of scarcity for society as a whole or for resource owners. Though originally intended as an indicator of the classical natural or real price, unit cost can be reinterpreted as an indicator of use scarcity. Unit cost or related measures are possible indicators of use scarcity but are not perfect either as a social scarcity indicator – they do not reflect downstream technical improvements in resource use, availability of substitutes, or, as in the case of price, the impact of environmental damage associated with resource extraction and use on welfare. All individual indicators of scarcity have limitations. There is no “correct” way to measure resource scarcity.[47]

2.1.4. Cost of extraction

The cost of extraction of an exhaustible resource is discussed in this section since, like resource scarcity, these costs are also included in the price of the resource. Any change in them can affect the resource price and its extraction path, so we will later need to make appropriate assumptions about them.

A number of researchers have attempted to provide deterministic explanations for deviations from the Hotelling price path based on the properties of the extraction cost function [Solow and Wan (1976), Hanson (1980), and Roumasset, Isaak, and Fesharaki (1983)]. They argue that, holding technology and knowledge of the stock of the resource constant, the most easily accessible sources of the resource will be exploited first. This suggests that extraction costs should rise over time, and this will affect the resource price path [Dasgupta and Heal (1974, 1979)]. However, extraction costs alone, unless changed unexpectedly, do not explain why prices have not risen.[48]

2.1.5. Uncertain Reserves

A change in reserves may influence the resource’s scarcity value, the price of the resource, and demand in the market; any of these changes affects Hotelling’s rule. We discuss changes in reserves to gain a better understanding of them, as we will then need to make an assumption about reserves in order to construct the model.

Changes in extraction and exploration technology all affect the size of the stock of proven, or extractible, reserves. This uncertainty about the reserve base contrasts with another underlying assumption in the Hotelling model. Constant real appreciation in exhaustible resource prices is derived in this model because the reserve stock is known with certainty (as are the demand function and extraction costs). In practice, however, reserves are not known with certainty and have increased dramatically over time, often in large, discrete leaps.[49]

The effect of uncertain reserves on the optimal depletion path has been examined in a number of studies. An unanticipated shock to reserves can cause a shift among optimal paths. A sudden, unanticipated increase in proven reserves causes the price trajectory to fall to assure full resource exhaustion. Observed prices in these models fall sharply when the discovery is made.[50]
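This shift can be sketched numerically under simple assumptions: a linear demand schedule q = a - b·p, a net-price path following equation (1), and a choke price a/b at which demand vanishes. Solving for the initial price whose path exactly exhausts a given stock shows the whole trajectory dropping when reserves jump (every parameter below is illustrative):

```python
import math

a, b, i = 100.0, 1.0, 0.05  # demand q = a - b*p and interest rate (all assumed)

def cumulative_extraction(p0, horizon=500):
    """Total quantity demanded along the price path p_t = p0 * e^(i*t)."""
    total = 0.0
    for t in range(horizon):
        p = p0 * math.exp(i * t)
        if p >= a / b:       # choke price reached: demand is zero from here on
            break
        total += a - b * p
    return total

def initial_price(stock):
    """Bisect for the initial price whose path exactly exhausts the stock."""
    lo, hi = 1e-6, a / b
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if cumulative_extraction(mid) > stock:
            lo = mid         # extracting too much, so start from a higher price
        else:
            hi = mid
    return (lo + hi) / 2.0

p_before = initial_price(1000.0)  # proven reserves before the discovery
p_after = initial_price(2000.0)   # reserves after an unanticipated discovery
print(f"initial price with S = 1000: {p_before:.2f}")
print(f"initial price with S = 2000: {p_after:.2f}")
```

In this sketch, doubling the stock pushes the initial price sharply down, reproducing the qualitative prediction that observed prices fall when a large discovery is made.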

In addition to unanticipated shocks to the reserve base, a number of these models address the impact of endogenous exploration behaviour on the resource price path. As shown by Arrow and Chang, exploration tends to accelerate as the stock of known reserves declines and the price of the resource rises. With major new discoveries, exploration tends to slow until scarcity again becomes important.[51] The implied price path, therefore, is one that rises and falls, with little apparent trend.

As pointed out by Pindyck, uncertainty about the stock of reserves is consistent with observed price behaviour, although such uncertainty does not fully explain that behaviour.[52] Clearly, reserve shocks have played an important role in preventing the Limits to Growth scenario from occurring by consistently raising the size of the resource stock. The timing of reserve discoveries and shifts in price trajectories, however, do not coincide as precisely as the theory would predict. Announcements of large new deposits have sometimes caused prices to move, but often there is little immediate response.[53]

In any case, the frequency with which shocks to the reserve base have occurred, either through luck or through the endogenous response of enhanced exploration activity, raises an important issue regarding the degree to which these resources really are exhaustible. The steady rise in reserves, despite growing demand (consumption data depict a steady upward trend), may argue for a decreasing scarcity value of the resource over time.[54]


Factors Influencing Sanitation Conditions


This thesis examines the socio-cultural and demographic factors influencing sanitation conditions, identifies the presence of Escherichia coli in household drinking water samples, and investigates the prevalence of diarrhoea among infants. It is based on questionnaire interviews of 120 household heads and 77 caretakers of young children below the age of 5 years, direct observation of clues to household sanitation practice, as well as analyses of household water samples in six communities surrounding Bogoso. Data collected were analysed using SPSS and the Pearson Product Moment Correlation (R) technique. The findings revealed that the sanitation condition of households improved with high educational attainment and ageing household heads. On the contrary, sanitation deteriorated with overcrowding in the household. Furthermore, households whose head practised Traditional religion had better sanitation than those with a Christian head, which in turn had better sanitary conditions than those with a Moslem head. Water quality analysis indicated that 27 of the 30 samples (90%) tested negative for E. coli, whilst 17 samples (56.7%) had acceptable levels of total Escherichia coli. Finally, diarrhoea among infants was found to be highly prevalent, since 47 (61.04%) of the 77 child minders admitted their wards had had a bout of infant diarrhoea. Massive infrastructural development, supported by behavioural change education focussing on proper usage of sanitary facilities, is urgently needed in these communities to reduce the incidence of public health diseases. Intensive health education could also prove vital, and such programmes must target young heads of household, households with large family sizes, and households whose heads are Christians or Moslems.




Efforts to assuage poverty cannot be complete unless access to good water and sanitation systems is part of them. In 2000, 189 nations adopted the United Nations Millennium Declaration, from which the Millennium Development Goals were derived. Goal 4, which aims at reducing child mortality by two thirds for children under five, is the focus of this study. Clean water and sanitation considerably lessen water-linked diseases, which kill thousands of children every day (United Nations, 2006). According to the World Health Organization (2004), 1.1 billion people lacked access to an improved water supply in 2002, and 2.3 billion people fell ill from diseases caused by unhygienic water. Each year 1.8 million people die from diarrhoeal diseases, and 90% of these deaths are of children under five years (WHO, 2004).

The Ghana Water and Sewerage Corporation (GWSC) has traditionally been the major stakeholder in the provision of safe water and sanitation facilities. Since the 1960s, the GWSC has focused chiefly on urban areas at the expense of rural areas, and rural communities in the Wassa West District are no exception. According to the Ghana 2003 Core Welfare Indicators Questionnaire (CWIQ II) Survey Report (GSS, 2005), roughly 78% of all households in the Tamale Metropolis, 97% in Accra, 86% in Kumasi and 94% in Sekondi-Takoradi have pipe-borne water. The report also shows that a few households do not own any toilet facilities and depend on the bush for their toilet needs: 2.1%, 7.3% and 5% for Accra, Kumasi and Sekondi-Takoradi respectively. Access to safe sanitation, improved water and improved waste disposal systems is more of an urban than a rural phenomenon. Among rural poor households, only 9.2% have safe sanitation, 21.1% use an improved waste disposal method and 63.0% have access to improved water. The major diseases prevalent in Ghana are malaria, yellow fever, schistosomiasis (bilharzia), typhoid and diarrhoea. Diarrhoea is of particular concern since it has been recognized as the second most common disease treated at clinics and one of the major contributors to infant mortality (UNICEF, 2004). The infant mortality rate currently stands at about 55 deaths per 1,000 live births (CIA, 2006).

The Wassa West District of Ghana has seen an improvement in water and sanitation facilities during the last decade. Most of the development projects in the district are sponsored by the mining companies, individuals and some non-governmental organisations (NGOs). Between 2002 and 2008, Goldfields Tarkwa Mine constructed 118 new hand-dug wells (77 of which were fitted with hand pumps) and refurbished 48 wells in poor condition. A total of 44 modern-style public water closets were also constructed in their catchment areas. The company also donated 19 large refuse collection containers to the District Assembly and built 6 new nurses' quarters. The Tarkwa Mine has so far spent US$10.5 million, of which 26% went into health, water and sanitation projects, 24% into agricultural development, 31% into formal education, and the remainder into other projects such as roads and community centre construction (GGL, 2008). Golden Star Resources (consisting of the Bogoso/Prestea Mine and the Wassa Mine at Damang) also established a community development department in 2005 and has since invested US$800,000. Their projects include 22 Aqua-Privy toilets and 10 hand-dug wells (all fitted with hand pumps), and they have supplied potable water to villages with their tanker trucks (BGL, 2007). Other development partners complementing the efforts of the central government include the NGOs WACAM, Care International and Friends of the Nation (FON). WACAM is an environmental NGO which monitors water pollution by large-scale mining companies; it has sponsored about 10 hand-dug wells for villages in the district. Care International sponsors hygiene and reproductive health programmes in schools and on radio, and has also donated motorbikes to public health workers in the district who travel to villages.

The aim of all these projects was to improve hygiene and sanitation so as to reduce disease transmission. Despite the efforts of the development partners, water supply and sanitation related diseases remain highly prevalent in the district. Data obtained from the Public and Environmental Health Department of the Ministry of Health (M.O.H., 2008) showed that the ten most prevalent diseases in the district include malaria, acute respiratory infections, skin diseases and diarrhoea. The others are acute eye infection, rheumatism, dental caries, hypertension, pregnancy-related complications and home/occupational accidents. Many more illnesses occur on a smaller scale, including intestinal worms, coughs and typhoid fever. Complete data on the ten most prevalent diseases in the district are attached as Appendix D, but below is a selection of the illnesses that result directly from poor water and sanitation practices.

The number of malaria cases decreased from 350 per 1,000 population in 2006 to 300 per 1,000 in 2008. Despite the decrease, these values are still quite high. The incidence of diarrhoea among infants and of acute respiratory infection remained at 30 and 60 cases per 1,000 population respectively. This can be attributed to several causes, including population growth, lack of uninterrupted services and too few functioning facilities. In fact, according to the World Health Organization (WHO, 2004), an estimated 90% of all cases of diarrhoea among infants can be blamed on inadequate sanitation and unclean water. For example, in a study of 11 countries in Sub-Saharan Africa, only 35 to 80% of water systems in rural areas were operational (Sutton, 2004). Another survey, in South Africa, found that over 70% of the boreholes in the Eastern Cape were not working (Mackintosh and Colvin, 2003). Further examples of sanitation systems in poor condition have been documented in rural Ghana, where nearly 40% of latrines put up with the support of a sanitation programme were uncompleted or unused (Rodgers et al., 2007). In a personal communication with the author, the District Environmental Officer estimated that there are approximately 224 public toilets, 560 hand-dug wells, 1,255 public standpipes and 3 well-managed waste disposal sites in the district. According to the 2006 projection, the population of the district is expected to reach 295,753 by the end of 2009 (WWDA, 2006).
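Incidence figures such as those above are simple rates: case counts divided by the population at risk, scaled per 1,000. A minimal sketch of the calculation follows; the case count is hypothetical and used for illustration only, while the population figure is the district projection cited in the text.

```python
def incidence_per_1000(cases: int, population: int) -> float:
    """Incidence rate expressed as cases per 1,000 population."""
    return cases / population * 1000

# Hypothetical case count (illustrative, not from district records);
# population is the 2009 district projection quoted in the text.
cases_2008 = 88_700
population_2008 = 295_753

rate = incidence_per_1000(cases_2008, population_2008)
print(round(rate))  # → 300 (cases per 1,000 population)
```

The same routine applies to the diarrhoea and acute respiratory infection figures quoted above, provided the underlying case counts and population denominators are known.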

Development partners have in the past concentrated their efforts on the provision of facilities only, paying little attention to the possible causes of the persistence of disease transmission despite those efforts. Relationships between households' socio-cultural and demographic factors and people's behaviour with respect to the practice of hygiene could prove an essential lead towards a solution to the problem. Merely providing a water closet does not guarantee that it will be adopted by the people and used well enough to reduce disease transmission. Epidemiological investigations have revealed that even where latrines are in short supply, diarrhoeal morbidity can be reduced through the adoption of improved hygiene behaviours (IRC, 2001; Morgan, 1990). Access to waste disposal systems, their regular, consistent and hygienic use, and the adoption of other hygienic behavioural practices that block the transmission of diseases are the most important factors. In quite a number of studies from different countries, the advancement of personal and domestic hygiene accounted for a decline in diarrhoeal morbidity (Henry and Rahim, 1990). The World Bank (2003) identifies the demographic characteristics of the household, including the education of members, occupation, size and composition, as influencing the willingness of the household to use an improved water supply and sanitation system. Education, especially for females, results in well-spaced childbirth and a greater ability of parents to give better health care, which in turn contribute to reduced mortality rates among children under 5 years (Grant, 1995). In a study of water resource scarcity in coastal Ghana, Hunter (2004) identified a valid association between household size, the presence of young children and the gender of the household head. It was noted that female heads were less likely to collect water in larger households.
Furthermore, an increasing number of young children present increased the odds of the female head/spouse being the household water collector. Cultural issues play an active part in hygiene and sanitation behaviour, especially among members of rural communities. For example, women are hardly ever seen urinating in public because of a perceived shame in the act, but men may be left alone if found doing it. Likewise, defecating in public is generally unacceptable except where infants and young children are involved, the reason being that faeces from young children are perceived to be free from pathogens and less offensive (Drangert, 2004). Ismail's (1999) work on nutritional assessment in Africa found that people's demographic features, socioeconomic status and access to basic social services such as food, water and electricity correlate significantly with their health and nutrition status. Specifically, factors such as age, gender, township status and ethnicity, which are basic to demography, can play a role in quality of life, especially that of the elderly.

This research assessed people's practice of personal hygiene in Bogoso and the surrounding villages. It also identified the common bacteria present in household stored water. Furthermore, it examined the relationships between selected socio-cultural and demographic factors of households and the sanitation practices of their members.


The Wassa West District in the Western Region is home to six large-scale mining companies and hundreds of small-scale and illegal mining units. Towns and villages in the district have been affected by mining, forestry and agricultural activities for over 120 years (BGL EIS, 2005). As a result, the local environment has been subjected to varying degrees of degradation. For example, water quality analysis carried out in 1989 by the former Canadian Bogoso Resources (CBR) showed that water samples had total coliform bacteria in excess of 16 colonies per 100 ml (BGL EIS, 2005). Most of the water and sanitation programmes executed in the district have had little positive impact, and diarrhoeal diseases are therefore still very prevalent in the towns and villages (see Appendix D on page 80).

However, in order to solve any problem it is important to understand the issues that contribute to it; after all, a problem well identified is said to be half solved. Numerous health impact studies have clearly shown that the upgrading of water supply and sanitation alone, though generally necessary, is not sufficient to attain broad health effects if personal and domestic hygiene are not given equal prominence (Scherlenlieb, 2003). The problems of scarce water and safe sanitation provision in developing countries have been addressed by researchers for quite some time; until recently, however, they were mostly treated as technical and/or economic problems. Even rural water and sanitation issues are repeatedly dealt with from a purely engineering point of view, with only passing reference to social or demographic aspects.

Consequently, relatively little is known about how socio-cultural and demographic influences impinge on hygiene behaviour, which in turn influences the transmission of diseases. The relationship between household socio-cultural factors and the sanitation conditions of households in the Wassa West District, especially the Bogoso Rural Council area, has not been systematically documented, and research investigating such relationships is inadequate.


The following research questions were posed to help address the objectives:

  1. Why are the several sanitation intervention projects failing to achieve desired results?
  2. Why is the prevalence of malaria and diarrhoeal diseases so high in the district?
  3. What types of common bacteria are prevalent in the stored drinking water of households?


The main aim of this research was to investigate people's awareness and practice of personal hygiene, their access to quality water and sanitation, and the possible causes of diarrhoeal diseases, and to suggest ways to reduce the incidence of disease in the community. The specific objectives were:

  1. To assess the quality of stored household drinking water
  2. To establish the extent to which sanitation behaviour is affected by household socio-cultural and demographic factors such as the age and education level of the head.
  3. To investigate the occurrence of diarrhoea among young children (0-59 months old) in the households.
  4. To identify and recommend good intervention methods to eliminate or reduce the outbreak of diseases and improve sanitation.


In addition to the above objectives, the following hypotheses were tested:

  1. Occurrence of infant diarrhoea in the household is independent of the educational attainment of child caretakers.
  2. There is no relationship between households’ background factors and the sanitation conditions of the household.
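Hypotheses of this form are naturally tested with a chi-square test of independence on a contingency table, for example caretaker education against reported infant diarrhoea. The sketch below computes the Pearson chi-square statistic for a 2x2 table using only the standard library; the cell counts are hypothetical (they sum to the 77 caretakers interviewed, but the split is invented for illustration).

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    given as [[a, b], [c, d]] (rows: exposure, columns: outcome)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: caretaker education (rows: none / some formal)
# versus infant diarrhoea reported (columns: yes / no).
observed = [[30, 10],
            [17, 20]]

stat = chi_square_2x2(observed)
# Critical value for df = 1 at alpha = 0.05 is 3.841;
# the null hypothesis of independence is rejected if stat exceeds it.
print(stat > 3.841)  # → True
```

In practice a statistical package such as SPSS, as used in this study, reports the same statistic together with an exact p-value.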



In this chapter, literature related to the subject matter of the study is reviewed. Areas covered are sanitation, hygiene, water quality and diarrhoeal diseases. Theories and models on which the study draws include USAID's Hygiene Improvement Framework, the "F diagram" of Wagner and Lanoix, and the theory of social learning.


Until recently, the policies of many countries focused on household access to latrines as the principal indicator of sanitation coverage, although of late there has been a change and an expansion in the understanding of the term. Sanitation can best be defined as the hygienic collection and disposal of excreta and community liquid waste so as not to endanger the health of individuals or the community as a whole (WEDC, 1998).

Ideally, sanitation should result in the isolation or destruction of pathogenic material and, hence, a break in the transmission pathway. The transmission pathways are well known and are summarised in the "F diagram" (Wagner and Lanoix, 1958) shown in Figure 3.1 below. The more of these paths that can be blocked, the more effective a health and sanitation intervention programme will be.

It should be noted that the health impact indicators of sanitation programmes are not easy to define and measure, particularly in the short run. It therefore seems more reasonable to look at sanitation as a package of services and actions which, taken together, can have some bearing on the health of a person and on health status in a community.

According to IRC (2001:0), issues that need to be addressed when assessing sanitation would include:

  • How complete the sanitation programme is in addressing major risks for transmitting sanitation-related diseases;
  • Whether the sanitation programme adopted a demand driven approach, through greater people’s participation, or supply driven approach, through heavy subsidy;
  • Whether it allows adjustment to people's varying needs and ability to pay;
  • If the programme leads to measurably improved practices by the majority of men and women, boys and girls;
  • If it is environmentally friendly. That is; if it does not increase or create new environmental hazards (IRC, 2001)

Sanitation is a key determinant both of fairness in society and of society's ability to sustain itself. If the sanitation challenges described above are not met, we will not be able to provide for the needs of the present generation without compromising those of future generations. Thus, sanitation approaches must be resource-minded, not waste-minded.


Hygiene is the science of health and its preservation (Dorland, 1997). Health is the capacity to function efficiently within one's surroundings. Our health as individuals depends on the healthfulness of our environment. A healthful environment, devoid of hazardous substances, allows the individual to attain full physical, emotional and social potential. Hygiene is expressed in the efforts of an individual to safeguard, sustain and enhance health status (Anderson and Langton, 1961).

Measures of hygiene are vital in the fight against diarrhoeal diseases, the major fatal diseases of the young in developing countries (Hamburg, 1987). The most successful interventions against diarrhoeal diseases are those that interrupt the transmission of infectious agents at home. Personal and domestic hygiene can be enhanced with such simple actions as the regular use of water in adequate quantity for hand washing, bathing, laundering and the cleaning of cooking and eating utensils; the regular washing and changing of clothes; eating healthy and clean foods; and the appropriate disposal of solid and liquid waste.

Diarrhoeal Diseases

Diarrhoea can be defined in absolute or relative terms based on either the frequency of bowel movements or the consistency (or looseness) of stools (Kendall, 1996). Absolute diarrhoea is having more bowel movements than normal. Relative diarrhoea is defined on the basis of stool consistency: an individual who develops looser stools than usual has diarrhoea even if the stools are within the normal range of consistency.

According to the United States Centers for Disease Control and Prevention (CDC, 2006), with diarrhoea, stools are typically looser whether or not the frequency of bowel movements is increased. This looseness of stool, which can vary all the way from slightly soft to watery, is caused by increased water in the stool. Increased amounts of water in stool can occur if the stomach and/or small intestine secrete too much fluid, if the distal small intestine and colon do not absorb enough water, or if the undigested, liquid food passes through the small intestine and colon too quickly for them to remove enough water. Of course, more than one of these abnormal processes may occur at the same time. For example, some viruses, bacteria and parasites cause increased secretion of fluid, either by invading and inflaming the lining of the small intestine (inflammation stimulates the lining to secrete fluid) or by producing toxins (chemicals) that also stimulate the lining to secrete fluid but without causing inflammation. Inflammation of the small intestine and/or colon from bacteria or from ileitis/colitis can increase the speed with which food passes through the intestines, reducing the time available for absorbing water. Conditions of the colon, such as collagenous colitis, can also impair the capacity of the colon to absorb water.

Escherichia coli O157:H7 is probably the most feared bacterium today among parents of young children. The name refers to the chemical compounds found on the bacterium's surface. Cattle are the main source of E. coli O157:H7, but the bacterium can also be found in other domestic and wild mammals. E. coli O157:H7 became a household word in 1993, when it was recognized as the cause of four deaths and more than 600 cases of bloody diarrhoea among children under 5 years in the North-western United States (US EPA, 1996). The Northwest epidemic was traced to undercooked hamburgers served in a fast food restaurant. Other sources of outbreaks have included raw milk, unpasteurized apple juice, raw sprouts, raw spinach and contaminated water. Most strains of E. coli are not dangerous; this particular strain, however, attaches itself to the intestinal wall and releases a toxin that causes severe abdominal cramps, bloody diarrhoea and vomiting lasting a week or longer. In small children and the elderly, the disease can progress to kidney failure. The good news is that E. coli O157:H7 is easily destroyed by cooking to 160°F (71°C) throughout.

Reducing diarrhoea morbidity with USAID’s Framework

To attain noteworthy improvement in reducing the number of deaths attributed to diarrhoea, its fundamental causes must be addressed. It is estimated that 90% of all cases of diarrhoea can be attributed to three major causes: insufficient sanitation, inadequate hygiene and contaminated water (WHO, 1997). According to USAID, for further progress to be made in the fight against diarrhoea, the focus will need to include prevention, especially in child health programmes. The first approach, case management of diarrhoea, has been tremendously successful in recent years in reducing child mortality. The primary means of achieving this effect has been the introduction and operation of oral rehydration therapy, i.e. the administration of oral rehydration solution and sustained feeding (both solid and fluid, including breast milk).

In addition, health experts have emphasized the need for caretakers to recognize the danger signs early in children under their care and to obtain suitable, timely care to avoid severe dehydration and death. The second approach, increasing host resistance to diarrhoea, has also had some success through the enhancement of a child's nutritional status and vaccination against measles, a common cause of diarrhoea. The third element is prevention through hygiene improvement. Although the health care system has dealt comprehensively with the symptoms of diarrhoea, it has done little to bring down the overall incidence of the disease. Despite a drop in deaths owing to diarrhoea, morbidity (the health burden due to diarrhoea) has not decreased, because health experts are treating the symptoms without addressing the causes. Thus diarrhoea's drain on the health system, its effects on household finances and education, and its additional burden on mothers have not been mitigated. Programmes in several countries have confirmed that interventions can and do reduce diarrhoea morbidity. A critical constituent of successful prevention efforts is an effective monitoring and evaluation strategy.

In order to reduce the transmission of faecal-oral diseases at the household level, an expert group of epidemiologists and water supply and sanitation specialists concluded that three interventions are crucial. These are:

  • Safer disposal of human excreta, particularly of babies and people with diarrhoea.
  • Hand washing after defecation and handling babies’ faeces and before feeding, eating and preparing food, and;
  • Maintaining drinking water free from faecal contamination in the home and at the source (WHO, 1993).

Studies on hand washing, as reported in Boot and Cairncross (1993), confirm that it is not only the act of hand washing but also how well hands are washed that makes a difference. To prevent diarrhoea, its causes must first be fully understood. According to USAID's hygiene improvement framework, a thorough approach to diarrhoea at the national level must tackle the three key elements of any successful programme to fight disease: access to the necessary hardware or technologies, encouragement of healthy behaviours, and support for long-term sustainability. The concept is illustrated in Figure 3.3 below.

The first part of the hardware component, water supply systems, addresses both water quality and water quantity, reducing the risk of contamination of food and drink. Ensuring access to water supply systems can also greatly reduce the time women spend collecting water, allowing more time to care for young children and more time for income-generating activities. The second element, toilet facilities, involves providing facilities to dispose of human excreta in ways that safeguard the environment and public health, typically in the form of various kinds of latrines, septic tanks and water-borne toilets. The third element, household technologies and materials, refers to increased access to such hygiene supplies as soap (or local substitutes), chlorine, filters, water storage containers that have restricted necks and are covered, and potties for small children. Sanitation coverage is important because faecal contamination can spread from one household to another, especially in densely populated areas.


Water quality is defined in terms of the chemical, physical and biological constituents of water. The word "standards" refers to legally enforceable threshold values for the water parameters analysed, while "guidelines" refers to recommended threshold values that do not have regulatory status. This study employs the World Health Organization (WHO) and the Ghana Standards Board (GSB) standards and guidelines in determining the quality of water.

Water Quality Requirements for Drinking Water – Ghana Standards

The Ghana Standards for drinking water (GS 175-Part 1:1998) indicate the required physical, chemical, microbial and radiological properties of drinking water. The standards are adapted from the World Health Organization's Guidelines for Drinking-water Quality, Second Edition, Volume 1, 1993, but also incorporate national standards specific to the country's environment.

Physical Requirements

The Ghana Standards set the maximum turbidity of drinking water at 5 NTU. Other physical requirements pertain to temperature, odour, taste and colour. Temperature, odour and taste are generally required not to be "objectionable", while the maximum threshold values for colour are given quantitatively in True Colour Units (TCU) or Hazen units; the Ghana Standards specify 5 TCU (5 Hazen units) for colour after filtration. The pH required by the Ghana Standards for drinking water is 6.5 to 8.5 (GS 175-Part 1:1998).

Microbial Requirements

The Ghana Standards specify that neither E. coli (or thermotolerant coliform bacteria) nor total coliform bacteria should be detected in a 100 ml sample of drinking water (0 CFU/100 ml). The Ghana Standards also specify that drinking water should be free of human enteroviruses.
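The physical and microbial limits quoted in this section can be collected into a simple screening routine. The sketch below is illustrative only: the field names and the sample dictionary are assumptions rather than any official schema, while the thresholds are those cited from GS 175-Part 1:1998.

```python
# Limits quoted from GS 175-Part 1:1998 (maximum permitted values).
GHANA_LIMITS = {
    "turbidity_ntu": 5.0,              # maximum turbidity
    "colour_tcu": 5.0,                 # maximum colour after filtration
    "ecoli_cfu_100ml": 0,              # must not be detected
    "total_coliform_cfu_100ml": 0,     # must not be detected
}
PH_RANGE = (6.5, 8.5)                  # acceptable pH range

def violations(sample: dict) -> list:
    """Return the parameters of a sample that fail the quoted limits."""
    failed = [param for param, limit in GHANA_LIMITS.items()
              if sample.get(param, 0) > limit]
    ph = sample.get("ph")
    if ph is not None and not (PH_RANGE[0] <= ph <= PH_RANGE[1]):
        failed.append("ph")
    return failed

# Hypothetical household sample for illustration.
sample = {"turbidity_ntu": 7.2, "colour_tcu": 4.0, "ph": 6.1,
          "ecoli_cfu_100ml": 0, "total_coliform_cfu_100ml": 3}
print(violations(sample))  # → ['turbidity_ntu', 'total_coliform_cfu_100ml', 'ph']
```

A sample returning an empty list meets every limit screened here; failing parameters identify which requirement of the standard was breached.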

WHO Drinking Water Guidelines

Physical Requirements

Although no health-based guideline is given by WHO (2006) for turbidity in drinking water, it is recommended that the median turbidity should ideally be below 0.1 NTU for effective disinfection.

Microbial Requirements

As with the Ghana Standards, the WHO guidelines specify that no E. coli or thermotolerant coliform bacteria should be detected in a 100 ml sample of drinking water.

Water Related Diseases

Every year, water-related diseases claim the lives of 3.4 million people, the greater part of whom are children (Dufour et al., 2003). Water-related diseases can be grouped into four categories (Bradley, 1977) based on the path of transmission:

  • waterborne diseases,
  • water-washed diseases,
  • water-based diseases,
  • insect vector-related diseases.

Waterborne diseases are caused by the ingestion of water contaminated by human or animal faeces or urine containing pathogenic bacteria or viruses; they include cholera, typhoid, amoebic and bacillary dysentery and other diarrhoeal diseases. Water-washed diseases are caused by poor personal hygiene and by skin or eye contact with contaminated water; they include scabies, trachoma and flea-, lice- and tick-borne diseases. Water-based diseases are caused by parasites found in intermediate organisms living in contaminated water; they include dracunculiasis, schistosomiasis and other helminth infections. Insect vector-related diseases are transmitted by insects, especially mosquitoes, that breed in water; they include dengue, filariasis, malaria, onchocerciasis, trypanosomiasis and yellow fever.

The Theory of Social Learning

Learning is any relatively permanent change in behaviour that can be attributed to experience (Coon, 1989). According to the social learning theory, behavioural processes are acquired directly through the continually dynamic interplay between individuals and their social environment (McConnell, 1982). For example, children learn what to do at home by observing what happens when their siblings talk back to their parents or throw rubbish into the household compound.

The learning process occurs through reinforcement and punishment. Reinforcement refers to any event that increases the chance that a response will occur again (Coon, 1989). Reinforcement and punishment can also be learned through education, where a person can read about what happens to people as a result of their actions. The elementary unit of society is the household, which can be defined as a residential group of persons who live under the same roof and eat out of the same pot (Friedman, 1992). Social learning is necessary for the household in acquiring the skills pertinent to the maintenance of health-promoting behaviour. Most of our daily activities are learned in the household: individuals begin to learn behaviour patterns from childhood by observing, especially, their parents and later their siblings.

The environment is understood as comprising the whole set of natural or biophysical and man-made or socio-cultural systems, in which man and other organisms live, work or interact (Ocran, 1999). The environment is human life’s supporting system from which food, air and shelter are derived to sustain human life. Humans interact with the physical and man-made environment and this interaction creates a complex, finely balanced set of structures and processes, which evolve over the history of a people. These structures and processes determine the culture of the society, their social behaviour, beliefs and superstition about health and diseases. Social relationships seem to protect individuals against behavioural disorders and they facilitate health promoting behaviour (Barlow and Durand, 1995; Ho

Environmental Threats to Coastal Communities in the Coral Triangle: Patterns, Responses, and the Path Forward


This research paper illustrates the detrimental impacts of environmental threats, including overfishing, pollution and climate change, on coastal communities in the Coral Triangle region. The paper begins with a brief introduction and background establishing the significance of this region to conservation efforts. It then details several critical issues facing the countries that make up the Coral Triangle and how each issue impacts the local population. Next, the paper discusses responses to these environmental threats, focusing on two individual case studies along with a key regional initiative. The paper concludes with a summary of the information presented and recommendations for future action.


The Coral Triangle is a regional area located in the western Pacific Ocean that contains the most diverse coral reef species in the world. The Coral Triangle covers six million square kilometers of coastal waters across six unique countries: Indonesia, Malaysia, Papua New Guinea, the Philippines, the Solomon Islands, and Timor-Leste. Over 120 million people live in coastal communities (within 100 km of the coast) in the Coral Triangle region, relying on coastal waters for their livelihoods and as a source for food (Pomeroy et al., 2015). This paper details the dangers posed by environmental threats including overfishing and destructive fishing, pollution, and climate change for coastal communities in the Coral Triangle, including their potential consequences for the region, actions taken to date, and recommendations for the future. Through two case studies, the paper illustrates best practices and lessons learned at the local level. A study of Malaysia highlights national and state level policy responses enacted in response to environmental threats in a country with a high human development index (HDI) and annual gross domestic product (GDP). A separate case study on the Solomon Islands serves to illustrate the potential response to these threats in a lower resource setting. Both case studies identify lessons learned along with common pitfalls and barriers to success. The paper then details a key regional initiative meant to streamline resources and ensure coordinated efforts to combat shared threats. Finally, the paper concludes with a brief summary of findings and recommendations for Coral Triangle communities moving forward.


Human beings have long contributed to the destruction of the environment; however, this damage has increased exponentially during the era of globalization. Developed nations, including the United States, continue along a pattern of overconsumption and resource exploitation, while less developed nations are industrializing with frequent disregard for the environment. As the global population continues to rise, the scale of human-led destruction will only worsen. Although climate change is a global issue that will impact every country, in few places are the immediate stakes as high as in the Coral Triangle. The Coral Triangle countries are particularly vulnerable to the effects of climate change because of their relative size, level of food insecurity, geographical position in an area highly susceptible to extreme weather events, and dependence on coastal and marine biodiversity as a source of income and food (Valmonte-Santos, Rosegrant, & Dey, 2016).

The waters of the Coral Triangle house a wide range of species, including coral, fish, and other marine life. Over half of the world’s coral reefs are located in the Coral Triangle, including 76% of known coral species and 37% of all known fish species (Pomeroy et al., 2015). This rich biodiversity makes the Coral Triangle a global conservation priority. Beyond global environmental considerations, though, are the immediate consequences that the continued degradation of the Coral Triangle poses to local coastal communities. Damaging practices, including overfishing and pollution, are contributing to rapid losses in species diversity, threatening the main source of protein and income for many. The destruction of reefs and other marine barriers leaves coastal communities vulnerable to rising seas. Impacts of climate change, including increased ocean acidification, warmer ocean temperatures, and more frequent and destructive extreme weather events, compound these issues. The consequences of deteriorating coastal biodiversity are already becoming evident. Without a coordinated effort to reverse current trends, the impacts of environmental degradation on the coastal communities that rely on these waters for their livelihood and sustenance will only worsen.

The six countries positioned within the Coral Triangle represent socially and economically diverse populations. Coastal communities in the Coral Triangle have thrived for thousands of years living off the sea. Fortunately, the majority of these countries recognize the immediate threat that continued environmental degradation poses to their national interests. Higher-income countries, including Malaysia and the Philippines, have the ability to dedicate significant resources to restoring their coastal waters and preventing further damage. Governments and local communities in lower resource settings, including the Solomon Islands and Timor-Leste, are equally committed to preventing further damage. In addition to national and community level responses, both high and low-income countries in the Coral Triangle must work together through regional initiatives focused on protecting their collective coastal resources and safeguarding the future for coastal communities.

Critical Issues

Overfishing and Destructive Fishing

Coastal communities in the Coral Triangle rely on local marine resources as a source of income and food. Local sources of income tied to the coastal environment include fishing, nature tourism, and marine trade, among others. Overfishing and destructive fishing represent the “most significant local threats to coral reef ecosystems in the Coral Triangle region” (Huang & Coelho, 2017). Overfishing occurs when fish are caught in such large quantities and in such a rapid fashion that species cannot adequately replenish. In the Coral Triangle, overfishing is driven by both increased local demand and demand for fish from countries that lie outside the region (WWF, 2009).  Destructive fishing methods are typically employed in low-resource areas that are also overfished. Local fishermen who lack the financial means to procure traditional fishing equipment turn to destructive methods such as dynamite or poison.  As coral reefs are fragile ecosystems, overfishing and destructive fishing methods result in significant and long-lasting harm. Overfishing reduces the resiliency of reefs to adapt to stressors, including disease and increasing ocean temperatures due to climate change. Destructive fishing methods, including poison and dynamite, destroy coral reefs at staggering levels.

Despite laws meant to curb these destructive practices, both overfishing and destructive fishing methods remain pervasive throughout the Coral Triangle. In contrast to large-scale commercial fishing, local fishermen typically rely on fish that can be caught closer to the coast. If these coastal species are overfished, local fishermen lack the means to search deeper into the ocean to replace their lost yields. Additionally, as income and population continue to rise in these communities, the demand for fish as a source of protein is also projected to increase (Points & Robertson, 2017). This increase in demand will be unsustainable if nothing is done to protect local species from being overfished to the point of collapse. Overfishing and destructive fishing also damage coral reefs and contribute to coastal destruction, which in turn hampers the local tourism industry. Overfishing and destructive fishing methods therefore not only impact locals' ability to make a living, but also threaten the food security of local and regional communities who rely on fish as a source of food.

Various laws and regulations have been put into place at the regional, national, and local levels aimed at curbing overfishing and destructive fishing practices. Local governments have worked in concert with international donors and non-governmental organizations (NGOs) to introduce sustainable fishing practices that provide an alternative to traditional and destructive methods. In 2010, the Asian Development Bank (ADB) began working with several countries in the Coral Triangle region, including the Solomon Islands and Timor-Leste, to improve their overall management of coastal and marine resources while simultaneously improving food security. Alternative approaches promoted by the ADB initiative included aquaculture, low-cost fish-aggregating devices (FADs), and improved natural resource management (NRM) through marine protected areas (MPAs) and other means. A study was undertaken by the International Food Policy Research Institute from 2011 to 2013 to measure the impact of the strategies and approaches implemented through the ADB project. Results from the study confirmed the potential for high returns on investments in fisheries development strategies, particularly in relation to NRM approaches and the deployment of low-cost inshore FADs. The study predicted that in the Solomon Islands alone, an annual investment of $230,000 in FADs could potentially generate a yearly income of more than $5 million in 2035 (Points & Robertson, 2017). These and other interventions provide a clear basis for the deployment of sustainable fishing practices in order to reduce overfishing and destructive fishing methods. Sustainable fishing will help protect coral and marine species along with the livelihoods and food sources of coastal communities.


Pollution

Another significant threat to the Coral Triangle and its coastal communities is pollution of the air, land, and water. Land-based pollution is transported via rivers and wind, while marine-based pollution occurs due to marine dredging, mining, dumping and shipping (Todd, Ong, & Chou, 2010). A large quantity of marine-based pollution stems from the bustling trade routes located within the Coral Triangle region. Shipping traffic results in oil spills, trash disposal, ballast waste, and pollution from ports. Unsurprisingly, marine-based pollution in the Coral Triangle is highest in the most heavily populated areas of Indonesia and Malaysia where marine trade is the most active (WWF, 2009). The rapid development and urbanization of coastal lands in Coral Triangle countries has also contributed to the pollution of local waters. Coastal development and logging operations have increased the scale of watershed-based pollution, sending nutrient fertilizer runoff, sewage and polluted sediment into coastal waters. Runoff from land-based activities is intensified in many Coral Triangle countries due to heavy rains and steep hills stemming from its geographical location (WWF, 2009).

Pollution in the Coral Triangle contributes to the death of coral and marine species, which in turn damages the livelihoods of those living in coastal communities. Increased development with a lack of foresight and sustainable urban planning leads to overcrowded coastal communities with inadequate infrastructure to appropriately manage potential pollutants. Tourism-related development also negatively impacts coastal communities, contributing to runoff and pollution during the construction and management phase. According to the World Resources Institute (WRI), “development along the coast threatens more than 30 percent of the Coral Triangle Region’s reefs, with more than 15 percent of reefs under high threat” (Burke et al., 2012).   Individuals living in coastal communities also contribute to the pollution problem through improper waste management and poor hygiene practices.

Pollution results in a marked drop in water quality, creating unsustainable conditions for coral and marine life. Increases in nutrients in the water from runoff result in eutrophication, which can be highly destructive to coral species. Eutrophication is thought to have been responsible for the loss of up to 60% of coral diversity in parts of Indonesia (Todd et al., 2010). Destructive fishing methods using poison, and the burning of forests for resources such as palm oil, also contribute to the pollution of coastal waters, increasing the level of toxins in the water. Pollution of coastal waters in the Coral Triangle could be greatly reduced through increased efforts towards sustainable development.

Climate Change

The preceding two sections focused on localized environmental threats to the Coral Triangle and its coastal communities. Climate change, in contrast, encompasses a variety of global environmental threats that impact every corner of the globe, including the Coral Triangle. Climate change refers to the human-induced accelerated warming of the planet, stemming from an increase in the amount of greenhouse gases (GHGs) released into the atmosphere. The scale and magnitude of the threat posed by climate change is nearly universally agreed upon by the scientific community and by international, regional, and local stakeholders across the globe. Specific climate change related threats to the Coral Triangle include increasing sea surface temperatures (SST), ocean acidification (and other chemical alterations), rising sea levels, and increased extreme weather events, among others (Dey, Gosh, Valmonte-Santos, Rosegrant, & Chen, 2016). Climate change is interrelated with the local environmental threats previously discussed in this paper. The impacts of climate change will be compounded if local coastal communities do not reduce environmentally hazardous practices such as overfishing and destructive fishing while also curbing pollution. Local communities will be forced to adapt to the realities of climate change in order to ensure their ability to survive in the future.

Due to their geographical location, countries within the Coral Triangle region are particularly vulnerable to the impacts of climate change. The Coral Triangle is already host to some of the warmest SSTs in the world (WWF, 2009). Coastal ecosystems will likely be fundamentally altered by further climate-driven increases in SSTs. Factors linked to climate change, including increased SSTs, ocean acidification, extreme weather events, and erratic rainfall, will all contribute to the rapid degradation of the local coastal economy (Rosegrant, 2016). Climate change related coastal degradation will result in reduced fish production, leading to fish stock shortages. Increased SSTs will also increase the scale and frequency of coral bleaching events. Coral bleaching occurs when coral species experience stress due to warming waters, and it significantly reduces the ability of reefs to recover from disease or additional stresses (Weeks et al., 2014). Coral bleaching may be accelerated by altered ocean chemistry due to increased absorption of carbon dioxide resulting from high greenhouse gas emissions. Coral bleaching events reduce the vibrancy and biodiversity of the coral reefs that coastal communities rely on both for marine resources and tourism.

Rising seas and extreme weather events can wreak havoc on coastal communities in the Coral Triangle, particularly in areas where locals reside in homes that are below or only slightly above sea level. The global sea level is steadily increasing and is projected to rise an additional 30 to 60 cm by 2100. Negative consequences of global sea level rise include changes to coastal wetlands and lowlands, increased coastal flooding and erosion, increased damage from floods and storms, and saltwater intrusion into estuaries and deltas (Mcleod et al., 2010). The high proportion of coastal populations in the Coral Triangle makes these communities especially vulnerable to sea level rise. Coastal communities, or those populations living within 100 kilometers of the coast, make up 61% of the population in Papua New Guinea, 96% in Indonesia, 98% in Malaysia, and 100% in the Philippines, Timor-Leste, and the Solomon Islands (Mcleod et al., 2010). The combination of sea level rise and extreme weather events, including tropical cyclones and monsoons, has the potential to catastrophically damage coastal communities. This includes not only the further degradation of their environment, but an increase in the number of lives lost due to flooding and other climate related events. The impacts of climate change, paired with the additional environmental threats posed by overfishing and pollution, will contribute to negative health outcomes and reduced economic output for the coastal communities of the Coral Triangle. Climate change adaptation and coastal management strategies are necessary for Coral Triangle communities to mitigate the consequences of climate change related threats.

Local and Regional Responses

Case Study: Malaysia

Malaysia is one of the most developed countries in the Coral Triangle region in terms of gross domestic product and socioeconomic status. It is also the third most populated of the six Coral Triangle countries, behind the Philippines and Indonesia, with approximately 30 million inhabitants. Malaysia is ranked highest of all Coral Triangle countries in the Human Development Index (HDI), a tool developed by the United Nations to measure achievement of several critical dimensions of human development. Malaysia is made up of two geographical areas – Peninsular Malaysia and East Malaysia – which together include about 4,600 kilometers (roughly 3000 miles) of coastline and 102,000 square kilometers of sea area (ADB, 2014a).  The majority of the population in Malaysia resides in coastal areas, and this number has only continued to expand in recent years (Salmah & Jammalluddin, 2010). Coastal communities living in Malaysia rely on its coral reefs for tourism revenue and marine resources (fish and other species) as a source of consumption and income. 

Malaysia has invested in the conservation of its coastal and marine resources at the national, sub-national, and local levels through a variety of regulations and initiatives. At the national level, the Government of Malaysia has enacted a series of laws and policies relating to environmental regulation. These include the National Parks Act of 1980, the Exclusive Economic Zone (EEZ) Act of 1984, the Fisheries Act of 1985 (updated in 1993), and the Wildlife Protection Act of 2010, among others. The national government also established the Malaysian Maritime Enforcement Agency (MMEA) in 2004 to enforce maritime-related laws. National-level policies surrounding biodiversity and resource management include a National Biodiversity Policy, National Environment Policy, and National Policy on Climate Change. State-level policies and laws tend to follow national-level guidance, although some states have more robust policies in place than others. Several national agencies are responsible for managing and enforcing conservation and coastal management efforts, including the Marine Park Department and the Department of Fisheries. The Government of Malaysia has also ratified several international agreements pertaining to marine resource management, including the United Nations Framework Convention on Climate Change and the Kyoto Protocol, among others (ADB, 2014a). Local and international NGOs are also working together with the national government to enact measures geared towards improving the sustainability of coastal and marine resources.

Projections based on the current coastal fishing situation in Malaysia predict reduced output due to overfishing. Despite the wide range of national and state level policies and regulations aimed at sustainable fishing and improved resource management, coordination of efforts and outreach to local stakeholders remain weak. Without local buy-in, meaningful change remains a challenge. Additionally, policies and regulations are typically enacted with isolated goals such as managing fisheries or protecting certain species. These unilateral efforts fail to consider the interrelation between various environmental threats. Additional threats to Malaysian coastal resources and communities include rapid urbanization, increased development, and increased tourism. Although these factors are important in the continued socioeconomic development of the country, the benefits are unevenly spread, with low income coastal residents seeing the least improvement.

Case Study: Solomon Islands

In contrast to Malaysia, the Solomon Islands are ranked poorly both in terms of human development and economic output. The islands, of which there are nearly 900 in total, are located in the easternmost stretch of the Coral Triangle region. The Solomon Islands are populated by less than 1 million people. Given their relatively small size, 100% of Solomon Island residents are considered to be coastal populations. The islands’ coastal waters contain a broad range of coral and marine species. As with Malaysia and many other Coral Triangle countries, fish is the largest source of protein for the local population (Dey et al., 2016). Continued environmental degradation and climate change will inevitably result in fish stock shortages and other changes to local fisheries, negatively impacting the local economy and health of the people. The Solomon Islands are also at an increased risk for damaging tropical cyclones, which typically do not occur in other areas of the Coral Triangle.

The Solomon Islands manage their marine resources through national level strategies and policy frameworks which include sustainability as a foundation for coral reef use. National level laws geared towards managing resources and promoting sustainability include the Fisheries Act, the Wildlife Protection and Management Act and the Environment Act, all passed in 1998, and the Protected Areas Act which became law in 2010. Despite these regulations at the national level, local compliance remains a challenge. These challenges are due to several factors, including the need for local populations to generate daily income, a poor enforcement structure, and knowledge gaps relating to science-based decision making (ADB, 2014b). The primary mechanism guiding the management of the coastal and marine resources in the Solomon Islands is their National Plan of Action (NPOA), which was developed under the regional Coral Triangle Initiative discussed further in the section below. Through this plan, the government works together with NGOs, development partners, and international donors to implement conservation, education, and public awareness activities at the national and local levels (ADB, 2014b).

Although gaps remain in the linkage between national level initiatives and local communities, there are some grassroots organizations committed to enacting change from the bottom up. One such group, the Kahua Association, is a local non-profit organization that aims to foster participatory development in which decisions are made based on the best interests of the collective whole with respect to the environment. The Association is comprised of local leaders and representatives from the women’s council, youth council, religious leaders and conservation and biodiversity experts.  Local groups including the Kahua Association have the potential to bridge the divide between national level policies and local needs and realities. These groups should be engaged at higher levels to translate policy into action.

Regional Response: The Coral Triangle Initiative on Coral Reefs, Fisheries, and Food Security

The Coral Triangle Initiative (CTI) on Coral Reefs, Fisheries, and Food Security is the most comprehensive regional effort to collectively manage coastal and marine resources within the Coral Triangle. The CTI was codified through a multilateral agreement between all six Coral Triangle countries in 2009, and provides a platform for coordinated responses to shared environmental threats. The CTI is funded in part by international donors and conservation groups including the ADB, the US Agency for International Development (USAID), and the Global Environment Facility, among others.

The CTI includes an agreed upon Regional Plan of Action along with individual National Plans of Action developed by each implementing country. The regional plan was finalized in 2010 and presents a roadmap of conservation efforts and policy goals for the six countries to aspire to over a ten-year period (Berdej, Andrachuk, & Armitage, 2015). The effort has been largely successful in catalyzing fruitful regional discussions and priority setting. The five main targets outlined in the regional plan are: priority seascapes are designated and effectively managed; an ecosystem approach to the management of fisheries and marine resources is fully applied; marine protected areas are established and effectively managed; climate change adaptation measures are achieved; and the status of threatened species is improving (Asian Development Bank, 2014).

While the CTI is an important step in the regional effort to conserve precious resources, mitigate environmental threats, and build resiliency, it also has its limitations. As the regional plan is a nonbinding agreement, it is constrained by the sovereignty of individual states to carry out (or not) agreed upon standards and approaches. Diverse countries with varying national priorities, economic interests, and local needs may not always be able to find common ground in what policies should be enacted or activities implemented. Additionally, as with national level initiatives in Malaysia and the Solomon Islands, CTI activities are often inadequately linked with local stakeholders. Without a regional enforcement mechanism that treats all offenders equally, many rule breakers lack the incentive to change environmentally damaging behaviors.


Conclusion

The Coral Triangle, made up of six countries and spanning hundreds of thousands of square kilometers in the western Pacific Ocean, is home to some of the most diverse and ecologically important coral reefs and marine species in the world. Urbanization and population growth have contributed to densely populated coastal communities that rely on coastal resources to make a living and sustain their families. Local threats to the sustainability of these coastal communities and the ecosystems they depend on include overfishing and destructive fishing practices along with marine and land based pollution. Global threats to the Coral Triangle region stemming from human induced climate change include rising sea levels, increased extreme weather events, and warming oceans. If left unchecked, these environmental hazards will have enormous consequences for the populations of the Coral Triangle region. Approaches to counter environmental threats to the Coral Triangle must be multidimensional, taking into consideration not only resource management but also the reduction of pollution and sustainable development. National governments should work in concert with local, regional, and global stakeholders to develop and scale context-appropriate approaches.


References

ADB. (2014a). State of the Coral Triangle: Malaysia.

ADB. (2014b). State of the Coral Triangle: Solomon Islands.

Asian Development Bank. (2014). Regional state of the Coral Triangle – Coral Triangle marine resources: Their status, economies, and management.

Berdej, S., Andrachuk, M., & Armitage, D. (2015). Conservation narratives and their implications in the Coral Triangle Initiative.

Burke, L., Reytar, K., Spalding, M., Perry, A., Knight, M., Kushner, B., … White, A. (2012). Reefs at risk revisited in the Coral Triangle. Washington, DC: World Resources Institute.

Dey, M. M., Gosh, K., Valmonte-Santos, R., Rosegrant, M. W., & Chen, O. L. (2016). Economic impact of climate change and climate change adaptation strategies for fisheries sector in Solomon Islands: Implication for food security. Marine Policy, 67, 171–178.

Huang, Y., & Coelho, V. R. (2017). Sustainability performance assessment focusing on coral reef protection by the tourism industry in the Coral Triangle region. Tourism Management, 59, 510–527.

Mcleod, E., Hinkel, J., Vafeidis, A. T., Nicholls, R. J., Harvey, N., & Salm, R. (2010). Sea-level rise vulnerability in the countries of the Coral Triangle. Sustainability Science, 5(2), 207–222.

Points, K. E. Y., & Robertson, D. (2017). ADB BRIEFS of Fisheries Development Strategies, 1(84), 1–10.

Pomeroy, R., Parks, J., Reaugh-Flower, K., Guidote, M., Govan, H., & Atkinson, S. (2015). Status and Priority Capacity Needs for Local Compliance and Community-Supported Enforcement of Marine Resource Rules and Regulations in the Coral Triangle Region. Coastal Management, 43(3), 301–328.


Salmah, Z., & Jammalluddin, S. A. (2010). National Policy Responses to Climate Change: Malaysian Experience. Retrieved from Jamalluddin SHAABAN/National Policy Responses to Climate Change – Malaysian Experience.pdf

Todd, P. A., Ong, X., & Chou, L. M. (2010). Impacts of pollution on marine life in Southeast Asia. Biodiversity and Conservation, 19(4), 1063–1082.

Valmonte-Santos, R., Rosegrant, M. W., & Dey, M. M. (2016). Fisheries sector under climate change in the coral triangle countries of Pacific Islands: Current status and policy issues. Marine Policy, 67, 148–155.

Weeks, R., Pressey, R. L., Wilson, J. R., Knight, M., Horigue, V., Abesamis, R. A., … Jompa, J. (2014). Ten things to get right for marine conservation planning in the Coral Triangle. F1000Research, (0), 1–20.

WWF. (2009). The Coral Triangle and climate change.

Gherkin and Pomegranate Cultivation


Horticulture is an important component of agriculture, accounting for a significant share of the Indian economy. Rising consumer incomes and changing lifestyles are creating bigger markets for high-value horticultural products in India and throughout the world. Among these, the most important high-value export products are fruits and vegetables. This study analyzes the comparative advantage and competitiveness of pomegranate and gherkin, which are important foreign exchange earners among the fruit and vegetable crops exported from India.

Primary data were collected from the Tumkur and Bijapur districts of Karnataka, India, and secondary data were collected from the concerned government institutions, APEDA, and exporters of fruits and vegetables. The Policy Analysis Matrix (PAM) was selected as the analytical tool to analyze the export competitiveness, comparative advantage, and degree of government intervention in the production and export of gherkin and pomegranate. Policy distortions were measured through the indicators of the PAM. The Garrett ranking technique was used to analyze the constraints in the production and export of the selected crops.
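The PAM indicators used in this study follow the standard Monke–Pearson definitions, in which each budget row records revenue, tradable-input costs, and domestic-factor costs, once at private (market) prices and once at social (shadow) prices. As an illustration only, the sketch below computes the indicators from a hypothetical per-hectare budget; the figures are invented for demonstration and are not data from this study.

```python
# Hypothetical per-hectare budget in PAM layout (illustrative values only,
# not data from this study). Keys: revenue, tradable-input cost,
# domestic-factor cost.
private = {"revenue": 100.0, "tradable": 40.0, "domestic": 30.0}  # market prices
social = {"revenue": 120.0, "tradable": 50.0, "domestic": 35.0}   # shadow prices

def pam_indicators(priv, soc):
    """Standard PAM indicators (Monke & Pearson A..H notation)."""
    A, B, C = priv["revenue"], priv["tradable"], priv["domestic"]
    E, F, G = soc["revenue"], soc["tradable"], soc["domestic"]
    return {
        "private_profit": A - B - C,   # D: profit at market prices
        "social_profit": E - F - G,    # H: profit at shadow prices
        "NPCO": A / E,                 # nominal protection on outputs
        "NPCI": B / F,                 # nominal protection on inputs
        "EPC": (A - B) / (E - F),      # < 1 => producers not protected
        "DRC": G / (E - F),            # < 1 => comparative advantage
        "PCR": C / (A - B),            # < 1 => privately competitive
    }

for name, value in pam_indicators(private, social).items():
    print(f"{name}: {value:.2f}")
```

With these invented numbers, EPC below one mirrors the study's finding that producers are not protected, while DRC and PCR below one indicate positive social and private profits, respectively.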

The EPC values of gherkin (0.50) and pomegranate (0.45) are less than one, indicating that producers are not protected through policy interventions. The DRC (0.27 and 0.28) and PCR (0.43 and 0.59) values of gherkin and pomegranate, respectively, are below one, implying positive social as well as private profits and indicating that India has a comparative and competitive advantage in their production. The Garrett ranking results for gherkin show that the non-availability of skilled labour and the lack of superior quality are the major constraints in production and export, respectively. For pomegranate, the non-availability of skilled labour, high incidence of pests and diseases, lack of transportation facilities, and high pesticide residues are the major constraints in production and export.
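In the Garrett ranking technique, each respondent's rank R for a constraint out of N constraints is first converted to a percent position, 100 × (R − 0.5) / N; the percent positions are then mapped to scores via Garrett's conversion table and averaged across respondents to rank the constraints. A minimal sketch of the percent-position step, with invented ranks (the table lookup and averaging are omitted):

```python
def percent_position(rank, n_items):
    """Garrett's percent position for an item ranked `rank` out of `n_items`."""
    return 100.0 * (rank - 0.5) / n_items

# Invented example: one respondent ranking four constraints.
for rank in range(1, 5):
    print(rank, percent_position(rank, 4))  # 12.5, 37.5, 62.5, 87.5
```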

The overall result shows that both the cultivation and the export of gherkin and pomegranate are economically profitable and efficient.

Keywords: Gherkin, Pomegranate, PAM, EPC, DRC

List of Acronyms

Variable Definition
APEDA Agricultural and Processed Food Products Export Development Authority
CIF Cost Insurance and Freight
Crores 10 million
DRC Domestic Resource Cost
EPC Effective Protection Coefficient
EU European Union
FAOSTAT Food and Agriculture Organization Statistics
FOB Free On Board
FYM Farm Yard Manure
ha Hectares
HEIA Horticulture Export Improvement Association
kg Kilogram
MHA Million Hectare
MT Million Tons
NHB National Horticulture Board
NPCI Nominal Protection Coefficient on Inputs
NPCO Nominal Protection Coefficient on Outputs
NPV Net Present Value
PAM Policy Analysis Matrix
PCR Private Cost Ratio
INR Indian Rupees
UAE United Arab Emirates
UK United Kingdom
UNCOMTRADE United Nations Commodity Trade Statistics
UNFAO United Nations Food and Agriculture Organization
USA United States of America

1. Introduction

1.1 Background

Indian agriculture is vested with the herculean responsibility of feeding more than one billion people. About 72% of India’s population lives in rural areas, and three-fourths of the rural population depends on agriculture and allied activities for their livelihoods. The present growth of agriculture in India is beset with problems; most importantly, agricultural growth slowed to 2.1% between 1998-99 and 2004-05, largely due to a decline in the food grain sector, which grew at merely 0.6%. Given the high dependence of the poor on agriculture, the stagnation in this sector is currently threatening to stall poverty reduction in India (Reddy, 2007).

Given the present scenario, the immediate question to be addressed is how agricultural growth can be accelerated. One answer is to diversify the consumption pattern towards high-value agricultural commodities in general, and high-value horticultural products such as fruits and vegetables in particular. In recent years there has been a great deal of interest among policymakers and trade analysts in the role of horticultural products as a principal means of agricultural diversification and foreign exchange earnings in developing countries. Horticultural products have a high income elasticity of demand: as incomes rise, demand rises rapidly, especially in middle- and high-income developing countries. As people become more conscious of health and nutrition, there is a paradigm shift from high-fat, high-cholesterol foods such as meat and livestock products to low-fat and low-cholesterol foods such as fruits and vegetables. As a result, the world has turned its attention towards high-value agricultural products. Hence, it is crucial to be competitive in the world market to reap the potential gains of the growing world demand for horticultural products such as fruits and vegetables. The present study therefore attempts to evaluate the consequences of international trade and the competitiveness of Indian horticulture with special reference to pomegranate and gherkin, two crops that have recently shown high export potential and earned good foreign exchange.


1.2 Studies on export of fruits and vegetables

There are many studies related to the export of horticultural crops, especially fruits and vegetables, from India. Chiniwar (2009) explained the numerous opportunities and challenges of the horticulture sector and observed that there is tremendous potential for Indian pomegranates in the global market. He examined the growth of pomegranate exports from India and found it moderate in comparison to the potential for such exports. Tamanna et al. (1999) examined the export potential of selected fruits from India by using the Nominal Protection Coefficient (NPC); the results indicate that the exports of Indian fruits are highly competitive in the world market. Nalini et al. (2008) observed that India made tremendous progress in the export of cucumber and gherkin products over the 15 years from 1990 to 2005. Exports increased about 129-fold, with an impressive annual compound growth rate of 37.46 percent, against only 4.38 percent in the world market. An increasing and high value of Revealed Comparative Advantage (RCA) and a positive and increasing value of Revealed Symmetric Comparative Advantage (RSCA) indicated high export potential. A one percent increase in the volume of international trade in cucumber and gherkin may increase the demand from India by 5.96 percent, indicating that India is highly competitive in the export of cucumber and gherkin and has ample scope to further increase its exports. Gulati et al. (1994) analyzed the export competitiveness of selected agricultural commodities and identified the constraints in the export of fresh fruits, vegetables, and processed fruits and vegetables.
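The RCA and RSCA measures cited above follow Balassa's standard formulas: RCA compares a good's share in a country's exports with its share in world exports, and RSCA rescales RCA to the interval [-1, 1]. A brief sketch with invented trade values (not figures from the studies cited):

```python
def rca(x_ij, x_i, x_wj, x_w):
    """Balassa revealed comparative advantage:
    (country's exports of good j / country's total exports)
    divided by (world exports of good j / world total exports)."""
    return (x_ij / x_i) / (x_wj / x_w)

def rsca(rca_value):
    """Revealed symmetric comparative advantage, mapped to [-1, 1];
    positive values indicate comparative advantage."""
    return (rca_value - 1.0) / (rca_value + 1.0)

# Invented example: good j is 5% of the country's exports vs 1% globally.
r = rca(x_ij=50.0, x_i=1000.0, x_wj=2000.0, x_w=200000.0)
print(round(r, 2), round(rsca(r), 2))  # → 5.0 0.67
```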

The above studies deal with the export performance, growth, and constraints of fruits and vegetables. Most of them focus on export performance alone; there are no studies on export policy, especially on efficiency and comparative advantage in the world market. Therefore, the aim of the present study is to analyze the export competitiveness of pomegranate and gherkin using the Policy Analysis Matrix (PAM). The study is highly relevant because competitiveness has become a key issue in the international market for the export development of fruits and vegetables.
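The PAM compares a crop budget valued at private (market) prices with the same budget valued at social (world-reference) prices. Its standard summary ratios can be sketched as follows; the budget figures below are invented for illustration and are not the study's data:

```python
def pam_indicators(rev_p, trad_p, dom_p, rev_s, trad_s, dom_s):
    """Summary ratios of a Policy Analysis Matrix.

    rev, trad, dom = revenues, tradable-input costs and domestic-factor
    costs, each valued at private (_p) and social (_s) prices."""
    return {
        "private_profit": rev_p - trad_p - dom_p,
        "social_profit":  rev_s - trad_s - dom_s,
        # Nominal protection coefficient: output price distortion
        "NPC": rev_p / rev_s,
        # Effective protection coefficient: value-added distortion
        "EPC": (rev_p - trad_p) / (rev_s - trad_s),
        # Domestic resource cost: < 1 signals comparative advantage
        "DRC": dom_s / (rev_s - trad_s),
    }

# Hypothetical budget (per tonne), for illustration only
result = pam_indicators(rev_p=100.0, trad_p=40.0, dom_p=30.0,
                        rev_s=120.0, trad_s=50.0, dom_s=35.0)
```

Here DRC = 35/70 = 0.5 < 1, so the hypothetical crop would show comparative advantage in export.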


1.3 Research objectives

In the present study, the export competitiveness of high value horticultural crops of India is analyzed. More precisely, the study analyzes the competitiveness of gherkin and pomegranate in the world market and compares the advantages and constraints in the export of these crops, with the following objectives and proposed hypothesis, which will be tested on the basis of the results.

Specific objectives

    1. To assess the export competitiveness of gherkin and pomegranate

    2. To examine the production and export constraints of gherkin and pomegranate

Hypothesis

    – Exports of gherkin and pomegranate are competitive in international markets


    1.4 Structure of the thesis

    The study presents the results of an analysis of the export competitiveness of horticultural crops in India. It analyzes the opportunities for, and the constraints in, the production and export of gherkins and pomegranates from India, and further examines the competitiveness and comparative advantage of these two crops in the international market. The details of this analysis are discussed in the following sections of the study.

    The first section of the thesis gives an introduction and background on the nature of the problem and on the dynamics and underlying causes of the diversification of consumption towards high value horticultural commodities. Further, a brief overview of existing studies on Indian agricultural and horticultural growth, export performance, and constraints is given. The research question is broken down into specific objectives, and a hypothesis is put forth.

    The second section of the thesis gives a general overview of the fruit and vegetable scenario in the world as well as in India. It also explains the importance of the selected fruit and vegetable in terms of production, export and foreign exchange earnings, which helps in understanding the export competitiveness of these crops from India.

    The third section presents the methodological framework, covering the concepts of competitiveness of high value horticultural crops from India and focusing on the application of the PAM model. In the same chapter, the current literature and the major definitions of competitiveness and comparative advantage are reviewed. The proposed model is then used as a tool to address the research objectives, followed by a description of the data.

    The fourth section highlights the findings of the research from the proposed model, using the collected information on pomegranate and gherkin cultivation and their export. Finally, the proposed hypothesis is tested and the results are interpreted.

    The final section summarizes the whole research findings and provides meaningful policy implications.


    2. Scenario of fruits and vegetables in India and the world


    2.1 World scenario of fruits and vegetables

    2.1.1 High value agricultural production

    Rising consumer incomes and changing lifestyles are creating bigger markets for high value agricultural products throughout the world. Among these, the most important high value export sector is horticulture, especially fruits and vegetables. The growing markets for these products present an opportunity for the farmers of developing countries to diversify their production out of staple grains and raise their incomes. Annual growth rates on the order of 8 to 10 percent in high value agricultural products are a promising development (Fig. 1), as the production, processing and marketing of these products create much-needed employment in rural areas. The rapid growth in high value exports has been part of a fundamental and broad-reaching trend towards globalization of the agro-food system. Dietary changes, trade reform and technical changes in the food industry have contributed to the growth of high value agriculture and trade (World Bank, 2008).

    2.1.2 World production of fruit and vegetables

    The production of fruit and vegetables worldwide grew by 30 percent between 1980 and 1990 and by 56 percent between 1990 and 2003. Much of this growth occurred in China, where production grew by 134 percent in the 1980s and climbed to 200 percent by the 1990s (UNFAO 2003). At present, world production of fruits and vegetables has reached 512 MT and 946.7 MT respectively (Tables 1 & 5).

    Vegetables: China is currently the world’s largest producer of vegetables, with a production of 448.9 MT from an area of 23.9 million ha (47 percent of world production) (Table 1), whereas India is in second position with a production of 125.8 MT from 7.8 million ha (13 percent), followed by the USA (4%), Turkey (3%), etc. (Indian Horticulture Database, 2008) (Fig. 2). Among the vegetable crops, gherkin is considered for the study, as it is one of the most important vegetables worldwide. Table 2 shows the international production of cucumber and gherkin in different parts of the world during 2007-08. China, Turkey, Iran, Russia and the USA are the world’s largest producers of cucumber and gherkin (Table 2), whereas India ranks 34th in production but stands 1st in the export of provisionally preserved cucumber and gherkin (Table 4) and 55th in the export of fresh cucumber and gherkin (Table 3).

    Table 1 Major vegetable producing countries in the world (2007-08)

    Country Area(000 ha) Production(000 MT) Productivity(MT/ha)
    China 23936 448983 19
    India 7803 125887 16
    USA 1333 38075 29
    Turkey 996 24454 25
    Russia 970 16516 17
    Egypt 598 16041 27
    Iran 641 15993 25
    Italy 528 13587 26
    Spain 379 12676 33
    Japan 433 11938 28
    Others 16957 222625 13
    Total 54573 946774  

    Source: Indian Horticulture Database (2008)


    Table 2 International production of cucumber and gherkin (2007-08)

    Country Production (MT) Share (%)
    China 28062000 62.9
    Turkey 1875919 4.21
    Iran, Islamic republic 1720000 3.86
    Russian federation 1410000 3.16
    USA 920000 2.06
    Ukraine 775000 1.74
    Japan 634000 1.42
    Egypt 615000 1.38
    Indonesia 600000 1.34
    Spain 510000 1.14
    Mexico 500000 1.12
    Poland 492000 1.10
    Iraq 480000 1.08
    Netherlands 445000 1.00
    India 120000 0.27
    Others 5452024 12.22
    World 44610943 100

    Source: Author, FAO (2008)
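The share column of Table 2 is simply each country's production over the world total. A quick cross-check in code, using three of the rows above:

```python
# Production figures (tonnes) taken from Table 2
production = {"China": 28_062_000, "Turkey": 1_875_919, "India": 120_000}
world_total = 44_610_943

# Percentage shares, rounded to two decimals as in the table
shares = {country: round(100.0 * qty / world_total, 2)
          for country, qty in production.items()}
```

The computed shares reproduce the table's column to within rounding, e.g. India at 0.27 percent.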


    Table 3 Major exporting countries of fresh cucumber and gherkin (2007)

    Country Value (USD) Share (%)
    Spain 557088 30.13
    Mexico 437369 23.65
    Netherlands 419824 22.70
    Canada 81707 4.42
    Germany 44437 2.40
    Turkey 40300 2.18
    Greece 38920 2.10
    Iran 27768 1.50
    Belgium 25361 1.37
    USA 16313 0.88
    India 235 0.01
    Others 159815 8.64
    World 1849137  

    Source: Data from Agricultural and Processed food products Export
    development Authority (APEDA), India.


    Table 4 Major exporting countries of preserved cucumber and gherkin

    Country Value (USD) Share (%)
    India 33476 49.39
    China 16754 24.72
    Turkey 4193 6.19
    Netherlands 3397 5.01
    Belgium 2670 3.94
    Vietnam 40300 2.11
    Sri Lanka 1003 1.48
    Germany 925 1.37
    Spain 596 0.88
    USA 992 0.87
    World 65040  

    Source: U.N COMTRADE (2007)

    Fruits: World fruit production has risen steadily over the past four years (see Appendix 3). Table 5 shows the largest fresh fruit producers during 2007-08. China is the world’s largest fruit producer, accounting for 19 percent of world production. India ranks second, accounting for 12 percent, followed by Brazil, where 7 percent of the world’s fruit was grown (Figure 3). Production is increasing far faster in China than in the other top producing countries: production growth averaged almost 6 percent per year in China, against 2.73 percent per year in India. The EU experienced the lowest annual growth rate, 0.89 percent, while production in the USA and Brazil has been relatively constant over the period, with average annual growth rates of 0.61 percent for the former and 0.34 percent for the latter. Other countries, Mexico, South Africa and Chile, have experienced slightly higher average annual production growth rates of 2.12, 2.56 and 1.3 percent respectively over the same period (FAOSTAT 2008). Among all fruits, pomegranate is considered for the present study. Figure 4 shows that India is the world’s largest producer of pomegranate with 900 thousand tonnes (36%), followed by Iran (31%), Iraq (3%), the USA (4%), etc. Over the years India’s pomegranate exports have grown steadily, reaching US$13,741 (INR 0.61 million) in 2007-08, a share of 1.2 percent of world exports (Table 6).


    Table 5 Major fruit producing countries in the world (2007-08)

    Country Area(000 ha) Production(000 MT) Productivity(MT/ha)
    China 9587 94418 10
    India 5775 63503 11
    Brazil 1777 36818 21
    USA 1168 24962 21
    Italy 1246 17891 14
    Spain 1835 15293 8
    Mexico 1100 15041 14
    Turkey 1049 12390 12
    Iran 1256 12102 10
    Indonesia 846 11615 14
    Others 22841 208036 9
    Total 48481 512070  

    Source: FAO & Indian Horticulture Database (2008)


    Table 6 Pomegranate export from different parts of the world (2007)

    Country Value (USD) Share (%)
    Thailand 172781 15.06
    Spain 138911 12.11
    Vietnam 84532 7.37
    Mexico 67739 5.91
    Netherlands 63858 5.57
    Madagascar 53822 4.69
    Israel 45219 3.94
    Uzbekistan 44128 3.85
    Colombia 40459 3.53
    Azerbaijan 37977 3.31
    France 36975 3.22
    Germany 17750 1.55
    India 13741 1.20
    Others 309565 27.45
    World 1127457 100

    Source: Agricultural and Processed Food Products Export
    Development Authority (APEDA), India


    2.2 Scenario of fruits and vegetables in India.

    Horticulture is an important component of agriculture, accounting for a very significant share of the Indian economy. It has been identified as one of the potential sectors for harnessing India’s competitive advantage in international trade, and it helps India move towards an overall target of 1 percent or more of world trade. Besides contributing to the country’s self-sufficiency over the last few decades, horticulture has played a very significant role in earning foreign exchange through exports.

    Horticultural crops cover approximately 8.5 percent of the total cropped area (20 million ha) (Table 7), with an annual production of 207 MT and a productivity of 10.3 MT per hectare during 2007-08 (FAO & Indian Horticulture Database 2008). Among the horticultural crops, fruits and vegetables play an important role, and their exports have increased over the years (Table 8): from INR 13,637.13 million in 2004-05 to INR 24,116.57 million in 2006-07 (APEDA, 2008).

    Table 7 Area, production and productivity of horticultural crops in India

    Year Area(million ha) Production(million tonnes) Productivity(MT/ha)

    2001-02 16.6 145.8 8.8
    2002-03 16.3 144.4 8.9
    2003-04 19.2 153.3 21
    2004-05 21.1 170.8 8.1
    2005-06 18.7 182.8 9.8
    2006-07 19.4 191.8 9.9
    2007-08 20.1 207.0 10.3

    Source: FAO & Indian Horticulture Database (2008)


    Table 8 Export of horticultural produce in India

    Products 2004-05 2005-06 2006-07
    Quantity Value Quantity Value Quantity Value
    Floriculture & seeds 34496 2871 42659 3922 50048 7713
    Fresh Fruits & vegetables 1296530 13637 1465040 16587 1983873 24117
    Processed fruits & vegetables 325293 9614 501826 13595 549949 17316
    Total 1656319 26122 2009525 34104 2583870 49146

    Source: APEDA, India Note: Qty: MT, value : Million INR
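The Total row of Table 8 can be cross-checked by summing the three product rows. A short sketch (quantities in MT, values in million INR, as in the table):

```python
# (quantity, value) per year for the three product rows of Table 8,
# ordered 2004-05, 2005-06, 2006-07
rows = {
    "floriculture_seeds":   [(34496, 2871), (42659, 3922), (50048, 7713)],
    "fresh_fruits_veg":     [(1296530, 13637), (1465040, 16587), (1983873, 24117)],
    "processed_fruits_veg": [(325293, 9614), (501826, 13595), (549949, 17316)],
}

totals = []
for year_idx in range(3):
    qty = sum(r[year_idx][0] for r in rows.values())
    val = sum(r[year_idx][1] for r in rows.values())
    totals.append((qty, val))
```

These sums provide a consistency check on the printed Total row of the table.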

    Vegetables: In vegetable production, India is next only to China, with a production of 125.8 million tonnes from 7.8 million hectares, a 13 percent share of world production (Table 9). The per capita consumption of vegetables is 120 grams per day (APEDA 2009). India’s exports of fresh vegetables increased from INR 433.14 crore in 2006-07 to INR 489.49 crore in 2007-08. The major export destinations for these vegetables are the UAE, UK, Nepal, and Saudi Arabia (APEDA, 2009).


    Table 9 Area, production and productivity of vegetable crops in India

    Year Area(000 ha) Production(000 MT) Productivity(MT/ha)

    2001-02 6156 88622 14.4
    2002-03 6092 84815 13.9
    2003-04 6082 88334 14.5
    2004-05 6744 101246 15.0
    2005-06 7213 111399 15.4
    2006-07 7584 115011 15.2
    2007-08 7803 125887 16.1

    Source: FAO & Indian Horticulture Database (2008)

    Among all vegetables, gherkin is considered for the present study for the following reasons. India’s gherkin exports have increased steadily since 1997-98: 24,490 tonnes worth INR 50.27 crore, as against 35,242 tonnes worth INR 69.86 crore in 1999-2000 (Venkatesh, 2003). In recent years gherkin exports have risen to 61.5 thousand tonnes, with a trade value of INR 1465.5 million during 2007-08 (UNFAO Export Data, 2009).

    2.2.1 Production and export importance of gherkin in India

    Gherkin has been selected for the present study. It is regarded as an HEIA crop, especially as a hybrid crop. Gherkin cultivation and processing started in India in the early 1990s, and the crop is presently cultivated over 19,500 acres in the three southern states of Karnataka, Tamil Nadu and Andhra Pradesh. Although gherkin can grow in virtually any part of the country, the ideal growing conditions prevail in these three states, where the growing season extends throughout the year. The crop requires adequate water, temperatures between 15 and 36 degrees centigrade and the right type of soil, and takes 85 days to reach the required maturity level. Productivity is approximately four to five tonnes per acre, and the best months are February to March followed by June to August. India is a major exporter of provisionally preserved gherkin. Tables 10 & 11 show cucumber and gherkin exports from India. Within India, Karnataka stands first in export; its cultivation has grown steadily since 2001-02, accounting for a value of INR 1,200 million. During 2006-07, gherkins worth INR 3,133 million were exported (Table 12).


    Table 10 Cucumber and gherkin exports from India (2007-08)

    Country Value (Million INR) Quantity (Tonnes) Share (%)
    UAE 1.96 142.75 17.55
    Bangladesh 1.92 290.00 17.17
    Netherlands 1.78 93.10 15.92
    Russia 1.66 83.50 14.91
    Estonia 0.80 43.94 7.17
    Nepal 0.75 74.42 6.75
    Oman 0.75 70.00 6.74
    Spain 0.55 31.82 4.95
    France 0.47 20.21 4.27
    Others 0.51 26.42 4.56
    Total 11.20 876.18 100


Earthquake Simulation for Buildings


An earthquake is an independent natural phenomenon of ground vibration which becomes dangerous mainly when it is considered in relation to structures. Earthquakes can be so weak that they pass unnoticed, but they can also be strong enough to cause serious damage to buildings, leading to injuries or even loss of human life. In order to avoid structural damage, legislation sets conditions on building design. For that purpose, Eurocode 8 has been established in European countries; it sets out the appropriate criteria and measures for the design of buildings for earthquake resistance and suggests four different methods of analysis. In this project the response of eight buildings under seismic excitation is examined. First, the case of four buildings (1, 2, 3 and 4 storeys) in which all the storeys are identical is examined. Then the case of four buildings (again 1-4 storeys) is examined in which, as the number of storeys increases, the mass, the stiffness and the height of each floor decrease. Both the lateral force method of analysis and the modal response spectrum analysis are used, as recommended by EC8, to calculate the inter-storey drifts, the total shear forces and the overturning moments at the base of each building. The results are plotted and compared so that useful conclusions can be drawn.

1. Introduction

One of the most frightening and destructive phenomena of nature is a severe earthquake and its terrible after-effects, especially when they are associated with structures. An earthquake is a sudden movement of the Earth caused by the abrupt release of strain that has accumulated over a long time. Earthquake intensity and magnitude are the parameters most commonly used to describe and compare different earthquake events.

In recent years, increasing attention has been given to the design of buildings for earthquake resistance.

Specific legislation has been established to make structures able to resist seismic excitation. In Europe, Eurocode 8 explains how to make buildings able to resist earthquakes and recommends the use of linear and non-linear methods for their seismic design.

Simple structures can be modelled either as equivalent single degree of freedom systems (SDOF) or as a combination of SDOF systems.

In this project, 8 different buildings, varying either in the number of storeys or in their characteristics, are simulated as combinations of SDOF systems, for which the mode shapes and the corresponding eigenfrequencies and periods are calculated. The fundamental frequency is then obtained for each case, and the design spectrum together with the lateral force method is used to obtain the base shear forces and the overturning moments.

2. Literature review

2.1 Introduction to earthquake engineering

Definition and genesis of earthquakes

The lithosphere is the solid part of the Earth, comprising the crust and the uppermost mantle. A sudden movement of the earth’s lithosphere is called an earthquake (technical name: seism).

Fractures in the Earth’s crust where sections of rock have slipped past each other are called faults. Most earthquakes occur along faults. Generally, earthquakes are caused by the sudden release of built-up stress within rocks along geologic faults or by the movement of magma in volcanic areas.

The theory of plate tectonics provides geology with a comprehensive theory that explains “how the Earth works.” The theory states that Earth’s outermost layer, the lithosphere, is broken into 7 large, rigid pieces called plates: the African, North American, South American, Australian- Indian, Eurasian, Antarctic, and Pacific plates. Several subcontinental plates also exist, including the Caribbean, Arabian, Nazca, Philippines and Cocos plates.

Boundaries of tectonic plates are found at the edges of the lithospheric plates and can take various forms, depending on the nature of the relative movements. By their distinct motions, three main types can be distinguished: subduction zones (or trenches), spreading ridges (or spreading rifts) and transform faults, corresponding to convergent, divergent and conservative boundaries respectively.

At subduction zone boundaries, plates move towards each other and one plate subducts underneath the other; that is, one plate overrides the other, forcing it into the mantle beneath.

The opposite form of movement takes place at spreading ridge boundaries. At these boundaries, two plates move away from one another. As the two move apart, molten rock is allowed to rise from the mantle to the surface and cool down to form part of the plates. This, in turn, causes the growth of oceanic crust on either side of the vents. As the plates continue to move, and more crust is formed, the ocean basin expands and a ridge system is created. Divergent boundaries are responsible in part for driving the motion of the plates.

At transform fault boundaries, plate material is neither created nor destroyed at these boundaries, but rather plates slide past each other. Transform faults are mainly associated with spreading ridges, as they are usually formed by surface movement due to perpendicular spreading ridges on either side.

Earthquake Location

When an earthquake occurs, one of the first questions is “where was it?”. An earthquake’s location may tell us what fault it was on and where the possible damage most likely occurred. The hypocentre of an earthquake is its location in three dimensions: latitude, longitude, and depth. The hypocentre (literally meaning: ‘below the center’ from the Greek υπÏŒκεντρον), or focus of the earthquake, refers to the point at which the rupture initiates and the first seismic wave is released.

When an earthquake is triggered, rupture occurs over a large area of the fault plane.

The epicentre is the point on the earth’s surface directly above the origin of the earthquake, that is, the place on the surface of the earth beneath which the earthquake rupture originates, often given in degrees of latitude (north-south) and longitude (east-west). The epicentre is vertically above the hypocentre, and the distance between the two points is the focal depth. The location of any station or observation can be described relative to the origin of the earthquake in terms of the epicentral or hypocentral distance.

Propagation of seismic waves

Seismic waves are the energy generated by a sudden breaking of rock within the earth or an artificial explosion that travels through the earth and is recorded on seismographs. There are several different kinds of seismic waves, and they all move in different ways. The two most important types of seismic waves are body waves and surface waves. Body waves travel deep within the earth and surface waves travel near the surface of the earth.

Body waves:

There are two types of body waves: P-waves (also pressure waves) and S-waves (also shear waves).

P-waves travel through the Earth as longitudinal waves whose compressions and rarefactions resemble those of a sound wave. The name P-wave comes from the fact that this is the fastest kind of seismic wave and, consequently, the first or ‘Primary’ wave to be detected at a seismograph. Their speed depends on the kind of rock and its depth; usually they travel at speeds between 1.5 and 8 kilometres per second in the Earth’s crust. P-waves are also known as compressional waves because of the pushing and pulling they do: they shake the ground in the direction in which they propagate, while S-waves shake it perpendicularly, or transverse, to the direction of propagation. The P-wave can move through solids, liquids or gases. Sometimes animals can hear the P-waves of an earthquake.

S-waves travel more slowly, usually at 60% to 70% of the speed of P-waves. The name S-wave comes from the fact that these slower waves arrive ‘Secondary’, after the P-wave, at any observation point. S-waves are transverse or shear waves, so particles move in a direction perpendicular to that of wave propagation. Depending on whether this direction lies in a vertical or horizontal plane, S-waves are subcategorized into SV- and SH-waves, respectively. Because liquids and gases have no resistance to shear and cannot sustain a shear wave, S-waves travel only through solid materials. The Earth’s outer core is believed to be liquid because S-waves disappear at the mantle-core boundary, while P-waves do not.
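Because P-waves outrun S-waves, the lag between the two arrivals at a station gives a quick estimate of distance to the source. A minimal sketch, assuming straight ray paths and uniform crustal velocities; the 6.0 and 3.5 km/s defaults are typical textbook crustal values, not figures from this text:

```python
def epicentral_distance_km(sp_lag_s, vp=6.0, vs=3.5):
    """Distance implied by an S-minus-P arrival lag (seconds).

    From d/vs - d/vp = lag it follows that
    d = lag * vp * vs / (vp - vs).
    """
    return sp_lag_s * vp * vs / (vp - vs)
```

With these default velocities, every second of S−P lag corresponds to 8.4 km of distance; locating the source itself requires such distances from at least three stations.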


Surface waves:

Surface waves travel, as the name indicates, near the earth’s surface. Their amplitudes decrease approximately exponentially with depth. Motion in surface waves is usually larger than in body waves, so surface waves tend to cause more damage. They are the slowest and by far the most destructive of seismic waves, especially at distances far from the epicentre. Surface waves are divided into Rayleigh waves and Love waves.

Rayleigh waves, also known as “ground roll”, are the result of incident P and SV plane waves interacting at the free surface and travelling parallel to that surface. Rayleigh waves (or R-waves) are named for John Strutt, Lord Rayleigh, who mathematically predicted the existence of this kind of wave in 1885, and they are an important kind of surface wave. Most of the shaking felt from an earthquake is due to the R-wave, which can be much larger than the other waves. In Rayleigh waves the soil particles move vertically in circular or elliptical paths, just as a wave rolls across a lake or an ocean. Because Rayleigh-wave particle motion occurs only in the vertical plane, these waves are most commonly found on the vertical component of seismograms.

The Rayleigh equation is:

Love waves (also named Q waves) are surface seismic waves that cause horizontal shifting of the earth during an earthquake. They move the ground from side to side in a horizontal plane, at right angles to the direction of propagation. Love waves are named for A.E.H. Love, a British mathematician who worked out the mathematical model for this kind of wave in 1911. They result from the interaction of SH-waves with the layered structure near the surface. They travel with a slower velocity than P- or S-waves, but faster than Rayleigh waves, and their speed depends on the frequency of oscillation.

Earthquake size:

Earthquake measurement is not a simple problem, and it is hampered by many factors. The size of an earthquake can be quantified in various ways. The intensity and the magnitude of an earthquake are terms that were developed in an attempt to evaluate the earthquake phenomenon, and they are the most commonly used measures of the severity of an earthquake.

Earthquake intensity:

Intensity is based on the observed effects of ground shaking on people, buildings, and natural features. It varies from place to place within the disturbed region depending on the location of the observer with respect to the earthquake epicenter.

Earthquake magnitude:

The magnitude is the most often cited measure of an earthquake’s size.

The most common method of describing the size of an earthquake is the Richter magnitude scale, ML. This scale is based on the observation that, if the logarithms of the maximum displacement amplitudes recorded by seismographs located at various distances from the epicentre are plotted on the same diagram, and this is repeated for several earthquakes with the same epicentre, the resulting curves are parallel to each other.

This means that if one of these earthquakes is taken as the basis, the coordinate difference between that earthquake and every other earthquake measures the magnitude of the earthquake at the epicentre. Richter defined as a zero-magnitude earthquake one which is recorded with an amplitude of 1 μm at a distance of 100 km. The local magnitude ML of an earthquake is therefore based on the maximum trace amplitude A and can be estimated from the relation:

ML= log A – log A’ (3)

where A’ is the amplitude of the zero-magnitude earthquake (ML = 0).

The Richter magnitude scale can only be used when seismographs are within 600 km of the earthquake. For greater distances, other magnitude scales have been defined. The most current scale is the moment magnitude scale MW, which can be used for a wide range of magnitudes and distances.
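Relation (3) is simply a base-10 logarithm of an amplitude ratio, which makes each whole magnitude unit a tenfold increase in trace amplitude. A minimal sketch (the function name is mine):

```python
import math

def local_magnitude(amplitude_um, ref_amplitude_um=1.0):
    """Richter local magnitude ML = log A - log A', where A' is the
    1 um amplitude of the zero-magnitude reference event at 100 km."""
    return math.log10(amplitude_um) - math.log10(ref_amplitude_um)
```

A recorded amplitude of 10,000 μm at the reference distance would thus correspond to ML = 4.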

Two main categories of instruments are used for the quantitative evaluation of the earthquake phenomenon: seismographs, which record the displacement of the ground as a function of time, and accelerographs (or accelerometers), which record the acceleration of the ground as a function of time, producing accelerograms. Figure X shows the accelerogram of the 1940 El Centro earthquake.

For every earthquake accelerogram, elastic or linear response spectrum diagrams can be calculated. The response spectrum of an earthquake is a diagram of the peak values of any of the response parameters (displacement, acceleration or velocity) as a function of the natural vibration period T of an SDOF system subjected to the same seismic input. All these parameters can be plotted together in one diagram, called the tripartite plot (also known as “four coordinate paper”).
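Such a spectrum is built by solving the SDOF equation of motion once per natural period and keeping the peak response. A sketch using the average-acceleration Newmark scheme on a synthetic one-cycle sine pulse; the pulse and all numerical values are illustrative, not the El Centro record:

```python
import numpy as np

def peak_displacement(ag, dt, period, xi=0.05):
    """Peak relative displacement of a unit-mass SDOF oscillator under
    ground acceleration ag (m/s^2), by Newmark average acceleration."""
    wn = 2.0 * np.pi / period
    c, k = 2.0 * xi * wn, wn ** 2           # damping and stiffness (m = 1)
    u, v = 0.0, 0.0
    a = -ag[0] - c * v - k * u              # initial acceleration
    kh = k + 2.0 * c / dt + 4.0 / dt ** 2   # effective stiffness
    umax = 0.0
    for i in range(1, len(ag)):
        dp = -(ag[i] - ag[i - 1]) + (4.0 / dt + 2.0 * c) * v + 2.0 * a
        du = dp / kh                         # displacement increment
        dv = 2.0 * du / dt - 2.0 * v         # velocity increment
        da = 4.0 * (du - dt * v) / dt ** 2 - 2.0 * a
        u, v, a = u + du, v + dv, a + da
        umax = max(umax, abs(u))
    return umax

# Synthetic ground motion: one-cycle 2 Hz sine pulse, then quiescence
dt = 0.005
t = np.arange(0.0, 4.0, dt)
ag = np.where(t < 0.5, np.sin(2.0 * np.pi * 2.0 * t), 0.0)

# Displacement response spectrum ordinates over a range of periods
periods = [0.1, 0.2, 0.5, 1.0, 2.0]
Sd = [peak_displacement(ag, dt, T) for T in periods]
```

The pseudo-acceleration ordinate then follows as Sa = ωn² Sd for each period, which is what the design spectra of EC8 idealise.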

2.2 Earthquake and Structures simulation

2.2.1 Equation of motion of SDOF system


Vibration is the periodic motion or oscillation of an elastic body or a medium whose state of equilibrium has been disturbed. Vibration can be classified as either free or forced. A structure is said to be in a state of free vibration when it is disturbed from its static equilibrium by being given a small displacement or deformation and is then released and allowed to vibrate without any external dynamic excitation.

The number of degrees of freedom (DOF) is the number of displacements needed to define the displaced position of the masses relative to their original position. Simple structures can be idealised as a system with a lumped mass m supported by a massless structure with stiffness k. It is assumed that energy is dissipated through a viscous damper with damping coefficient c. Only one displacement variable is required to specify the position of the mass in such a system, so it is called a Single Degree of Freedom (SDOF) system.

Undamped Free Vibration of SDOF systems

If there is no damping or resistance in the system, there will be no reduction in the amplitude of the oscillation, and theoretically the system will vibrate forever. Such a system is called undamped and is represented in the figure below:

By taking into consideration the inertia force fin and the elastic spring force fs the equation of the motion is given by:

fin + fs = 0 → mu″ + ku = 0 (4)

Considering the initial conditions u(0) and u′(0), where u(0) is the displacement and u′(0) is the velocity at time zero, equation (4) has the general solution:

u(t) = u(0) cosωnt + (u′(0)/ωn) sinωnt (5)

where ωn is the natural frequency of the system and is given by,

ωn = √(k/m) (6)

The natural period and the natural frequency are then defined by:

Tn = 2π/ωn (7) fn = 1/Tn = ωn/2π (8)
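Equations (6)–(8) are easy to verify numerically. A minimal sketch with an invented mass and stiffness, chosen so that the system oscillates exactly once per second:

```python
import math

def natural_properties(mass, stiffness):
    """Return (wn, Tn, fn) of an undamped SDOF system, eqs (6)-(8):
    wn = sqrt(k/m), Tn = 2*pi/wn, fn = 1/Tn."""
    wn = math.sqrt(stiffness / mass)
    Tn = 2.0 * math.pi / wn
    fn = 1.0 / Tn
    return wn, Tn, fn

# A 1 kg mass on a spring of stiffness 4*pi^2 N/m gives Tn = 1 s
wn, Tn, fn = natural_properties(1.0, 4.0 * math.pi ** 2)
```

Note that quadrupling the stiffness (or quartering the mass) doubles ωn and halves the period.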

Viscously damped Free Vibration of SDOF systems

The equation of motion of such a system can be developed from its free body diagram below:

Considering the inertia force fin, the elastic spring force fs and the damping force fD, the equation of the motion is given by:

mu″ + cu′ + ku = 0 (9)

Dividing by m the above equation gives:

u″ + 2ξωnu′ + ωn²u = 0 (10)

where ξ is the damping ratio, given by:

ξ = c/Cc (11)

and Cc is the critical damping coefficient, given by:

Cc = 2mωn (12)

* If ξ > 1 or c > Cc the system is overdamped. It returns to its equilibrium position without oscillating.

* If ξ = 1 or c = Cc the system is critically damped. It returns to its equilibrium position without oscillating, but at a slower rate.

* If ξ < 1 or c < Cc the system is underdamped. The system oscillates about its equilibrium position with continuously decreasing amplitude.

Taking into account that all structures can be considered underdamped systems, as their damping ratio ξ is typically less than 0.10, equation (9) with the initial conditions u(0) and u′(0) has the solution below:

u(t) = e^(−ξωnt) [u(0) cosωDt + ((u′(0) + ξωnu(0))/ωD) sinωDt] (13)

where ωD is the natural frequency of damped vibration and is given by:

ωD = ωn (14)

Hence the natural period is:

TD = (15)

Undamped Forced Vibration of SDOF system

The equation of motion of such a system can be developed from its free body diagram below:

Considering the inertia force fin, the elastic spring force fs and the external dynamic load f(t), the equation of the motion is given by:

m+ ku = f(t) (16)

where f(t) = f0 sinωt is the maximum value of the force with frequency ω

By imposing the initial conditions u(0) and (0) the equation (16) has a general solution:

u(t) = u(0)cosωnt + sinωnt + sinωt (17)

Damped Forced Vibration of SDOF system

The equation of motion of such a system can be developed from its free body diagram below:

Considering the inertia force fin, the elastic spring force fs, the damping force fD and the external dynamic load f(t), the equation of the motion is given by:

m+ c+ ku = f(t) (18)

where f(t) = f0 sinωt

The particular solution of equation (18) is:

up = Csinωt + Dcosωt (19)

And the complementary solution of equation (18) is:



uc = e…(AcosωDt + Bsinωnt) (20)

2.2.2 Equation of motion of MDOF system

The equation of motion of a MDOF elastic system is expressed by:

M+ C+ Ku = -MAI(t) (21)

where M is the mass matrix, C is the damping matrix, K is the stiffness matrix, u” is the acceleration vector, u’ is the velocity vector and u is the displacement vector. Finally, AI is a vector with all the elements equal to unity and u”g(t) is the ground acceleration.

2.2 Earthquake and Structure Simulation

2.2.1 Equation of motion of SDOF system


Vibration is the periodic motion or oscillation of an elastic body or medium whose position of equilibrium has been disturbed. Vibration can be classified as either free or forced. A structure is said to be in a state of free vibration when it is disturbed from its static equilibrium by being given a small displacement or deformation and then released and allowed to vibrate without any external dynamic excitation.

The number of degrees of freedom (DOF) is the number of displacements needed to define the displaced position of the masses relative to their original position. Simple structures can be idealised as a system with a lumped mass m supported by a massless structure with stiffness k. It is assumed that energy is dissipated through a viscous damper with damping coefficient c. Only one displacement variable is required to specify the position of the mass in such a system, so it is called a Single Degree of Freedom (SDOF) system.

Undamped Free Vibration of SDOF systems

If there is no damping or resistance in the system, there is no reduction in the amplitude of the oscillation and theoretically the system will vibrate forever. Such a system is called undamped and is represented in the figure below:

By taking into consideration the inertia force fin and the elastic spring force fs, the equation of motion is given by:

fin + fs = 0 → mu″ + ku = 0 (4)

Considering the initial conditions u(0) and u′(0), where u(0) is the displacement and u′(0) is the velocity at time zero, equation (4) has the general solution:

u(t) = u(0)cosωnt + [u′(0)/ωn]sinωnt (5)

where ωn is the natural frequency of the system and is given by,

ωn = √(k/m) (6)

The natural period Tn and the cyclic natural frequency fn then follow from ωn:

Tn = 2π/ωn (7) fn = 1/Tn = ωn/2π (8)
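As a numerical illustration of equations (6)-(8), the sketch below computes the natural frequency, period and cyclic frequency of an SDOF system; the function names and the mass and stiffness values are illustrative assumptions, not values from the text.

```python
import math

def natural_frequency(m, k):
    """Angular natural frequency wn = sqrt(k/m) of an undamped SDOF system, eq. (6)."""
    return math.sqrt(k / m)

def natural_period(m, k):
    """Natural period Tn = 2*pi/wn, eq. (7)."""
    return 2.0 * math.pi / natural_frequency(m, k)

def cyclic_frequency(m, k):
    """Cyclic natural frequency fn = 1/Tn = wn/(2*pi), in Hz, eq. (8)."""
    return natural_frequency(m, k) / (2.0 * math.pi)

# Illustrative (assumed) values: m = 1000 kg, k = 4e6 N/m
wn = natural_frequency(1000.0, 4.0e6)   # rad/s
Tn = natural_period(1000.0, 4.0e6)      # s
```

Note that fn·Tn = 1 by construction, which provides a quick consistency check on any computed pair.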

Viscously damped Free Vibration of SDOF systems

The equation of motion of such a system can be developed from its free body diagram below:

Considering the inertia force fin, the elastic spring force fs and the damping force fD, the equation of the motion is given by:

mu″ + cu′ + ku = 0 (9)

Dividing the above equation by m gives:

u″ + 2ξωnu′ + ωn²u = 0 (10)

where ξ is the damping ratio and is given by:

ξ = c/Cc (11)

and Cc is the critical damping coefficient, given by:

Cc = 2mωn (12)

* If ξ > 1 or c > Cc the system is overdamped. It returns to its equilibrium position without oscillating.

* If ξ = 1 or c = Cc the system is critically damped. It returns to its equilibrium position without oscillating, but at a slower rate.

* If ξ < 1 or c < Cc the system is underdamped. The system oscillates about its equilibrium position with continuously decreasing amplitude.
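The three damping regimes above can be expressed directly in code. A minimal sketch, assuming consistent SI units; the function names are illustrative:

```python
import math

def damping_ratio(c, m, k):
    """xi = c / Cc, with critical damping coefficient Cc = 2*m*wn = 2*sqrt(k*m)."""
    cc = 2.0 * math.sqrt(k * m)
    return c / cc

def classify(c, m, k):
    """Classify an SDOF system by comparing xi with unity."""
    xi = damping_ratio(c, m, k)
    if xi > 1.0:
        return "overdamped"
    if xi == 1.0:
        return "critically damped"
    return "underdamped"
```

For example, with m = k = 1 the critical damping coefficient is Cc = 2, so any c below 2 gives an underdamped system.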

Since practically all structures can be considered underdamped systems, as their damping ratio ξ is typically less than 0.10, equation (9) with the initial conditions u(0) and u′(0) has the solution below:

u(t) = e^(−ξωnt)[u(0)cosωDt + ((u′(0) + ξωnu(0))/ωD)sinωDt] (13)

where ωD is the natural frequency of damped vibration and is given by:

ωD = ωn√(1 − ξ²) (14)

Hence the natural period of damped vibration is:

TD = 2π/ωD (15)
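Assuming the exponentially decaying form of equation (13) for an underdamped system, the free-vibration response can be evaluated as below; all numeric values are assumed for illustration:

```python
import math

def damped_free_response(t, u0, v0, m, k, xi):
    """Underdamped free vibration, eq. (13):
    u(t) = e^(-xi*wn*t) [u0*cos(wD*t) + ((v0 + xi*wn*u0)/wD)*sin(wD*t)]."""
    wn = math.sqrt(k / m)                 # undamped natural frequency, eq. (6)
    wD = wn * math.sqrt(1.0 - xi ** 2)    # damped natural frequency, eq. (14)
    return math.exp(-xi * wn * t) * (
        u0 * math.cos(wD * t)
        + (v0 + xi * wn * u0) / wD * math.sin(wD * t)
    )
```

After one damped period TD the displacement (for zero initial velocity) is reduced by the factor e^(−ξωnTD), which is the familiar logarithmic decrement behaviour.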

Undamped Forced Vibration of SDOF system

The equation of motion of such a system can be developed from its free body diagram below:

Considering the inertia force fin, the elastic spring force fs and the external dynamic load f(t), the equation of the motion is given by:

mu″ + ku = f(t) (16)

where f(t) = f0 sinωt, f0 is the maximum value (amplitude) of the force and ω is its frequency.

By imposing the initial conditions u(0) and (0) the equation (16) has a general solution:

u(t) = u(0)cosωnt + [u′(0)/ωn − (f0/k)(ω/ωn)/(1 − (ω/ωn)²)]sinωnt + (f0/k)[1/(1 − (ω/ωn)²)]sinωt (17)
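For the undamped forced case with ω ≠ ωn, the general solution (17) can be sketched numerically; the decomposition into a transient part (at ωn) and a steady-state part (at ω) is the standard one, and the function name and numeric values are illustrative assumptions:

```python
import math

def undamped_forced_response(t, u0, v0, m, k, f0, w):
    """General solution of m*u'' + k*u = f0*sin(w*t), eq. (17), for w != wn."""
    wn = math.sqrt(k / m)
    r = w / wn                      # frequency ratio
    ust = f0 / k                    # static displacement under f0
    amp = ust / (1.0 - r ** 2)      # steady-state amplitude
    return (u0 * math.cos(wn * t)
            + (v0 / wn - amp * r) * math.sin(wn * t)
            + amp * math.sin(w * t))
```

A quick check of the coefficients: at t = 0 the sine terms vanish, giving u(0) = u0, and differentiating at t = 0 gives (v0/ωn − amp·r)·ωn + amp·ω = v0, so both initial conditions are satisfied.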

Damped Forced Vibration of SDOF system

The equation of motion of such a system can be developed from its free body diagram below:

Considering the inertia force fin, the elastic spring force fs, the damping force fD and the external dynamic load f(t), the equation of the motion is given by:

mu″ + cu′ + ku = f(t) (18)

where f(t) = f0 sinωt

The particular solution of equation (18) is:

up = Csinωt + Dcosωt (19)

And the complementary solution of equation (18) is:

uc = e^(−ξωnt)(AcosωDt + BsinωDt) (20)

2.2.2 Equation of motion of MDOF system

The equation of motion of a MDOF elastic system is expressed by:

Mu″ + Cu′ + Ku = −M·AI·u″g(t) (21)

where M is the mass matrix, C is the damping matrix, K is the stiffness matrix, u″ is the acceleration vector, u′ is the velocity vector and u is the displacement vector. Finally, AI is a vector with all elements equal to unity and u″g(t) is the ground acceleration.

3. Description of the Method

3.1 Simplified Multi-Storey Shear Building Model

It is almost impossible to predict precisely which seismic action a structure will undergo during its lifetime, yet each structure must be designed to resist any seismic excitation without failing. For this reason each structure is designed to meet the requirements of the design spectrum analysis based on EC8. Some assumptions are also necessary in order to achieve the best and simplest idealisation of a multi-storey building. Initially it is assumed that the mass of each floor is lumped at the centre of the floor and the columns are massless. The floor beams are completely rigid and incompressible; hence the floor displacement is transferred equally to all the columns. The columns are flexible in horizontal displacement and rigid in vertical displacement, and they are fully fixed to the floors and the ground. The building is assumed to be symmetric about both the x and y directions, with a symmetric column arrangement. As a consequence, the centre of mass of each floor coincides with its centre of stiffness, and the position of this centre remains constant over the entire height of the building. Finally, it is assumed that there are no torsional effects at any floor.

Under these assumptions the building structure is idealised as a model in which the displacement at each floor is described by one degree of freedom. Thus, for an n-storey building, n degrees of freedom are required to express the total displacement of the building. The roof of the building is always counted as a floor.

The mass matrix M is a diagonal n×n matrix for an n-storey building and is given below. Each diagonal entry represents the total mass of one floor beam and its two corresponding columns, assumed to be lumped at that level.

M = diag(m1, m2, …, mn)

The stiffness method is used to formulate the stiffness matrix. The lateral stiffness k of each column, clamped at both ends and subjected to a unit sway, is given by:

k = 12EI/h³ (22)

where EI is the flexural stiffness of the column and h is the storey height. The stiffness of each floor is the sum of the lateral stiffnesses of all columns in that floor. The stiffness matrix for an n-storey building is:


K = the tridiagonal matrix with diagonal entries (k1 + k2, k2 + k3, …, kn−1 + kn, kn) and off-diagonal entries (−k2, −k3, …, −kn), where ki is the total lateral stiffness of storey i.
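The lumped mass matrix and the tridiagonal stiffness matrix of the shear building can be assembled as follows; this is a sketch using plain Python lists, with illustrative function and variable names, and with each storey stiffness taken as the sum of its column stiffnesses:

```python
def shear_building_matrices(masses, stiffnesses):
    """Assemble the lumped (diagonal) mass matrix M and the tridiagonal
    stiffness matrix K of an n-storey shear building.

    masses[i]      : lumped mass of floor i+1
    stiffnesses[i] : total lateral stiffness of storey i+1
                     (sum of the clamped-clamped column stiffnesses of that storey)
    """
    n = len(masses)
    M = [[masses[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Diagonal: stiffness of this storey plus the storey above (if any)
        K[i][i] = stiffnesses[i] + (stiffnesses[i + 1] if i + 1 < n else 0.0)
        # Off-diagonal coupling with the storey above
        if i + 1 < n:
            K[i][i + 1] = K[i + 1][i] = -stiffnesses[i + 1]
    return M, K
```

For a two-storey building with storey stiffnesses k1 and k2 this yields K = [[k1 + k2, −k2], [−k2, k2]], the familiar shear-building pattern.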

In order to calculate the natural modes of vibration, the system is assumed to vibrate freely. Thus u″g(t) = 0, and for systems without damping (C = 0) equation (21) specialises to:

Mu″ + Ku = 0 (23)

The displacement is assumed to be harmonic in time, that is u = Ue^(iωt), so that:

u″ = −ω²Ue^(iωt) (24)

Hence equation (23) becomes:

(K – ω2M)U = 0 (25)

The above equation has the trivial solution u = 0. For non-trivial solutions, u ≠ 0, the determinant of the left-hand side must be zero. That is:

|K – ω2 M| = 0 (26)

This condition leads to a polynomial in terms of ω2 with n roots, where n is the size of matrices and vectors as cited above. These roots are called eigenvalues.
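For a two-storey building the determinant condition (26) expands to a quadratic in λ = ω², which can be solved in closed form. A minimal sketch, using the shear-building matrix convention M = diag(m1, m2) and K = [[k1 + k2, −k2], [−k2, k2]]; the function name is illustrative:

```python
import math

def two_dof_frequencies(m1, m2, k1, k2):
    """Natural frequencies of a two-storey shear building from |K - w^2 M| = 0.

    Expanding the 2x2 determinant gives a quadratic in lam = w^2:
      m1*m2*lam^2 - (m1*k2 + m2*(k1 + k2))*lam + k1*k2 = 0
    """
    a = m1 * m2
    b = -(m1 * k2 + m2 * (k1 + k2))
    c = k1 * k2
    disc = math.sqrt(b * b - 4.0 * a * c)
    lams = sorted([(-b - disc) / (2.0 * a), (-b + disc) / (2.0 * a)])
    return [math.sqrt(lam) for lam in lams]   # eigenvalues lam = w^2, so w1 <= w2
```

With m1 = m2 = 1 and k1 = k2 = 1 the quadratic is λ² − 3λ + 1 = 0, giving ω1 ≈ 0.618 and ω2 ≈ 1.618 rad/s.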

By applying equations (7) and (8) to each root ωi, the natural period and frequency of vibration for each mode shape can be determined.

Each eigenvalue has a corresponding eigenvector, which represents the ith natural mode shape. After the eigenvectors are estimated, scale factors are applied to the natural modes to standardise their elements associated with the various degrees of freedom, so that the mode shapes can be compared. This process is called normalisation. Here each mode is normalised so that its largest element is unity.

The eigenvectors corresponding to distinct eigenvalues are orthogonal with respect to the mass and stiffness matrices. This is expressed by:

UiTMUj = 0 and UiTKUj = 0, for i ≠ j (27)

The classical eigenvalue problem has the following form:

(M⁻¹K − λI)u = 0 (28)

where λ =ω2 and I is the identity matrix.

EC8 suggests that the response in two modes i and j can be assumed independent of each other when

Tj ≤ 0.9 Ti

where Ti and Tj are the periods of modes i and j respectively (with Ti ≥ Tj). The calculated fundamental period can be checked against the expression that EC8 suggests:

T = Ct·H^(3/4)

where T is the fundamental period of the building, Ct is a coefficient and H is the total height of the building; this expression is valid for buildings whose total height does not exceed forty metres.
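The EC8 period check can be sketched as a small helper; the guard reflects the 40 m validity limit stated above, and any particular Ct value must be taken from the code tables (the value used in the comment is only an illustrative assumption):

```python
def ec8_fundamental_period(ct, height_m):
    """EC8 approximation T = Ct * H^(3/4) for the fundamental period.

    Valid only for buildings with total height H up to 40 m.
    """
    if height_m > 40.0:
        raise ValueError("T = Ct*H^(3/4) is valid only for buildings up to 40 m")
    return ct * height_m ** 0.75

# Example: ct = 0.075 (an assumed, commonly tabulated value) and H = 16 m
T_est = ec8_fundamental_period(0.075, 16.0)
```

Since 16^(3/4) = 8, the example evaluates to T ≈ 0.075 × 8 = 0.6 s, a convenient hand-check.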

3.2 Elastic Analysis

The response spectrum method is used to estimate the maximum displacement (uj), pseudo-velocity (u′j) and acceleration (u″j) for each calculated natural frequency. It is assumed that the MDOF system oscillates in each of its modes independently, so displacements, velocities and accelerations can be obtained for each mode separately by treating the modal responses as SDOF responses. Each maximum displacement, velocity and acceleration read from the design spectrum is multiplied by the participation factor αj to evaluate the maximum modal values uj,max, u′j,max and u″j,max respectively. The participation factor αj is defined by the following equation:


αj = (UjTM·AI)/(UjTM·Uj)

where UjT is the transpose of the jth mode shape vector, M is the mass matrix, AI is the unit vector and Uj is the jth mode shape vector.

The actual maximum displacements of the jth mode are given by:

uj = uj,max·Uj

Afterwards, the root-mean-square (RMS) approximation is used to calculate the maximum displacement at each floor. In this approach, the maximum values for each mode are squared and summed, and the square root of the sum is taken. If we let Dmax be the maximum displacement, then:

Dmax = √(Σj uj,max²) (29)

A very useful parameter for characterising the seismic behaviour of a building is the inter-storey drift, which can be obtained from the following equation:

δi = (Di − Di−1)/hi (30)

where Di and Di−1 are the horizontal displacements of two contiguous floors and hi is the corresponding storey height. The calculated values must be lower than 4% to comply with the Eurocode.
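Equations (29) and (30) can be sketched together. The helpers below assume the ground displacement is zero and that the displacement list is ordered from the lowest floor upwards; the function names are illustrative:

```python
def srss(values):
    """Square root of the sum of squares (RMS modal combination), eq. (29)."""
    return sum(v * v for v in values) ** 0.5

def interstorey_drifts(displacements, heights):
    """Inter-storey drifts delta_i = (D_i - D_{i-1}) / h_i, eq. (30).

    displacements : floor displacements from the lowest floor upwards
    heights       : corresponding storey heights
    The ground displacement is taken as zero.
    """
    drifts = []
    prev = 0.0
    for d, h in zip(displacements, heights):
        drifts.append((d - prev) / h)
        prev = d
    return drifts
```

For instance, floor displacements of 0.03 m and 0.05 m over 3 m storeys give drifts of 1.0% and about 0.67%, both well under the 4% limit.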

Afterwards the horizontal inertia forces Fj’s applied at each floor are obtained by applying the following equation:

Fj = M·Uj·u″j,max (31)

where M is the mass matrix, Uj is the eigenvector for each mode and u″j,max is the maximum acceleration.

As suggested in EC8, the root-mean-square approximation is used again to obtain the total lateral forces: the combined lateral force at each floor is the square root of the sum of the squares of the lateral forces at that floor over all the modes. If we let Ftotal,i be the combined lateral force at floor i, then:

Ftotal,i = √(Σj Fij²) (32)

where Fij is the lateral force at floor i of mode j.
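The SRSS combination of the modal floor forces (32) and the resulting overturning moment about the base can be sketched as follows; the input layout (one list of floor forces per mode) and the function names are assumptions made for illustration:

```python
def combined_floor_forces(modal_forces):
    """SRSS combination of lateral forces, eq. (32).

    modal_forces[j][i] : lateral force at floor i in mode j.
    Returns the combined force at each floor.
    """
    n = len(modal_forces[0])
    return [sum(fj[i] ** 2 for fj in modal_forces) ** 0.5 for i in range(n)]

def overturning_moment(forces, floor_heights):
    """Moment about the base: sum of each floor force times its height above ground."""
    return sum(f * h for f, h in zip(forces, floor_heights))
```

Summing the combined floor forces gives the base shear, and weighting each by its floor height gives the maximum overturning moment described below.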

Once the total lateral forces and the shear forces have been obtained, the maximum overturning moment is calculated.

3.3 Inelastic Analysis

The inelastic response spectra are generally obtained by scaling the elastic design spectra via response modification factors. The values calculated using the elastic design spectrum assume no energy absorption in the structure; by introducing the ductility factor this effect is taken into consideration.

Newmark described the ductility parameter μ as the ratio of the maximum displacement to the displacement at yield. When yielding does not take place the concept of ductility is not relevant and μ is taken equal to unity. The system is then described by the damping ratio ξ, the natural frequency ωn, and the ductility factor μ.

In order to calculate the new set of values of acceleration, displacement and velocity the design response spectrum has to be constructed. Newmark’s procedure leads to the construction of two modified spectra.

1. For maximum acceleration:

In this case the elastic design spectrum is reduced by the appropriate coefficients; the acceleration region of the graph is multiplied by the corresponding ductility-based reduction factor.

Climate Change Mitigation Strategies: UNFCCC and India

From UNFCCC Goals to India


Climate change has the potential to alter the ability of the earth's physical and biological systems to provide goods and services essential for sustainable development. Recognition of climate change as a significant global environmental challenge is of recent origin. International efforts to address climate change formally began only a decade ago, with the adoption of the United Nations Framework Convention on Climate Change (UNFCCC) in 1992.

India is a party to the UNFCCC, and the Government of India attaches great importance to the climate change issue.

India is a vast country covering 3.28 million km² with diverse surface features, and it supports 16.2 per cent of the global human population. Endowed with varied soils, climates, biodiversity and ecological regimes, and with over a billion people speaking different languages, following different religions and living in rural and urban areas under diverse natural conditions, India is an example of a complex yet successful democratic system. Decentralisation of powers through local government, to benefit the grass-roots level, is another significant feature of Indian governance. The 73rd and 74th Amendment Acts, 1992, of the Constitution of India have endowed vast powers to local governments at the rural and urban levels respectively. India's commitments to mitigate climate change are reflected in the essence of these two acts and in the working and powers given to local government.

This paper explains how climate change mitigation strategies are filtered through the Indian system, from the UNFCCC goals to the Government of India and further down to the smaller levels of local government. The paper explains the hierarchy and working of the Indian governance system and highlights the climate change initiatives within this system. It also analyses the constraints and gaps in the institutional setup at the local level which, if rectified, would give more successful results in the climate change mitigation mission of the Government of India.


Over a decade ago most countries joined an international treaty, the United Nations Framework Convention on Climate Change, to consider the impacts of climate change and to work on adaptation and mitigation initiatives for a secure future and sustainable development. The convention, commonly known as the UNFCCC, entered into force on 21 March 1994. The ultimate objective of the convention is stabilizing greenhouse gas concentrations at a level that would prevent dangerous anthropogenic interference with the climate system.

Under the convention, governments:

  • Gather and share information on greenhouse gas emissions, national policies and best practices
  • Launch national strategies for addressing greenhouse gas emissions and adapting to expected impacts, including the provision of financial and technological support to developing countries.
  • Cooperate in preparing the adaptation to the impacts of climate change.

In 1997 the Kyoto Protocol came into being. It shared the convention's objectives, principles and institutions, and significantly strengthened the convention by committing the parties to individual, legally binding targets to limit or reduce their greenhouse gas emissions. The text of the Kyoto Protocol was adopted unanimously in 1997, and it entered into force on 16 February 2005.

India is a signatory to various multilateral environmental agreements, including the Montreal Protocol, the Convention on Biological Diversity, the United Nations Convention to Combat Desertification and the United Nations Framework Convention on Climate Change (UNFCCC). The Government of India attaches great importance to climate change issues. Eradication of poverty, avoiding risks to food production, and sustainable development are three principles embedded in the convention. At present, the information provided in India's Initial National Communication to the UNFCCC follows the guidelines prescribed for Parties not included in Annex I to the UNFCCC, and the inventory is prepared for the base year 1994.

India is a vast country. It covers 3.28 million km² of area with diverse surface features. It occupies only 2.4 per cent of the world's geographical area, but supports 16.2 per cent of the global human population. The country is endowed with varied soils, climate, biodiversity and ecological regimes. “Under such diverse natural conditions, over a billion people speaking different languages, following different religions and living in rural and urban areas, live in harmony under a democratic system” (India NATCOM, 2004).

Climate Change Negotiations

Global warming became part of the international agenda in 1988. The climate issue, initiated by the small island nation of Malta, came up at the UN General Assembly in December 1988 as part of a discussion on ‘the common heritage of mankind’. The resolution set up a preparatory committee to work towards an international agreement. Concern for global warming, particularly among the industrialized countries, gathered pace from then on; ‘climate politics’ came into being and was refined through a series of international conferences and formal negotiations that followed. The momentum culminated in a Framework Convention on Climate Change (FCCC), opened for signature at the Rio Earth Summit in June 1992. The FCCC aims at stabilization of greenhouse gas (GHG) concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system. Subsequently, the parties to the FCCC adopted the Kyoto Protocol in December 1997. However, the developing nations see the Protocol as burdened with loopholes because it emphasizes economic concerns rather than ecological or social justice. The main area of dispute between the developed and developing countries lies in the sectors pertaining to equity and sustainability. The operational details of the Kyoto Protocol were finalised after intensive deliberations at Marrakech, on November 10, 2001, in which 171 countries participated.

The protocol has been guided by Article 3.0 of the FCCC, and marks the first global attempt to place legally binding limits on greenhouse gas emissions from developed countries. The Protocol calls for 5.2% reduction from their 1990 level of GHG emissions by the developed countries during the period 2008-2012. It also specifies the amount each country must contribute toward meeting the reduction goal. Nations with the highest CO2 emissions like the United States, Japan and most European nations are expected to reduce emissions by a range of 6 to 8 per cent. By 2005, all industrialized nations that ratify the accord must also show ‘demonstrable progress’ toward fulfilling their respective commitments under the Protocol.

Some issues that add to the complexity of the Kyoto Protocol:

Considerations for the baseline and its effects – The target of a 5.2% reduction below the 1990 level in the commitment period 2008-2012 was dependent on 1990 emissions. This meant that if a country had high emissions in 1990 and had reduced them thereafter, it could actually increase its emissions once again, or merely stabilize them, and not carry out any reductions.

As an example one can analyse the case of Australia. In 1990, as much as 30 per cent of its emissions were from deforestation, which eventually became a blessing for the country: instead of being penalized for creating the problem in the first place, Australia won the right to count any improvement from its 1990 level as national credit. And as its deforestation rate has been brought under control, it can actually increase its emissions above and beyond its 8 per cent target. On the other hand, the USA and Japan lobbied hard to change the baseline date from 1990 to 1995, because both countries significantly increased their carbon emissions between 1990 and 1995.

Flexible mechanisms – The Kyoto Protocol includes three mechanisms –

Art.6 (Joint Implementation),

Art.12 (Clean Development Mechanism) and

Art.17 (Emissions Trading),

These mechanisms are meant to provide an explicit way for developed countries to meet their Kyoto targets easily. The cheapest and most attractive option for meeting the emission targets of the North (i.e. the developed, Annex I countries) is the Clean Development Mechanism (CDM), operated on a project basis with investments in the South (i.e. the developing countries). This implies that, as global warming is likely to remain unsolved even by the end of this century, the South would have to pay a heavy price in future once it has reached a high level of energy efficiency through means like the CDM; for by then the cost of carbon cutting will be very high even for the developing countries, which would eventually have to do the carbon cutting on their own. The next issue concerns the energy-efficient technology which the North wishes to push to the South through the CDM. As technology upgrading is a continuous process, what is the most efficient technology at the time of implementation of a CDM project may be obsolete within a few years.

Principle of equity: the Kyoto Protocol does not define the rights and responsibilities of all nations within a reasonable frame. So long as the world remains within a carbon based energy economy, equitable sharing of the ‘atmosphere’ shall remain a critical issue, especially for poor developing countries who need a maximum space for their future economic growth.

The Kyoto reduction, by itself, is inadequate to achieve stabilization of climate change by 2100. Continual and larger reductions, similar to those stipulated in the Kyoto Protocol for the 2008-2012 period, will be needed in the future in order to begin to stabilize long-term greenhouse gas emissions. Even if stabilization of greenhouse gases is achieved, global warming will continue for several decades and sea levels will continue to rise for several centuries. This is because even if the emissions from the developed countries were reduced to zero in the near future, the current trends of growing emissions from developing countries alone could force the atmospheric concentration to exceed stabilization levels of 550 ppm (Parivesh, CPCB, 2006). Thus, participation of all countries, including developing countries such as India, is essential for a successful worldwide effort to arrest the growth of greenhouse gas emissions.

India and Climate Change- The Threats and Vulnerability

Climate change is a major global environmental problem and an important issue because of its diverse impacts, not only ecological but also economic, social, political and physical in nature and content. It is a matter of great concern especially for developing countries like India, which have limited capacity to develop and adopt strategies to reduce their vulnerability to changes in climate. Global, national and local measures are the need of the hour to combat the adverse impacts of climate-change-induced damages.

“India being a developing country has low capacity to withstand the adverse impacts of climate change due to the high dependence of the majority of the population on climate-sensitive sectors such as agriculture, forestry and fisheries” (Shukla, 2003). This is coupled with poor infrastructure facilities, weak institutional mechanisms and a lack of financial resources. This is the reason why we are seriously concerned with the possible impacts of climate change, which are mentioned below:

  • Water stress and reduction in the availability of fresh water due to potential decline in rainfall.
  • Threats to agriculture and food security, since agriculture is monsoon dependent and rain dependent agriculture dominates in many states.
  • Shifts in area and boundary of different forest types and threats to biodiversity with adverse implications for forest-dependent communities.
  • Adverse impact on natural ecosystems, such as wetlands, mangroves, grasslands and mountain ecosystems.
  • Adverse impact of sea-level rise on coastal agriculture and settlements.
  • Impact on human health due to the increase in vector and water-borne diseases, such as malaria.
  • Increased energy requirements and impact on climate-sensitive industry and infrastructure.

One of the various reasons for India's vulnerability is its typical and diverse climatic conditions. India is subject to a wide range of variation in climate, from the freezing Himalayan winters in the north to the tropical climate of the southern peninsula, from the damp, rainy climate in the north-east to the arid Great Indian Desert in the north-west, and from the marine climates of its vast coastline and islands to the dry continental climate in the interior. The Indian summer monsoon is the most important feature dictating the meteorology of the Indian subcontinent and, hence, its economy. Almost all regions of the country receive their entire annual rainfall during the summer monsoon (also called the SW monsoon), while some parts of the south-eastern states also receive rainfall during early winter from the north-east monsoon. Therefore, India could be more at risk than many other countries from changes in temperature and sea level.

Models predict an average increase in temperature in India of 2.3 to 4.8 °C for the benchmark doubling of the carbon dioxide scenario (Lonergan, World Bank Technical Paper No. 402, 1998). Temperatures would rise more in northern India than in southern India. In the North Indian Ocean, under a doubling, the average number of tropical disturbance days could increase from 17 to 29 a year (Haarsma, Climate Dynamics, Vol. 8, 1993); while, without protection, approximately 7 million people would be displaced, and 5,760 km² of land and 4,200 km of road would be lost (Asthana, JNU, New Delhi, 1993). Further, in the Indian context, climate change could represent an additional stress on ecological and socioeconomic systems that are already facing tremendous pressure due to rapid urbanization, industrialization and economic development.

Options for Mitigation

“The ability to adapt to climate change depends on the level of income and technology, as well as the capacity of the system of governance and existing institutions to cope with change. The ability to mitigate GHG emissions depends on industrial structure (the mix of industrial activities), social structure (including, e.g., the distance people must travel to work or to engage in recreational activities), the nature of governance (especially the effectiveness of government policy), and the availability and cost of alternatives. In short, what is feasible at the national level depends significantly on what can be done at the subnational, local, and various sectoral levels” (Climate Change 2001: Working Group III: Mitigation; IPCC, 2001). The challenges of climate change mitigation involve diverse issues – economic, political, social and environmental. Governance is one of the prime issues in the mitigation of climate change impacts. A structured governance system is the only tool through which any policy framework or initiative can be achieved. The importance and role of governance in mitigation can thus be described through its three pillars:

  • Organizational Structure- Through governance the qualities of organization, participation, transparency and accountability can be achieved in the mitigation exercise at all levels.
  • Financial Mobilization- This involves ensuring financial commitment globally, at national levels and also at local levels of the government
  • Legal Framework- It ensures empowerment, enforcement and compliance of mitigative strategies and supporting environmental laws.

As the National GHG inventory for India shows, the major increase in GHG emissions over the next 20 years would be related to energy consumption. As India has abundant coal deposits, it is beyond doubt that coal will be the dominant source of energy. Therefore, energy efficiency measures in this sector remain our prime concern. Power generation in India is expected to reach a peak demand of 176 GW by 2012, and the total energy requirement will be 1058 billion units (Parivesh, Central Pollution Control Board, 2006).

This is why increasing the use of renewable energy and improving energy efficiency, as low-carbon options, are the two main measures that can greatly reduce GHG emissions. We will now specify which scientific mitigation tools for climate change are available for the various sectors, together with the corresponding governance measures to actually target the process of mitigation.

The energy sector:

  • Fiscal incentives and taxes, voluntary emission reductions, green rating, capacity building etc. Another area of importance is transmission and distribution losses, which represent energy loss.
  • There is considerable scope for reducing these losses, which translates into a large mitigation potential.
  • Two major categories of barriers hinder the adoption of electricity conservation and demand management in India:

a) Macro-level barriers – at the level of the governance system; either policy induced or due to a lack of appropriate policies; and

b) Micro-level barriers – related to the consumers and the economic environment they face. This can be equated to a lack of awareness about possible alternatives on the part of the consumers and a lack of awareness drives on the part of the government.


The forestry sector:

The IPCC Second Assessment Report categorizes three broad options for abatement, viz.

  • Conservation management: This strategy attempts to conserve the existing carbon storage capacity of forests by halting or slowing down deforestation and forest degradation.
  • Storage management: This strategy attempts to increase carbon storage in the woody vegetation and soil of existing degraded forests, as well as to create new carbon sinks in areas where forests do not exist or have been cleared. This may be achieved by promoting natural regeneration, reforestation of deforested lands, afforestation of non-forest lands and agro-forestry on crop and pasture land.
  • Substitution management: This strategy involves the replacement of fossil fuels by renewable fuel wood or other biomass products.

Here, governance plays an important role through its capacity to generate and bring about changes in the management of forests and to expand the use of renewable products.

The agriculture sector:

Methane emissions from rice cultivation remain the major contributor of GHG emissions in this sector; other sources include enteric fermentation, manure management, and agricultural soils. An abatement strategy in this sector can be achieved given the scientific expertise available in India, but it requires proper governmental intervention: policy initiatives at the level of the Ministry of Agriculture, and implementation and monitoring through local governments.

The industrial sector:

As the national inventory of GHGs shows, the major contribution came from energy-intensive sectors such as iron & steel, fertilizer, cement, aluminium, and paper & pulp. A few energy-efficient options available in the power, industrial, and domestic sectors are given below:

Source: TERI, New Delhi.

These can be supported by further subsidizing the use of energy-efficient options and, where required, by making them mandatory under the country's existing environmental laws.

Mitigation through sinks:

Carbon dioxide is removed from the atmosphere by a number of processes that operate on different time scales, and is subsequently transferred to reservoirs or sinks. The Kyoto Protocol, through its Article 3.3, allows afforestation as a sink to reduce carbon dioxide levels in the atmosphere. Further, Article 3.4 of the Kyoto Protocol states that additional human-induced activities in the agricultural soils and LULUCF categories may be added to the three mechanisms (Joint Implementation, Clean Development Mechanism and Emissions Trading) subject to certain conditions.

In India, forestry is dominated by government-based institutions. These institutions need new insight so that they can effectively incorporate mitigation policies and measures into their resource-management activities. According to the Central Pollution Control Board, India has been persistently implementing one of the largest reforestation programmes in the tropics, with over one million hectares planted annually. Nearly half of this reforestation is on degraded forests and village common land. The carbon uptake in forests, degraded forests, and plantations is estimated to offset the gross carbon emissions from the forestry sector. Carbon dioxide emissions in India are projected to increase from no net emissions in 1990 to 77 million tonnes by 2020 (Parivesh, CPCB, 2006).

Barriers to mitigation:

Greenhouse gas mitigation measures are compounded by several barriers inherent to the process of development. In India, the inequitable distribution of income and wealth is a core barrier to the effective implementation of any type of intervention, let alone climate change measures. Available instruments to limit domestic GHG emissions can be categorized into market-based instruments, regulatory instruments, and voluntary agreements. For developing countries, however, domestic structural reforms and policies on trade liberalization and the liberalization of energy markets can act as barriers to GHG reduction. These policies, coupled with macroeconomic, market-oriented reforms, set the framework within which more specific climate policies would be implemented. The IPCC Special Report on Technology Transfer (IPCC, 2000) identifies various important barriers that could impede environmental technology transfer, such as:

  • lack of data, information, and knowledge, especially on emerging technologies;
  • inadequate vision about the understanding of local needs and demands;
  • high transaction costs and poor macro economic conditions;
  • insufficient human and institutional capabilities;
  • adoption of inappropriate technology; and
  • poor legal institutions and framework.

These hold good for the overall barriers to mitigation in the Indian context as well. In terms of governance and its intervention, technology transfer can be traded against some of our own indigenous technologies; this would ensure equitable exchange and also promote indigenous Indian science.

National Policy for Climate Change Mitigation

We, the present generation, have inherited this environment and atmosphere from our ancestors, and the consequences of climate change will be faced by our children in the future. Climate change is thus an inherently different, and irreversible, problem compared with other environmental problems. The assumption that prior experience with problems such as air pollution provides a good model for climate policy decisions has also failed at many levels. Options to mitigate climate change include actual emission reductions, carbon dioxide sequestration, and investments in developing technologies that will make future reductions affordable relative to their current costs. Since the inception of the UNFCCC in 1992, the Govt. of India has been an active participant in the climate change negotiations. India, being a party to the UNFCCC, was the 38th country to ratify it, on November 01, 1993. The Ministry of Environment & Forests is the nodal Ministry for all environment-related activities in the country and is the nodal Ministry for coordinating climate change policy as well. A working group on the FCCC was constituted to oversee the implementation of obligations under the FCCC and to act as a consultative mechanism within the Govt., providing inputs to policy formulation on climate change. To enlarge the feedback mechanism, the Govt. of India has constituted an Advisory Group on Climate Change under the chairmanship of the Minister of Environment & Forests.

Development of National Guidelines & Policy Options for reducing GHG Emissions

The national guidelines or framework for monitoring GHG emissions, and policy options for reducing GHGs, should emphasize not only issues associated with climate change but also include the following:

  • Emission Forecasting
  • Setting goals
  • Policy criteria
  • Policy evaluation
  • Organizational and political issues

Climate change, GHG emissions, and sequestration may involve many sectors of society and extend far into the future. Furthermore, policy measures to address GHGs overlap with many other public policy objectives, often in a complementary way. Policy formulation involves:

  • Understanding the issues at hand,
  • Having a broad vision of the range of actions that governments can take to address those issues,
  • Selecting from within this range the approaches that offer the most potential for achieving multiple public goals.

More importantly, the policy formulation process must respond to local circumstances and must address institutional, fiscal, political, and other constraints. The Govt. of India has nevertheless addressed a large number of local and regional environmental issues in its developmental strategy that are complementary to the climate change issue.

Institutional Arrangements So Far For Climate Change Related Strategies

In Area of Research

The Ministry of Environment and Forests (MoEF), Ministry of Science and Technology (MST), Ministry of Agriculture (MoA), Ministry of Water Resources (MWR), Ministry of Human Resource Development (MHRD), Ministry of Non-Conventional Energy Sources (MNES), Ministry of Defence (MoD), and Ministry of Health and Family Welfare (MoHFW) are the main ministries of the Government of India which promote and undertake climate and climate change-related research in the country. The Indian Space Research Organization (ISRO), under the direct governance of the Prime Minister, is also an important agency in this area; it supports all the above agencies with satellite-based passive remote sensing. The MoEF, MST, MHRD and MoA have under their umbrella many premier national research laboratories and universities, the most prominent being the 40 laboratories of the Council of Scientific and Industrial Research (CSIR), an autonomous body under the MST, and the vast network of the Indian Council of Agricultural Research (ICAR) under the MoA. The CSIR is the national R&D organization which provides scientific and industrial research for India's economic growth and human welfare; it has a countrywide network of 40 laboratories and 80 field centres. The ICAR network includes institutes, bureaus, and national research centres. The Department of Science and Technology (DST) under the MST coordinates advanced climatic and weather research and data collection over the Indian landmass. Three premier institutions under the DST are solely dedicated to atmospheric science: the IMD, the National Centre for Medium Range Weather Forecasting (NCMRWF) and the Indian Institute of Tropical Meteorology (IITM).

Apart from the Indian initiatives, climate change research promoted by international organizations like the World Climate Research Programme (WCRP), International Geosphere-Biosphere Programme (IGBP), International Human Dimensions Programme (IHDP) and DIVERSITAS is strongly supported by various Indian agencies, such as the Indian Climate Research Programme (ICRP) under the DST, the National Committee for the International Geosphere-Biosphere Programme (NC-IGBP) constituted by the Indian National Science Academy (INSA), and the Geosphere-Biosphere Programme (GBP) of ISRO. Agencies like the CSIR also provide infrastructural and financial support for research in the area of global change.

In Area of Development

The single most important feature of our post-colonial experience is that the people of India have conclusively demonstrated their ability to forge a united nation despite its diversity, and to pursue development within the framework of a functioning, vibrant and pluralistic democracy. In this process, the democratic institutions have put down firm roots, which continue to gain strength and spread. A planned approach to development has been the central process of the Indian democracy, as reflected in the national five-year plans, state plans, departmental annual plans, and perspective plans of various ministries of the central and state governments. For the last five and a half decades, the guiding objectives of the Indian planning process have been sustained economic growth, poverty alleviation, food, health, education and shelter for all, containing population growth, employment generation, self-reliance, people's participation in planning and programme implementation, and infrastructure development.

The National Conservation Strategy and Policy Statement on Environment and Development, 1992, provides the basis for the integration of environmental considerations in the policies of various sectors. It aims at the achievement of sustainable lifestyles and the proper management and conservation of resources.

The Policy Statement for Abatement of Pollution, 1992, stresses the prevention of pollution at the source, based on the ‘polluter pays’ principle. It encourages the use of the most appropriate technical solutions, particularly for the protection of heavily polluted areas and river stretches. The Forest Policy, 1988, highlights environmental protection through preservation and restoration of the ecological balance. The policy seeks to substantially increase the forest cover in the country through afforestation programmes. This environmental framework aims to take cognizance of the longer-term environmental perspective related to industrialization, power generation, transportation, mining, agriculture, irrigation and other such economic activities, as well as to address parallel concerns related to public health and safety.

The statutory framework for the environment includes the Indian Forest Act, 1927, the Water (Prevention and Control of Pollution) Act, 1974, the Air (Prevention and Control of Pollution) Act, 1981, the Forest (Conservation) Act, 1980, and the Environment (Protection) Act, 1986. Other enactments include the Public Liability Insurance Act, 1991, the National Environment Tribunal Act, 1995, and the National Environment Appellate Authority Act, 1997. The courts have also elaborated on the concepts relating to sustainable development, and the ‘polluter pays’ and ‘precautionary’ principles. In India, matters of public interest, particularly pertaining to the environment, are articulated effectively through a vigilant media, an active NGO community, and very importantly, through the judicial process, which has recognized the citizen’s right to a clean environment as a component of the right to life.

Economic Impact of Climate Change on Water Resources

Economic Impacts of Climate Change in the Mountain Regions: Water as a source of peace and economic development


When we think of the mountains, we usually think of the mountains themselves and not the impact they have on the areas below them. The purpose of this research is to review the impacts of climate change, at a global scale, on the mountains and the mountains’ water supply. The paper also reviews the major environmental/ecological, social, and economic issues facing us, including the harm the tourism industry will suffer.

The study concludes that climate change will bring instability on a global scale, with possible water conflicts and declining economic development, especially in developing countries. This will increase migration into areas less affected by water-supply issues and will increase social and political instability in those areas.

Keywords: relative water yield (RWY), “water towers”, “river piracy”


It is estimated that, of the 7.382 billion people in the world today (U.S. Census, 2017), about 11% live in mountain regions (Kohler et al., 2014). The mountains provide water for billions of people; they are the “water towers” of the world. They cover 25% of the world’s land surface, and more than 50% of the world’s population depends on water that originates in the mountains (Viviroli et al., 2007).

By comparison, in 2015 desalinated water accounted for less than 1% of the fresh water we used; that water is produced in more than 18,000 desalination plants, and desalination production increased by 67% from 2008 (Thomas Sumner, 2016). The water from mountains is used for drinking, domestic use, irrigation, hydropower, transportation, tourism, and many other industries. Climate change in the mountains is bringing unpredictable winters: winters with minimal snow, or snow cover that lasts for only a short time, are the winters of today (J. Dawson, 2009).

Climate change in the mountains will bring increased hazards and casualties, such as: fires, floods, avalanches, landslides, desertification, and mountain erosion. It will change the rainfall and monsoon patterns which will bring devastation and economic uncertainty to many regions. Climate change will increase people’s migration and will bring diseases not known in the area. The possibility of conflicts and even war might also increase.

Water as a source of peace and economic development

Mountains as water source around the world

Climate change might have devastating outcomes especially for semi-arid and arid areas, which will be affected by less water coming from the mountains: mountains distribute up to 95% of the water in these areas, and up to 60% in humid areas (Swiss Agency, 1998). Figure 1 below shows mountain water runoff around the world.

Figure 1: Disproportionality of mountain runoff formation relative to average lowland runoff (RWY), mapped cell by cell for mountainous areas. Disproportionality in favor of runoff is given when RWY is greater than 1, its importance being marked for RWY > 2 and essential for RWY > 5 (Viviroli et al, 2007).

As can be seen in Figure 1, the most important mountain water sources are regions in the Middle East, South and Central Africa, Asia, the Rocky Mountains in the U.S., and the Andes.

In Figure 1, we can see that relative water yield is very important for the lowland areas where RWY is greater than 1; it is marked where RWY is greater than 2 (light red) and essential where RWY is greater than 5 (dark red).
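To make the RWY thresholds concrete, here is a minimal Python sketch of the classification used in Figure 1 (the function name and return labels are illustrative, not taken from Viviroli et al.):

```python
def rwy_class(rwy: float) -> str:
    """Classify relative water yield (RWY) following the thresholds of
    Viviroli et al. (2007): disproportionality in favor of mountain runoff
    when RWY > 1, 'marked' when RWY > 2, 'essential' when RWY > 5."""
    if rwy > 5:
        return "essential"
    if rwy > 2:
        return "marked"
    if rwy > 1:
        return "disproportionate"
    return "not disproportionate"
```

For example, a mountain cell yielding 5.5 times the average lowland runoff would be classed as "essential".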

The Himalayan Mountains alone supply water to over 2 billion people in China, India, Pakistan, Bangladesh, Burma, Laos, Thailand, Vietnam and Cambodia (VOA, 2015).

Mountains in Africa are as important as the Himalayan Mountains in Asia: they provide the water source for farming for millions of people and sustain many African ecosystems, such as forests, grasslands, drylands, rivers and wetlands. Without these mountains, Africa would not be the Africa we know (Mountain Partnership).

The Nile River is a very important water source in Northern Africa; its waters are the primary water source of Egypt and Sudan, and it is the longest river in the world. The Rwenzori Mountains and the Ethiopian Highlands are the most permanent sources of the tributaries of the River Nile (Unesco, 2017). About 250 million people in 11 countries depend on the waters of the Nile (Salman, 2016). A former UN Secretary-General once said, “The next war in our region will be over the waters of the Nile, not over politics…”

Boutros Boutros-Ghali, former foreign minister of Egypt and former UN Secretary General. (Quoted in: International Fresh Water Resources, 1997)

Figure 2: The map shows all the land areas that are connected to and supplied with water by mountains in Africa (Mountain Partnership).

Outcomes and evaluation of disappearing mountain-sourced water

About 5,500 glaciers in the Himalayan region could disappear or lose 70%-99% of their ice volume by 2100. That would be devastating for the people using the water resources in the area (J. M. Shea et al., 2015).

The study by Dr Joseph Shea estimates changes in ice volume in the Himalayan Mountains under two emissions scenarios adopted by the Intergovernmental Panel on Climate Change (IPCC): RCP 4.5, in which emissions stabilize by the 2050s, and RCP 8.5, the highest-emissions scenario used by the IPCC. In Figure 3, blue lines show RCP 4.5 and red lines show RCP 8.5.

Using the highest-emissions scenario shows a decrease in ice volume of almost 100%: the model in Figure 3 shows a possible complete meltdown of all glaciers by 2100. This scenario might be repeated in other mountain regions, with devastating results for farming and other industries which need water or snow to survive.

Figure 3: Projected loss of glacier volume through the 21st century for RCP4.5 (blue lines) and RCP8.5 (red lines) emissions scenarios. Thin lines show individual model results and bold lines show average across all models. Source: Shea et al. (2015).
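The projected losses can be illustrated with a toy exponential-decline model. This is not the glaciological model of Shea et al. (2015); the decay rates below are simply back-fitted so that the 2100 endpoints match the 70% and 99% loss figures quoted above:

```python
import math

def remaining_ice_fraction(year: int, annual_decay: float,
                           start_year: int = 2015) -> float:
    """Toy model: fraction of glacier ice volume remaining, assuming a
    constant exponential decay rate (illustrative only, not Shea et al.'s
    glacio-hydrological model)."""
    return math.exp(-annual_decay * (year - start_year))

# Rates back-fitted to the quoted 2100 endpoints (85 years after 2015):
RATE_RCP45 = math.log(1 / 0.30) / 85  # ~70% loss by 2100 under RCP 4.5
RATE_RCP85 = math.log(1 / 0.01) / 85  # ~99% loss by 2100 under RCP 8.5
```

Such a sketch only reproduces the endpoints; the bold curves in Figure 3 come from ensembles of physically based glacier models.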

In Tibet, people can already see big changes. One Tibetan returning to Lhasa after 30 years said, “When I lived in Lhasa, it was very rare that people could walk outside in T-shirts. Now people are walking in shorts!” (Dorje, 2015).

This water source now supplies water to 2 billion people, and by 2050 that might increase to 2.7 billion. A study conducted in 2014 shows that most rivers’ water flow will increase until 2050 because of melting glaciers, after which the flow is projected to decrease. It is very important for researchers and governments to work on water policies to make sure that there will be enough water for the growing population in the future (Lutz, 2014).

South Asia, especially Pakistan, India, Bangladesh and Nepal, might get some help from the monsoons. Studies on the subject show multiple scenarios: some predict that climate change might alter the direction of the monsoons so that the rain falls over the sea; some show that the monsoons will bring more rain and start unimaginable floods; others predict less rain. Whatever happens, things will change for the worse for the local population.

In South America, climate change is altering the Andes mountain environment. Mountain ecosystems there known as “páramos” help to provide clean water and protect the lowlands against flooding. These ecosystems are located at 11,000 feet or more above sea level and contain plants called espeletia, which can hold several times their weight in water (Autumn Spanne, 2012). Climate change has increased the temperature and the moisture in the mountains, and the ecosystem can no longer handle it. On March 31, 2017, flooding and mudslides from the mountains killed 254 people in the city of Mocoa, which lies at an elevation of about 2,002 feet (Jaime Saldarriaga, 2017).

Glaciers in the mountain regions

Most glaciers are melting, as can be seen in the photos in Figure 4 and Figure 5 (Burkhart et al., 2016).

Figure 4: Columbia Glacier, Alaska, has retreated by 6.5 km (4 miles) between 2009 (left) and 2015 (right) (Credit: James Balog and the Extreme Ice Survey)
Figure 5: Stein Glacier, Switzerland, has retreated by 550 m (1,800 ft) between 2006 (left) and 2015 (right) (Credit: James Balog and the Extreme Ice Survey)

The glaciers’ melting not only raises sea levels; millions of people are also supplied with water from them. These photos show clearly how quickly things can change (Burkhart et al., 2016). The loss of glaciers is not only the loss of a water source but also the loss of environmental archives: scientists use glaciers to study the snow that has accumulated into layers within them (Burkhart et al., 2016).

The first observed case of “river piracy” was in 2016. River piracy occurs when a river changes course and is diverted from one river bed to another. The study shows that, because of a melting and retreating glacier, the river changed its course from northward to southward (Shugar et al., 2016); Figure 6 shows that change.

Figure 6: Guardian graphic | Source: Nature Geoscience

Tourism – Ecotourism

U.S. winter tourism brings in $12.2 billion and employs 211,900 people during the winter season. The National Ski Areas Association stated that in the 2009-10 ski season, 88% of resorts had to use artificial snow to stay open (Burakowski et al., 2012).

It is very expensive to make snow: East Coast ski areas spend anywhere from $500,000 to over $3.5 million making snow every season, and the expense goes up every year (Flynn, 2013). A recent study of Northeastern U.S. ski resorts estimated that only 4 out of 14 major ski resorts, i.e. only 29% of them, would remain profitable by 2100 (Burakowski et al., 2012).
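As a quick arithmetic check of the profitability figure (4 of 14 major resorts):

```python
# 4 of 14 Northeastern U.S. resorts projected to remain profitable by 2100
profitable_resorts = 4
total_resorts = 14
share = round(100 * profitable_resorts / total_resorts)
print(f"{share}% of resorts remain profitable")  # prints "29% of resorts remain profitable"
```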

Climate change shortens the winter and the snowfall in the mountains: winter starts later in the year and ends sooner (Dawson, 2009). Ski areas must therefore develop summer activities in order to support the economy of these areas.

Climate change and the hazards in the mountain region

It is estimated that climate change in the mountains will not only disrupt water supply but will also increase natural hazards in and around the mountains. As can be seen in Figure 7, all mountain regions will be affected.

Figure 7: Climate change and the incidents of hazards in the mountain regions

The changes in the mountain areas are seldom thought of as an increase in hazards and casualties, but mountain fires, floods, avalanches and landslides will increase. Climate change will alter rainfall and monsoon patterns, bringing devastation and economic uncertainty to many regions.


There have to be cooperative water agreements between countries. Water quantity and quality need to be discussed and put into the policies of all interested parties. Water issues and water awareness, including water efficiency, must be communicated to local populations. The environment and environmental issues must be taken into consideration when policies or projects are implemented.

It is accepted as fact that more fresh water will be needed, especially in the regions where the mountains provide the water. Planning future water supplies for farming and for human use is needed in order to avoid social and political instability in those areas.

Literature Cited

Autumn Spanne, South American Cities Face Flood Risk Due to Andes Meltdown, 12/03/2012, Scientific American

Burakowski, E and Magnusson, M, Climate Impacts on the Winter Tourism Economy in the United States, 12/2012, Natural Resources Defense Council (NRDC)

Burkhart P, Alley R, Thompson L, Balog J, Baldauf P, Baker G, Savor the Cryosphere, 12/13/2016, The Geological Society of America, Inc.

J. Dawson, Climate change analogue analysis of ski tourism in the northeastern USA, 04/28/2009, University of Waterloo

Dorje, Y “Researchers: Tibetan Glacial Melt Threatens Billions”, The Voice of America (VOA), 01/28/2015

Casey Flynn, Cost of snowmaking, 01/05/2013, ESPN

Fust W,”Mountains of the World: Water Towers for the 21st Century” Swiss Agency for Development and Cooperation, 1998

Kohler, T., Wehrli, A. & Jurek, M., eds. 2014. Mountains and climate change: A global concern. Sustainable Mountain Development Series. Bern, Switzerland, Centre for Development and Environment (CDE), Swiss Agency for Development and Cooperation (SDC) and Geographica Bernensia. 136 pp.

Kohler T. and Maselli D. (eds) 2009. Mountains and Climate Change – From Understanding to Action. Published by Geographica Bernensia with the support of the Swiss Agency for Development and Cooperation (SDC), and an international team of contributors. Bern.

A. F. Lutz, “Consistent increase in High Asia’s runoff due to increasing glacier melt and precipitation”, Nature Climate Change, 06/01/2014

Mountain Partnership, “Mountains as the water towers of the world: A call for action on the sustainable development goals (SDGS)”,

Mountain Partnership, African Mountains: Water Towers in need of Attention

Jaime Saldarriaga, Rescuers, locals dig for Colombia flood victims, 254 die, 04/01/2017, WORLD NEWS

SALMAN M.A. SALMAN, Water Security in the Nile Basin, 03/12/2016, Fair Observer

J. M. Shea, Modelling glacier change in the Everest region, Nepal Himalaya, The Cryosphere, 05/27/2015

Schewe, J and Levermann, A, A statistically predictive model for future monsoon failure in India, 11/05/2012, IOP Publishing Ltd

Thomas Sumner, New desalination tech could help quench global thirst, 09/12/2016, Science News

Unesco, Rwenzori Mountains National Park, 2017

U.S. Census Bureau, “U.S. and World Population Clock”, 03/2017

Viviroli, D., H. H. Dürr, B. Messerli, M. Meybeck, and R. Weingartner (2007), Mountains of the world, water towers for humanity: Typology, mapping, and global significance, Water Resour. Res., 43, W07447, doi:10.1029/2006WR005653.