Career Goals Essay

Have you ever had a goal in life you just had to reach? My goals in life are to join the Navy and become a civil engineer. I realize that there are numerous ways to become a civil engineer, but I chose to join the Navy because I feel it would be the most beneficial path for me. My first career goal after high school is to join the Navy. To do this I will need to meet with a recruiter, take the ASVAB, and then pass a series of fitness tests.

Once in the Navy I will need to attend Officer Candidate School and then Civil Engineering Corps Officer School. After I have completed this, I will also need to earn a four-year degree in civil engineering. There are several reasons I want to join the Navy. One reason is that while I am in the Navy they will send me to college at no cost to me.

Another reason is that if I enlist long enough I will be eligible to retire. I also want to join the Navy so I can travel the world. Joining the Navy is the path I have chosen to become a civil engineer.

I chose to join the Navy over simply going to college because I believe the Navy has more to offer me. Among the added benefits of joining the Navy are that it will provide me with a job, that I will get paid while I am in school, and that it will provide me with a place to stay. The added requirements of being in the Navy are that I must attend Officer Candidate School and Civil Engineering Corps Officer School. One specific requirement to become a civil engineer is a four-year degree in civil engineering.

My top two choices of colleges for this degree are the University of Washington and the University of Arizona. Other paths I could have chosen are a degree in technical engineering or in electrical engineering. I chose civil engineering because I feel it will best prepare me for the career I have chosen. My career goals in life are to join the Navy and become a civil engineer. I chose this path because I feel the Navy offers me more opportunity to travel and to get hands-on experience, and because the Navy will pay for my schooling.


Mining Engineer Essay

1. Introduction

This paper discusses several aspects of the mining engineering profession: the background of mining engineering, the requirements for becoming a mining engineer, earnings for mining engineering professionals, and related issues.

2. Education/Training

Mining engineering, like other engineering degrees, requires the mastery of several subjects: mathematics, including trigonometry, geometry, calculus, and algebra; general science (physics, chemistry, and biology); social studies and humanities; information technology; and some courses in English, since jobs in mining usually involve many people from different cultural and language backgrounds (Daub, 2006).

A bachelor's degree in mining engineering usually takes about 4-5 years. During the first two years, students study mathematics and general science; the specialization in mining engineering occurs in the last two years, in which students learn about geology, mine management, physical mineralogy and petrology, and explosives engineering.

Universities that offer mining engineering include the University of Arizona, the University of Utah, and West Virginia University in the U.S. (Daub, 2006), and the University of Exeter in the U.K. At the University of Exeter (2007), undergraduate students take extensive courses in mathematics and physics in order to equip them with problem-solving capability.

3. Job Skills, Talents, and Experience

Mining engineering requires not only technical expertise but also physical fitness, since jobs in mining engineering involve hiking and working in a variety of conditions such as daylight, rain, and wind (Daub, 2006). Particular jobs in mining engineering involve open-pit or underground mines, construction supervision, safety, equipment operation and maintenance, and information processing, to name a few (Sloan Career Cornerstone Center, 2007). Table 1 compares several mining engineering roles in terms of required skills and talents; each has different required capabilities.

Table 1 Comparison of Skills and Talents between Mining Engineers

No.  Type of Engineer           Required Skills/Talents
1    Blasting Engineer          Develops blasting schedules and techniques in support of long-term production goals
2    Sr. Mining Engineer        Capable of developing and applying economic models to geological information systems
3    Mine Engineer              Performs routine activities in the operation and maintenance of mining equipment and systems
4    Senior Project Geologist   Capable of designing and operating drilling programs to examine the exploration potential of a location

Source: (Sacrison Engineering, 2007; Kinross Gold Corporation, 2007)

4. Earnings

Salaries for mining engineering jobs vary with experience, skills, industry, and job type. General mining engineers typically earn at least about $46,000 annually, while engineers who work in coal exploration may earn at least $50,000 per annum (Daub, 2006). Meanwhile, according to a 2005 salary survey conducted by the National Association of Colleges and Employers, mining engineers may typically expect a starting salary of about $48,643 per annum. Table 2 shows the salary distribution in 2004 (Daub, 2006).

Table 2 Distribution of Mining Engineer Salaries in 2004

Percentile   10th      25th      50th      75th      90th
Salary       $39,700   $50,500   $64,690   $83,050   $103,790

5. Benefits/Health Factors

In addition to attractive salary packages, mining engineers receive several allowances, such as travel, overtime, and medical allowances, that cover not only the engineers but also their families (spouse and children) (Daub, 2006).

6. Employment

In the U.S., employment of mining engineers spans from the west coast to the east coast. However, minerals are concentrated in several areas, such as northern Michigan and northern Minnesota for iron, the Southwest for copper, and West Virginia for coal (Daub, 2006). The purpose of jobs in mining engineering is to gather natural resources as raw materials for further processing by other companies into final products or services (Sloan Career Cornerstone Center, 2007).

7. Typical Day

There are two general job types for mining engineers: office and on-site jobs. For office jobs, the working week typically runs Monday to Friday. On-site jobs, however, follow working schedules that differ from one company to another. For example, a company may set a 2:1 rule, meaning two months of full-time work at exploration sites followed by one month of break (Sloan Career Cornerstone Center, 2007).

8. Conclusion

Mining engineering is an attractive occupation since it is often associated with high wages. However, as a rule, higher wages mean higher risks. The underlying reasons for the high wages of mining engineers are that they face high risks and require special skills.

Works Cited

Daub, Travis C. “Mining Engineering.” 2006. Retrieved January 26, 2007 from http://www.graduatingengineer.com/futuredisc/mining.html
Kinross Gold Corporation. “Senior Project Geologist.” 2007. Retrieved January 29, 2007 from http://www.miningusa.com/employ/Kinross/KINROSS12.htm
Minova USA Inc. “Mining Employment – Southwest.” Retrieved January 26, 2007 from http://www.miningusa.com/employ/sw.asp
Sacrison Engineering. “Mining Employment – Southwest.” 2007. Retrieved January 29, 2007 from http://www.miningusa.com/employ/sw.asp
Sloan Career Cornerstone Center. “Mining Engineering Overview.” 2007. Retrieved January 26, 2007 from http://www.careercornerstone.org/pdf/mining/mining.pdf
University of Exeter. “BEng Mining Engineering (UCAS code J110).” 2007. Retrieved January 29, 2007 from http://www.uec.ac.uk/csm/undergraduate-study/mining-engineering/

Porter's Five Forces Analysis of the Aviation Sector Engineering Essay

Barriers to Entry

The aviation sector is one of the most expensive and riskiest sectors due to high capital costs. These costs include buying and leasing aircraft, fuel, safety and security measures, labor, customer service, and more. Rising fuel costs have hurt the airline industry, as they directly increase operating costs. A weakening economy also reduces airline revenues, as demand for air travel and air freight falls and business and leisure travelers become highly price sensitive. The UAE restricts complete foreign ownership of a company: every company must have one or more national partners who together account for at least 51% of its capital. Therefore, high start-up costs and other barriers discourage new competitors from easily entering this industry.

http://www.mlive.com/business/west-michigan/index.ssf/2011/02/rising_jet_fuel_costs_affectin.html

http://www.investopedia.com/features/industryhandbook/airline.asp

Competitive Rivalry

The rivalry in the UAE aviation industry is intense, since over 110 airlines currently fly to more than 160 destinations from Dubai International Airport alone. Since Dubai occupies a very strategic location and is the business capital of the Middle East, the aviation industry showed very strong growth in the last decade. All the carriers constantly struggle to steal market share from each other. Each airline tries to keep its prices competitive and works on lowering its operational costs to increase efficiency and profitability. This has led to saturation in the market, and airlines have to compete to survive in the UAE airline industry.

http://www.dubai.ae/en.portal?vstrs_arrv_dxb,vstrs_arive_air,1,&_nfpb=true&_pageLabel=lifeEventDetail

http://www.zawya.com/story.cfm/sidZAWYA20101229042209/ae/%20carriers%20see%20brighter%20sky%20ahead

Supplier power

Whether one looks at the UAE aviation industry or the world aviation industry, there are only two main aircraft suppliers: Boeing and Airbus. Fly Europe has a fleet of Boeing aircraft, and its supplier, Boeing, has high bargaining power, as there is no cut-throat competition in the supplier industry. The cost of switching from Boeing to Airbus is also extremely high, because all the pilots and mechanics would have to be retrained to operate a different kind of aircraft. However, other suppliers of Fly Europe, such as the providers of on-board snacks for travelers (who are willing to purchase them), do not have high bargaining power, since many other options are available in the market. Fly Europe can purchase its snacks from another cost-effective supplier, enabling customers to buy snacks at reasonable prices.

http://www.investopedia.com/features/industryhandbook/airline.asp

Customer power

Air travel is expensive, so most leisure travelers have highly elastic demand. They can easily compare prices of different airlines over the internet, and they have many options when choosing a carrier. Business travelers pay a much higher average ticket price, approximately five times the average leisure fare. Hence, the bargaining power of business travelers in the aviation sector is quite high, as they generate most of an airline's revenues, while the bargaining power of leisure passengers is low.

www.csus.edu/indiv/h/hany/Teaching/…/Lecture2_han.ppt

http://www.wikinvest.com/concept/Airline_Travel

Availability of substitutes

Air travel is the fastest way to get from one destination to another, so there is no perfect substitute. The other transportation options available to customers are trains, cars, and the like, and the choice of mode depends on the length of the route, consumer preferences, and similar factors; for example, air travel is neither practical nor economical for short distances. High-speed video conferencing is emerging as an important substitute for business air travel, as it provides a convenient, safe, time-saving, and cost-effective way to conduct important meetings. Reducing travel time increases employee productivity and effectiveness and decreases stress and hassle. The 'emergency' air freight market is also being affected, as urgent documents are now sent by e-mail, a cheaper and faster alternative, while sea freight is considered an economical and practical way to send bulky items, which affects the 'routine' air freight market. Therefore, many companies are using these technologies to replace airline travel and freight.

http://www.businesseconomics.in/?p=638

http://www.investopedia.com/features/industryhandbook/airline.asp

Industry description

The aviation industry in the United Arab Emirates plays an essential part in its economy and is one of the fastest growing aviation industries in the world. The General Civil Aviation Authority (GCAA), headquartered in Abu Dhabi, regulates civil aviation in the UAE. Foreign ownership and control of airlines in the UAE is restricted to a 49% equity stake, but in 2009 the UAE signed an air liberalization policy statement with six other countries and the European Commission. The policy principles focus on three main issues: freedom to access capital markets, freedom to do business, and freedom to price services. The UAE's aviation industry earned a profit of $15 billion in 2010. Air traffic movements grew at a rate of 13.8% in November 2010 compared to November 2009. The sector is expected to be the second-fastest-growing aviation market, with a growth rate of 10.2%, by 2013. According to forecasts, the UAE will have 82.3 million air travelers and will be handling 2.7 million tons of cargo by 2014. Dubai has become a major aerospace hub both in the Middle East and globally, as traditional players like the USA and Europe continue to slump.

http://www.arabianbusiness.com/uae-passenger-traffic-hit-82-3m-by-2014-says-iata-380800.html

http://www.uaeinteract.com/docs/UAE_aviation_market_will_be_second-fastest_growing_by_2013/44449.htm

http://www.dancewithshadows.com/flights/dubai-aviation.asp

http://www.scoop.co.nz/stories/BU1012/S00286/10-more-liberalised-aviation-agreements-for-emirates-in-2010.htm

http://www.emirates.com/mv/English/about/public_affairs/liberalisation.aspx

http://www.zawya.com/story.cfm/sidZAWYA20101229042209/ae/%20carriers%20see%20brighter%20sky%20ahead

http://www.gcaa.gov.ae/en/pages/welcomegcaa.aspx

http://english.alrroya.com/content/uae-signs-air-liberalisation-policy-statement-iata

Economic conditions

The United Arab Emirates (UAE) is one of the fastest growing economies in the world. It is the second-largest economy in the Middle East after Saudi Arabia. It is also a major player in world energy markets, as it has the sixth-largest crude oil and natural gas reserves. The UAE is taking extensive measures to reduce its reliance on these natural resources as an income source and is diversifying its economy by investing in growing sectors like trade, finance, aerospace, and tourism. The global financial crisis slowed GDP growth in 2010. UAE authorities responded to the crisis by injecting $33 billion into the local financial sector and guaranteeing all deposits in international and local banks. Dubai was hit very badly by the recession, as its real estate sector experienced a major downturn and it could not meet its debt obligations. The central bank of the UAE then provided support to the local banks, while Dubai received a $10 billion loan from Abu Dhabi to ease the debt crisis. However, the economy is expected to rebound in 2011. The non-hydrocarbon economy is expected to grow 2.2% in 2011, and oil prices, which averaged $79.6/barrel in 2010, are forecast to rise to $90/barrel in 2011. The GDP growth rate is expected to increase from 2.2% in 2010 to 3.3% in 2011. The government also plans to carry out large infrastructure projects, which should result in a gradual recovery of the real estate sector.

http://www.propertyselect.com/dubai/news/what-impact-will-the-global-recession-have-on-dubai-property/1442

http://www.davisiaj.org/?p=210

http://www.uaeinteract.com/docs/UAE_non-oil_economy_to_rebound_in_2011-2012/41839.htm

http://www.english.globalarabnetwork.com/201101018483/Economics/uae-economy-gdp-rebounds-around-33-in-2011.html

https://www.cia.gov/library/publications/the-world-factbook/geos/ae.html

http://en.wikipedia.org/wiki/Economy_of_the_United_Arab_Emirates#cite_note-1

http://www.dfat.gov.au/geo/uae/uae_country_brief.html

http://www.zawya.com/marketing.cfm?zp&p=/countries/ae/macrowatch.cfm?eiusection=Country%20Outlook&cc

Systems Engineering: RTV Silicone Sealant Application System

Abstract

As technology advances seemingly exponentially in the 21st century, the need for ever more complex systems grows too. Continuous improvement is key to a successful, growing business; it envelops everything within the organisation, and engineered systems are no exception. Complex engineered systems require a level of control, and this control is important for producing quality products and services. Given advancing technology and continuous improvement, organisations need to explore ways in which the performance of engineered systems can be maximised. Multi-agent systems (MAS) are a relatively new theory, put into practice when monolithic systems cannot solve the problem; as systems become more and more complex, the need for MAS increases.

Glossary of Terms

FTT – defined as the percentage of engines that pass a process first time.

JPH – the number of engines which pass through a process per hour.

RTV – room temperature vulcanisation silicone sealant.

MAS – Multi-agent systems

1. Introduction

At the Engine Manufacturing Centre (EMC), Jaguar Land Rover (JLR) manufacture and assemble diesel and petrol engines. The author is a process engineer within the diesel assembly hall, whose main job role is to improve any assembly processes that negatively impact first time through (FTT) or jobs per hour (JPH). Currently, the issue that is causing the largest impact on said deliverables is the automatic application of RTV silicone sealant to the engine block to form a seal with the rear cover (Figure 1, below).

This automated process has averaged an FTT of 61% and 59 JPH over the past thirty days. The target JPH across the entire diesel assembly line is 68, which means this process causes an average deficit of nine engines per hour and considerable damage to the achievement of production targets. The process itself is performed by two autonomous robots: one robot applies the sealant while the other holds the engine and moves it along a specified path. There are two HMIs present, one to program each robot. The robot holding the engine can be programmed with its position, its movement within the six degrees of freedom, and its velocity, whereas the only programmable functions for the sealant-applying robot are the start, the end, and the speed and feed of the sealant.

2. Systems Engineering Life Cycle Stages

The role of systems engineering is to ensure the success of a system, judged by how well its requirements and development objectives are met, its operation in the field and the length of its useful operating life. Systems engineering aims to establish a technical approach that will aid the operational maintenance and the eventual upgrading of the system. A system life cycle is a term used to encapsulate the evolution of a new system, where it begins with a concept and grows through development into production, operation and lastly, destruction.

2.1 Concept Development

Where there is a desire for a new system, the concept development stage contains the planning and analysis required to affirm the need, the feasibility and the architecture for the new system to best satisfy the needs of the user.

There are four main objectives of the concept development stage:

  • Decide whether there is a market and need for a technically and economically feasible system.

  • Design and confirm the system requirements after exploring different system concepts (see Figure 3 below). This stage converts the system description derived from the needs analysis into an engineering-oriented view for concept definition and development. When looking at performance requirements, it is important to identify the major functions needed to complete the required actions. In the case of this example, the functional elements should include: power robot, control movement, control speed, and apply RTV. To aid with this activity a systems engineer would use a function category versus functional media diagram (Figure 4).

  • Select a concept, agree on its characteristics, and plan for the forthcoming stages of engineering, production, and operation of the system. This answers the question "what are the key characteristics of a system concept that would achieve the most beneficial balance between capability, operational life, and cost?" [1].
  • Develop and validate any technological developments required by the new system.

2.2 Engineering Development

Figure 4 (below) shows the three stages of engineering development. First, the advanced development stage serves two important purposes: the identification and reduction of risks, and the development of system specifications. Second, the engineering design phase is considerably more detailed than any stage preceding it; usually, this stage offers potential customers an early look at the product, and they can in turn provide valuable feedback to the developers. Lastly, in the integration and evaluation phase the new system is installed and subsequently checked to ensure that it meets customer requirements.

2.3 Post Development

Within the post-development phase there are two sub-phases: the production phase and the operations and support phase. The system is now being produced, for example for a manufacturing environment. Occasionally, unexpected issues arise during the production of the system that require a systems engineer to solve them in order to prevent disruptions to the production schedule. Once the system is live, system support is critical: maintenance personnel should be sufficient until more complex problems arise, at which point they need to call on the experience of systems engineers.

3. Function Block Diagram

4. Control Architecture

4.1 Centralised Control

The centralised control system architecture has one component designated as the controller, which is responsible for managing the execution of the other components. The term architecture is used to suggest a focus on the relationships between the major structural elements in a system. This architecture falls into two classes, depending on whether the controlled components execute sequentially or in parallel: the call-return model, applicable only in sequential systems, and the manager model, used in concurrent systems [3].

The main reasons to use a centralised control architecture are that it is simple to conceive and that, due to its omniscience, it can make optimal decisions which take all factors into account. However, this architecture does have drawbacks, most notably the expense required to create it: the control algorithm needs to be very complex. Furthermore, the degradation of any signal path can cripple the function of the entire system, so centralised systems can be fragile.

4.2 Hierarchical Control

Organised in a hierarchical tree, this control system decomposes the problem and allocates it to separate controllers which take control of a subset of the system functions. This can exist over a number of levels, meaning each function could be controlled individually. Optimal control is still possible within a hierarchical architecture as there is always a path to a top-level node; however, not all information can travel through every path. Commonly some filtering of data occurs between levels.

In contrast to centralised control, the control algorithm is much simpler due to decomposition. This means the time and cost of implementation are much lower. Between the different branches of the structure there is a degree of independence, reducing the effect of system degradation. However, there is usually a delay in the processing of each algorithm and in the feedback loop.

4.3 Heterarchical Control

Heterarchical control architecture is more robust than hierarchical control and is very flexible and extensible. Additional system functions, such as manufacturing processes and equipment, can be added with almost no added system control cost. However, heterarchical architecture lacks centralised visibility of the system as a whole, which means planning can be sub-optimal; this control system is sometimes referred to as short-sighted, though this does mean that short-term decision-making is very good [4].

4.4 RTV Robot Cell Control System

Centralised control is not suitable for the RTV robot cell system because it is too expensive to create and change. Additionally, the fault tolerance of the control system must be taken into account: a manufacturing line with such high demand for machine availability must not be crippled by the loss of just one signal.

The most suitable control system architecture for this system, and as it happens the current one, is hierarchical. The main downside of this architecture is its response time when there are many levels. However, it combines the strengths of the other two control architectures discussed, albeit slightly diluted. Heterarchical control has strengths that would be excellent for an automated cell in a manufacturing environment, but its weaknesses make it unacceptable here. If one could combine hierarchical and heterarchical architectures and remove the myopic nature of heterarchy, the result could be a system which improves how automated cells are controlled.

 

5. Multi-Agent Systems

A multi-agent system is a system composed of multiple interacting intelligent agents. For problems that are too difficult or even impossible for an individual agent to solve, multi-agent systems can be used. Commonly thought of as being computerised, the agents within a multi-agent system could also be robots, humans, human teams or a combination of humans and robots. There are three different types of agents:

  • Passive agents, agents without goals.
  • Active agents, agents with simple goals.
  • Cognitive agents, agents containing complex calculations.


Agents can also be reactive or deliberative; this can be represented by the BDI model (Figure 6, below).

BDI stands for Belief, Desire, Intention, where belief is knowledge of the environment, desire is the need to satisfy an objective, and intention is the ability to command action(s).

Deliberative agents extend the BDI model to include a symbolic model of the external environment (including data and relationships), memory, the ability to plan, and the ability to choose between alternative actions.

One could make a case for incorporating multi-agent systems within an automated robot cell at the present time. There is a need for configurability, for example when a new derivative of engine is introduced and the robot has to be reprogrammed to function differently. The system also needs robustness, so that if one agent is lost it does not compromise the whole system. However, a hierarchical architecture provides a sufficient degree of configurability and robustness with less cost and complexity. Multi-agent systems provide dynamic task allocation rather than pre-planned schedules; for an automated robot cell this is not needed, as automation needs efficiency in static conditions [5].

6. Conclusion

The automated RTV application robot cell is currently in the operational phase of the system life cycle. It is in need of improvement, yet the system itself is not what needs improving: it works as it is meant to. The problem is that the wrong system is in place for the task, and that is what is causing the problems.

As automotive technology moves towards electrification and autonomous behaviour, there will be a need to include more and more multi-agent systems, within the vehicles themselves but also within manufacturing systems. There will be a need for greater flexibility, adaptability, reconfigurability, and collaboration. In this instance, however, incorporating a multi-agent system would not have a positive impact on the system.

7. References

[1] Kossiakoff, A., Sweet, W., Seymour, S. and Biemer, S. (2011). "System Life Cycle." In: Systems Engineering Principles and Practice. 2nd ed. New Jersey: John Wiley & Sons, Inc., p. 77.

[2] Lecture-provided PowerPoint slides.

[3] Sommerville, I. (2008). "Centralized Control." Available: https://ifs.host.cs.st-andrews.ac.uk/Books/SE9/Web/Architecture/ArchPatterns/CentralControl.html. Last accessed 14/02/17.

[4] van de Mortel-Fronczak, J.M. and Rooda, J.E. (1997). "Heterarchical Control Systems for Production Cells." 1(1), 213-217.

[5] Wikipedia. "Multi-agent system." Available: https://en.wikipedia.org/wiki/Multi-agent_system. Last accessed 20/02/17.

Numerical Differential Equation Analysis Package

The Numerical Differential Equation Analysis package combines functionality for analyzing differential equations using Butcher trees, Gaussian quadrature, and Newton-Cotes quadrature.

Butcher

Runge-Kutta methods are useful for numerically solving certain types of ordinary differential equations. Deriving high-order Runge-Kutta methods is no easy task, however. There are several reasons for this. The first difficulty is in finding the so-called order conditions. These are nonlinear equations in the coefficients for the method that must be satisfied to make the error in the method of order O(h^n) for some integer n, where h is the step size. The second difficulty is in solving these equations. Besides being nonlinear, there is generally no unique solution, and many heuristics and simplifying assumptions are usually made. Finally, there is the problem of combinatorial explosion. For a twelfth-order method there are 7813 order conditions!

This package performs the first task: finding the order conditions that must be satisfied. The result is expressed in terms of unknown coefficients aij, bj, and ci. The s-stage Runge-Kutta method to advance from x to x+h is then

Y(x+h) = Y(x) + h ∑(j=1..s) bj f(Yj(x+h))

where

Yi(x+h) = Y(x) + h ∑(j=1..s) aij f(Yj(x+h)),   i = 1, …, s.

Sums of the elements in the rows of the matrix [aij] occur repeatedly in the conditions imposed on aij and bj. In recognition of this, and as a notational convenience, it is usual to introduce the coefficients ci and the definition

ci = ∑(j=1..s) aij.

This definition is referred to as the row-sum condition and is the first in a sequence of row-simplifying conditions.

If aij=0 for all i≤j, the method is explicit; that is, each of the Yi(x+h) is defined in terms of previously computed values. If the matrix [aij] is not strictly lower triangular, the method is implicit and requires the solution of a (generally nonlinear) system of equations for each timestep. A diagonally implicit method has aij=0 for all i<j; each stage equation then involves the current stage value but no later ones.

There are several ways to express the order conditions. If the number of stages s is specified as a positive integer, the order conditions are expressed in terms of sums of explicit terms. If the number of stages is specified as a symbol, the order conditions will involve symbolic sums. If the number of stages is not specified at all, the order conditions will be expressed in stage-independent tensor notation. In addition to the matrix a and the vectors b and c, this notation involves the vector e, which is composed of all ones. This notation has two distinct advantages: it is independent of the number of stages s and it is independent of the particular Runge-Kutta method.

For further details of the theory see the references.

ai,j

the coefficient of f(Yj(x)) in the formula for Yi(x) of the method

bj

the coefficient of f(Yj(x)) in the formula for Y(x) of the method

ci

a notational convenience for the row sum ∑(j=1..s) aij

e

a notational convenience for the vector (1, 1, 1, …)

Notation used by functions for Butcher.

RungeKuttaOrderConditions[p,s]

give a list of the order conditions that any s-stage Runge-Kutta method of order p must satisfy

ButcherPrincipalError[p,s]

give a list of the order p+1 terms appearing in the Taylor series expansion of the error for an order-p, s-stage Runge-Kutta method

RungeKuttaOrderConditions[p], ButcherPrincipalError[p]

give the result in stage-independent tensor notation

Functions associated with the order conditions of Runge-Kutta methods.

ButcherRowSum

specify whether the row-sum conditions for the ci should be explicitly included in the list of order conditions

ButcherSimplify

specify whether to apply Butcher’s row and column simplifying assumptions

Some options for RungeKuttaOrderConditions.
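
In the examples that follow, the package is assumed to have been loaded first. The input cells shown with each example are plausible sketches based on the signatures documented above, not verbatim originals.

Needs["NumericalDifferentialEquationAnalysis`"]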

This gives the number of order conditions for each order up through order 10. Notice the combinatorial explosion.
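
A plausible sketch of the input, assuming the conditions are returned grouped by order, with the stage count left as the symbol s:

(* count the order conditions at each order, 1 through 10 *)
Length /@ RungeKuttaOrderConditions[10, s]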

This gives the order conditions that must be satisfied by any first-order, 3-stage Runge-Kutta method, explicitly including the row-sum conditions.
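
A plausible sketch, using the ButcherRowSum option documented above:

(* order 1, 3 stages, with the row-sum conditions included *)
RungeKuttaOrderConditions[1, 3, ButcherRowSum -> True]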

These are the order conditions that must be satisfied by any second-order, 3-stage Runge-Kutta method. Here the row-sum conditions are not included.
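
A plausible sketch, assuming the row-sum conditions are excluded by default:

(* order 2, 3 stages *)
RungeKuttaOrderConditions[2, 3]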

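A sketch of the corresponding call; mathematically these are the quadrature equations of orders 1 and 2:

  RungeKuttaOrderConditions[2, 3]
  (* {{b1 + b2 + b3 == 1}, {b1 c1 + b2 c2 + b3 c3 == 1/2}} *)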

It should be noted that the sums involved on the left-hand sides of the order conditions will be left in symbolic form and not expanded if the number of stages is left as a symbolic argument. This will greatly simplify the results for high-order, many-stage methods. An even more compact form results if you do not specify the number of stages at all and the answer is given in tensor form.

These are the order conditions that must be satisfied by any second-order, s-stage method.

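A sketch with a symbolic stage count; the exact symbolic-sum formatting shown in the comment is an assumption:

  RungeKuttaOrderConditions[2, s]
  (* {{Sum[b[i], {i, s}] == 1}, {Sum[b[i] c[i], {i, s}] == 1/2}} *)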

Replacing s by 3 gives the same result as RungeKuttaOrderConditions.


These are the order conditions that must be satisfied by any second-order method. This uses tensor notation. The vector e is a vector of ones whose length is the number of stages.

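A sketch of the stage-independent call; in tensor notation the order-2 conditions become dot products with the vector e of ones:

  RungeKuttaOrderConditions[2]
  (* {{b . e == 1}, {b . c == 1/2}} *)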

The tensor notation can likewise be expanded to give the conditions in full.


These are the principal error coefficients for any third-order method.

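A sketch of the call; for a third-order method the principal error terms are the residuals of the four order-4 conditions, whose right-hand sides are the reciprocal tree densities 1/4, 1/8, 1/12 and 1/24:

  ButcherPrincipalError[3]
  (* four order-4 terms, e.g. b . c^3 - 1/4 and b . a . a . c - 1/24 in tensor form *)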

This is a bound on the local error of any third-order method in the limit as h approaches 0, normalized to eliminate the effects of the ODE.


Here are the order conditions that must be satisfied by any fourth-order, 1-stage Runge-Kutta method. Note that there is no possible way for these order conditions to be satisfied; there need to be more stages (the second argument must be larger) for there to be sufficiently many unknowns to satisfy all of the conditions.

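A sketch of the call; the comment traces the contradiction:

  RungeKuttaOrderConditions[4, 1]
  (* b1 == 1 and b1 c1 == 1/2 force c1 == 1/2, but then b1 c1^2 == 1/4, not 1/3 *)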

 

RungeKuttaMethod

specify the type of Runge-Kutta method for which order conditions are being sought

Explicit

a setting for the option RungeKuttaMethod specifying that the order conditions are to be for an explicit Runge-Kutta method

DiagonallyImplicit

a setting for the option RungeKuttaMethod specifying that the order conditions are to be for a diagonally implicit Runge-Kutta method

Implicit

a setting for the option RungeKuttaMethod specifying that the order conditions are to be for an implicit Runge-Kutta method

$RungeKuttaMethod

a global variable whose value can be set to Explicit, DiagonallyImplicit, or Implicit

Controlling the type of Runge-Kutta method in RungeKuttaOrderConditions and related functions.

RungeKuttaOrderConditions and certain related functions have the option RungeKuttaMethod with default setting $RungeKuttaMethod. Normally you will want to determine the Runge-Kutta method being considered by setting $RungeKuttaMethod to one of Implicit, DiagonallyImplicit, or Explicit, but you can also give the option explicitly, or even change the default, for an individual function.

These are the order conditions that must be satisfied by any second-order, 3-stage diagonally implicit Runge-Kutta method.

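A sketch using the option form; with this setting the coefficients aij with j > i drop out of the conditions:

  RungeKuttaOrderConditions[2, 3, RungeKuttaMethod -> DiagonallyImplicit]
  (* the order-1 and order-2 conditions, with only aij for j <= i present *)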

An alternative (but less efficient) way to get a diagonally implicit method is to force a to be lower triangular by replacing upper-triangular elements with 0.


These are the order conditions that must be satisfied by any third-order, 2-stage explicit Runge-Kutta method. The contradiction in the order conditions indicates that no such method is possible, a result which holds for any explicit Runge-Kutta method when the number of stages is less than the order.

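A sketch of the call; since c1 = 0 for an explicit method, the order-3 condition corresponding to b . a . c reduces to 0 == 1/6:

  RungeKuttaOrderConditions[3, 2, RungeKuttaMethod -> Explicit]
  (* one third-order condition becomes 0 == 1/6, so no such method exists *)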

 

ButcherColumnConditions[p,s]

give the column simplifying conditions up to and including order p for s stages

ButcherRowConditions[p,s]

give the row simplifying conditions up to and including order p for s stages

ButcherQuadratureConditions[p,s]

give the quadrature conditions up to and including order p for s stages

ButcherColumnConditions[p], ButcherRowConditions[p], etc.

give the result in stage-independent tensor notation

More functions associated with the order conditions of Runge-Kutta methods.

Butcher showed that the number and complexity of the order conditions can be reduced considerably at high orders by the adoption of so-called simplifying assumptions. For example, this reduction can be accomplished by adopting sufficient row and column simplifying assumptions and quadrature-type order conditions. The option ButcherSimplify in RungeKuttaOrderConditions can be used to determine these automatically.

These are the column simplifying conditions up to order 4.


These are the row simplifying conditions up to order 4.


These are the quadrature conditions up to order 4.

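Sketches of the three calls in stage-independent form; the quadrature conditions are the tensor equations b . c^(k-1) == 1/k, with powers taken componentwise:

  ButcherColumnConditions[4]
  ButcherRowConditions[4]
  ButcherQuadratureConditions[4]
  (* quadrature conditions: {{b . e == 1}, {b . c == 1/2}, {b . c^2 == 1/3}, {b . c^3 == 1/4}} *)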

Trees are fundamental objects in Butcher’s formalism. They yield both the derivative in a power series expansion of a Runge-Kutta method and the related order constraint on the coefficients. This package provides a number of functions related to Butcher trees.

f

the elementary symbol used in the representation of Butcher trees

ButcherTrees[p]

give a list, partitioned by order, of the trees for any Runge-Kutta method of order p

ButcherTreeSimplify[p,Eta,Xi]

give the set of trees through order p that are not reduced by Butcher's simplifying assumptions, assuming that the quadrature conditions through order p, the row simplifying conditions through order Eta, and the column simplifying conditions through order Xi all hold. The result is grouped by order, starting with the first nonvanishing trees

ButcherTreeCount[p]

give a list of the number of trees through order p

ButcherTreeQ[tree]

give True if the tree or list of trees tree is valid functional syntax, and False otherwise

Constructing and enumerating Butcher trees.

This gives the trees that are needed for any third-order method. The trees are represented in a functional form in terms of the elementary symbol f.

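A sketch of the call; the grouping and the exact functional forms shown in the comment are an assumption based on the notation described above:

  ButcherTrees[3]
  (* {{f}, {f[f]}, {f[f^2], f[f[f]]}} *)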

This tests the validity of the syntax of two trees. Butcher trees must be constructed using multiplication, exponentiation or application of the function f.


This evaluates the number of trees at each order through order 10. The result is equivalent to the order-condition counts computed earlier, but the calculation is much more efficient since it does not actually involve constructing order conditions or trees.

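A sketch of the call; the entries are the numbers of rooted trees with 1 through 10 vertices:

  ButcherTreeCount[10]
  (* {1, 1, 2, 4, 9, 20, 48, 115, 286, 719} *)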

The previous result can be used to calculate the total number of trees required at each order through order 10.

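A sketch of the accumulation of the per-order counts:

  Accumulate[ButcherTreeCount[10]]
  (* {1, 2, 4, 8, 17, 37, 85, 200, 486, 1205} *)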

The number of constraints for a method using row and column simplifying assumptions depends upon the number of stages. ButcherTreeSimplify gives the Butcher trees that are not reduced assuming that these assumptions hold.

This gives the additional trees that are necessary for a fourth-order method assuming that the quadrature conditions through order 4 and the row and column simplifying assumptions of order 1 hold. The result is a single tree of order 4 (which corresponds to a single fourth-order condition).

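A sketch of the call described above:

  ButcherTreeSimplify[4, 1, 1]
  (* the single unreduced tree of order 4, grouped by order *)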

It is often useful to be able to visualize a tree or forest of trees graphically. For example, depicting trees yields insight, which can in turn be used to aid in the construction of Runge-Kutta methods.

ButcherPlot[tree]

give a plot of the tree tree

ButcherPlot[{tree1,tree2,…}]

give an array of plots of the trees in the forest {tree1, tree2,…}

Drawing Butcher trees.

ButcherPlotColumns

specify the number of columns in the GraphicsGrid plot of a list of trees

ButcherPlotLabel

specify a list of plot labels to be used to label the trees in the plot

ButcherPlotNodeSize

specify a scaling factor for the nodes of the trees in the plot

ButcherPlotRootSize

specify a scaling factor for the highlighting of the root of each tree in the plot; a zero value does not highlight roots

Options to ButcherPlot.

This plots and labels the trees through order 4.


In addition to generating and drawing Butcher trees, many functions are provided for measuring and manipulating them. For a complete description of the importance of these functions, see Butcher.

ButcherHeight[tree]

give the height of the tree tree

ButcherWidth[tree]

give the width of the tree tree

ButcherOrder[tree]

give the order, or number of vertices, of the tree tree

ButcherAlpha[tree]

give the number of ways of labeling the vertices of the tree tree with a totally ordered set of labels such that if (m, n) is an edge, then m < n

ButcherBeta[tree]

give the number of ways of labeling the tree tree with ButcherOrder[tree]-1 distinct labels such that the root is not labeled, but every other vertex is labeled

ButcherBeta[n,tree]

give the number of ways of labeling n of the vertices of the tree with n distinct labels such that every leaf is labeled and the root is not labeled

ButcherBetaBar[tree]

give the number of ways of labeling the tree tree with ButcherOrder[tree] distinct labels such that every node, including the root, is labeled

ButcherBetaBar[n,tree]

give the number of ways of labeling n of the vertices of the tree with n distinct labels such that every leaf is labeled

ButcherGamma[tree]

give the density of the tree tree; the reciprocal of the density is the right-hand side of the order condition imposed by tree

ButcherPhi[tree,s]

give the weight of the tree tree; the weight Φ(tree) is the left-hand side of the order condition imposed by tree

ButcherPhi[tree]

give Φ(tree) using tensor notation

ButcherSigma[tree]

give the order of the symmetry group of isomorphisms of the tree tree with itself

Other functions associated with Butcher trees.

This gives the order of the tree f[f[f[f] f^2]].

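A sketch; the order is the vertex count of f[f[f[f] f^2]] (a root, an inner node, a two-vertex chain and two leaves):

  ButcherOrder[f[f[f[f] f^2]]]
  (* 6 *)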

This gives the density of the tree f[f[f[f] f^2]].

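A sketch; the density is the product of the subtree sizes at each vertex, here 6 × 5 × 2 × 1 × 1 × 1:

  ButcherGamma[f[f[f[f] f^2]]]
  (* 60 *)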

This gives the elementary weight function imposed by f[f[f[f] f^2]] for an s-stage method.


The subscript notation is a formatting device and the subscripts are really just the indexed variable NumericalDifferentialEquationAnalysis`Private`$i.


It is also possible to obtain solutions to the order conditions using Solve and related functions. Many issues related to the construction of Runge-Kutta methods using this package can be found in Sofroniou. The article also contains details concerning the algorithms used in Butcher.m and discusses applications.
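
For instance, for a two-stage explicit method the conditions of order 2 are b1 + b2 == 1 and b2 c2 == 1/2 (with c1 = 0 and c2 = a21); a minimal sketch of solving them, written with plain symbols rather than the package's formatted coefficients:

  Solve[{b1 + b2 == 1, b2 c2 == 1/2}, {b1, b2}]
  (* a one-parameter family: b1 -> 1 - 1/(2 c2), b2 -> 1/(2 c2) *)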

Gaussian Quadrature

As one of its methods, the Mathematica function NIntegrate uses a fairly sophisticated Gauss-Kronrod-based algorithm. The Gaussian quadrature functionality provided in Numerical Differential Equation Analysis allows you to easily study some of the theory behind ordinary Gaussian quadrature, which is a little less sophisticated.

The basic idea behind Gaussian quadrature is to approximate the value of an integral as a linear combination of values of the integrand evaluated at specific points:

∫[a, b] f(x) dx ≈ w1 f(x1) + w2 f(x2) + … + wn f(xn)

Since there are 2n free parameters to be chosen (both the abscissas xi and the weights wi) and since both integration and the sum are linear operations, you can expect to be able to make the formula correct for all polynomials of degree less than about 2n. In addition to knowing what the optimal abscissas and weights are, it is often desirable to know how large the error in the approximation will be. This package allows you to answer both of these questions.

GaussianQuadratureWeights[n,a,b]

give a list of the pairs (xi, wi) to machine precision for quadrature on the interval a to b

GaussianQuadratureError[n,f,a,b]

give the error to machine precision

GaussianQuadratureWeights[n,a,b,prec]

give a list of the pairs (xi, wi) to precision prec

GaussianQuadratureError[n,f,a,b,prec]

give the error to precision prec

Finding formulas for Gaussian quadrature.

This gives the abscissas and weights for the five-point Gaussian quadrature formula on the interval (-3, 7).

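A sketch of the call, following the signature in the table above:

  GaussianQuadratureWeights[5, -3, 7]
  (* five {abscissa, weight} pairs, to machine precision, on (-3, 7) *)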

Here is the error in that formula. Unfortunately it involves the tenth derivative of f at an unknown point, so you don't really know what the error itself is.

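A sketch of the corresponding error query:

  GaussianQuadratureError[5, f, -3, 7]
  (* involves Derivative[10][f] evaluated at an unknown point in (-3, 7) *)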

You can see that the error decreases rapidly with the length of the interval.

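A sketch of the comparison; for an n-point rule the error coefficient scales as the interval length raised to the power 2n+1, so shrinking the interval from (-3, 7) to (-1, 1) reduces it dramatically:

  GaussianQuadratureError[5, f, -1, 1]
  (* the same form of error term, with a much smaller coefficient *)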

Newton-Cotes

As one of its methods, the Mathematica function NIntegrate uses a fairly sophisticated Gauss-Kronrod-based algorithm. Other types of quadrature formulas exist, each with their own advantages. For example, Gaussian quadrature uses values of the integrand at oddly spaced abscissas: if you want to integrate a function presented in tabular form at equally spaced abscissas, it won't work very well. An alternative is to use Newton-Cotes quadrature.

The basic idea behind Newton-Cotes quadrature is to approximate the value of an integral as a linear combination of values of the integrand evaluated at equally spaced points:

∫[a, b] f(x) dx ≈ w1 f(x1) + w2 f(x2) + … + wn f(xn),  with equally spaced abscissas xi

In addition, there is the question of whether or not to include the end points in the sum. If they are included, the quadrature formula is referred to as a closed formula. If not, it is an open formula. If the formula is open there is some ambiguity as to where the first abscissa is to be placed. The open formulas given in this package have the first abscissa one half step from the lower end point.

Since there are n free parameters to be chosen (the weights) and since both integration and the sum are linear operations, you can expect to be able to make the formula correct for all polynomials of degree less than about n. In addition to knowing what the weights are, it is often desirable to know how large the error in the approximation will be; this package allows you to answer that question as well.
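
A minimal sketch of the Newton-Cotes analogues, assuming the package provides NewtonCotesWeights and NewtonCotesError mirroring the Gaussian functions, with a QuadratureType option taking Closed or Open (these names are assumptions here):

  NewtonCotesWeights[5, -3, 7]                          (* closed rule: end points included *)
  NewtonCotesWeights[5, -3, 7, QuadratureType -> Open]  (* open rule: first abscissa half a step in *)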

3D Technology: Types and Uses

CHAPTER 1: INTRODUCTION

This report will focus on how different 3D technologies work. It will cover the entire workflow: recording the action, encoding the footage, playing back the media via a cinema projector or television, and finally how the audience views the 3D film or video, whether through specially designed glasses or an auto-stereoscopic television.

At present the most popular way to view 3D media is with the use of specialised glasses, the main types being active shutter glasses, passive polarised glasses and colour separation-based glasses.

Wearing glasses to watch a movie is often mentioned as a negative aspect of 3D. There is a technology available, called autostereoscopy, that allows you to watch 3D on screens without wearing any additional glasses; this will also be looked at.

The health impacts that result from watching 3D will also be examined, along with factors that will prevent a person from being able to correctly view 3D images.

If 3D films become the norm there will be impacts on the entire industry, from studios and cinemas to smaller production companies and independent producers, and these will be examined.

A good place to start this report is to examine how two of the highest-profile media companies currently view 3D technology.

Phil McNally, stereoscopic supervisor at Disney-3D and DreamWorks, was quoted as saying,

‘…consider that all technical progress in the cinema industry brought us closer to the ultimate entertainment experience: the dream. We dream in colour, with sound, in an incoherent world with no time reference. The cinema offers us a chance to dream awake for an hour. And because we dream in 3D, we ultimately want the cinema to be a 3D experience not a flat one.'(Mendiburu, 2009)

In the BBC Research White Paper: The Challenges of Three-Dimensional Television, 3D technology is referred to as

‘…a continuing long-term evolution of television standards towards a means of recording, transmitting and displaying images that are indistinguishable from reality'(Armstrong, Salmon, & Jolly, 2009)

It is clear from both of these high-profile sources that the industry is taking the evolution of 3D very seriously; as a result, this is a topic that is not only very interesting but will also be at the cutting edge of technological advances for the next couple of years.

This report will cover the following:

  • What the term 3D means with reference to film and video
  • A look at the history of 3D in film
  • How 3D technology works
  • The implications of 3D for the film business and for cinemas
  • The methods used to create the media and the ways in which the 3D image is recreated for the viewer

The reason I have chosen to do my project on this topic is that I am very interested in the new media field. 3D video, when accompanied by high definition film and video, is a field that is growing rapidly. On 02 April 2009, Sky broadcast the UK's first live event in the 3D TV format: a live music concert by the pop group Keane, sent via the company's satellite network using polarisation technology.

Traditionally we view films and television in two dimensions; in essence, we view the media as a flat image. In real life we view everything in three dimensions, because a slightly different image is received in each eye; the brain then combines these, so we can work out depth of vision and create a 3D image (this will be explained further in Chapter 2).

There is a high level of industrial relevance to this topic, as 3D technology coupled with a high definition digital signal is at the cutting edge of mainstream digital media consumption. Further evidence of this is that the sports broadcaster ESPN will be launching its new TV channel, ESPN 3D, in North America in time for this year's Football World Cup.

In January 2009 the BBC produced a Research White Paper on this subject entitled The Challenges of Three-Dimensional Television, and they predict that over the next couple of years 3D will start to be introduced in the same way that the HD (High Definition) digital television signal is currently being phased in, with pay-per-view movies and sports being the first to take advantage of it.

Sky have announced that their existing Sky+HD boxes will be able to receive the 3D signals, so customers will not even need to update their equipment to receive the 3D channel that Sky will start broadcasting later this year.

On Sunday January 31st 2010, Sky broadcast a live Premier League football match between Arsenal and Manchester United for the first time in 3D to selected pubs across the country. Sky equipped the selected pubs with LG's new 47-inch LD920 3D TVs. These televisions use passive glasses, similar to the ones used in cinemas, as opposed to the more expensive active glasses, which are also an option. (The differences between active and passive technologies will be explained in Chapter 8.)

It is also worth noting that at the 2010 Golden Globe awards, on accepting the award for Best Picture for the 3D box-office hit Avatar, the Canadian director James Cameron pronounced 3D 'the future'.

At the time of writing this report (27/01/2010), the 3D film Avatar has just overtaken Titanic (also a James Cameron film) to become the highest grossing movie of all time, with worldwide takings of $1.859 billion. This is being credited to the film's outstanding takings in the 3D version of its release: in America 80% of the film's box-office revenue has come from the 3D version.

In an industry where 'money talks', these figures will surely lead to a dramatic increase in the production of 3D films, and as a result Avatar could potentially be one of the most influential films of all time.

After completing this dissertation I hope to have a wide knowledge base on the subject, which will hopefully appeal to the companies that I approach about employment once I have graduated.

In the summer of 2010, when I will be looking for jobs, I believe that a lot of production companies will have some knowledge of 3D technology and will be aware that in the near future it may be something they have to consider adopting, in the way that many production companies are already adopting, or soon will adopt, HD into their workflow.

In order to ensure that I complete this project to a high standard it is important that I gain a complete understanding of the topic and study a variety of different sources when compiling my research.

3D media itself is not a new concept, so there is a wide range of books and articles on the theory of 3D and stereoscopy, along with anaglyphs.

However, in recent years there has been a resurgence of 3D in film and TV, due mainly to digital video and film production making it easier and cheaper to create and manage the two channels needed for three-dimensional video production.

It has proved more difficult to study books and papers on this most recent resurgence of 3D because it is still happening and evolving all the time. I have read various research white papers on the subject, which are cited in the Bibliography, and I have also used websites and blogs along with some recently published books; one of the problems with such a fast-moving technological field as 3D, though, is that these books quickly become outdated.

CHAPTER 2: HUMAN VISION

In the real world we see in three dimensions, as opposed to the two dimensions we have become accustomed to when watching TV or at the cinema. Human vision appears in three dimensions because people normally have two eyes that both focus on an object; in the brain these two images are fused into one, and from this we can work out depth of vision. This process is called stereopsis. All of these calculations happen in the brain without the person ever noticing, and as a result we see the world in three dimensions very naturally.

The reason that we see in 3D is stereoscopic depth perception. Various complex calculations go on in our brains; these, coupled with real experience, allow the brain to work out depth of vision. If it wasn't for this, it would be impossible to tell whether something was very small or just very far away.

As humans, we have learnt to judge depth even with only one viewpoint. This is why a person with one eye can still manage to do most things that a person with two eyes can do. It is also why, when watching a 2D film, you can still get a good sense of depth.

The term for depth cues based on only one viewpoint is monoscopic depth cues.

One of the most important of these is our own experience; it relates to perspective and the relative size of objects. In simple terms, we have become accustomed to objects being certain sizes. For example, we expect buildings to be very big, humans to be smaller and insects smaller still. This means that if we can see all three of these objects next to each other and they appear to be the same size, then the insect must be much closer than the person, and both the insect and the person must be much closer than the building (see figure 1).

The perspective depth cue (shown in figure 1) was backed up by an experiment carried out by Ittelson in 1951. He got volunteers to look through a peephole at some playing cards; the only things they could see were the cards, so there were no other types of depth cue available. 'There were actually three different-sized playing cards (normal size, half-size, and double size), and they were presented one at a time at a distance of 2.3 metres away. The half-sized playing card was judged to be 4.6 metres away from the observer, whereas the double-sized card was thought to be 1.3 metres away. Thus, familiar size had a large effect on distance judgement' (Eysenck, 2002).

Another very effective monoscopic depth cue is referred to as occlusion or interposition. This is where an object overlaps another object. If a person is standing behind a tree, you will be able to see all of the tree but only part of the person. This tells us that the tree is nearer to us than the person.

One of the most important single-view depth cues is called motion parallax. It works on the basis that if a person moves their head, and therefore their eyes, then objects nearer to them, whilst not physically moving, will appear to move more than objects in the distance. This is the method astronomers use to measure the distances of stars and planets. It is an extremely important method of judging depth and is used extensively in 3D filmmaking.

In filmmaking, lighting is often talked about as one of the key elements in giving the picture 'depth', and this is because it is a monoscopic depth cue. In real life the main light source has for millennia been the sun, and humans have worked out how to judge depth based on the shadows cast by an object. In 2D films shadows are often used to convey depth: casting them across actors' faces allows the viewers to see the recesses and expressions being portrayed.

So far all of the methods described for determining depth have been monoscopic; they work independently within each eye. If these were the only methods for determining depth there would be no need for 3D films, as they would add nothing: all of these methods could be recreated using a single camera lens. This is not the case, however; many of the more advanced methods human vision uses for judging depth need both eyes. These are called stereoscopic depth cues.

Many stereoscopic depth cues are based on the feedback the brain gets when the muscles in the eye are manipulated to concentrate your vision on a particular point.

One of the main stereoscopic depth cues is called convergence; this refers to the way the eyes rotate in order to focus on an object (see figure 2).

If the focus is on a near object, the eyes rotate around the Y axis and converge at a tighter angle; similarly, if the focus is on a distant object the rotation gives the eyes a wider angle of convergence.

It is much less stressful on the muscles of the eye to have a wide angle of convergence and look at objects far away; by comparison, looking at a very close object for any length of time causes the muscles in the eye to ache. This is a very important factor to consider when creating 3D films: it doesn't matter how good the film is, if it hurts the audience it will not go down well.

A second stereoscopic depth cue we use is called accommodation; this is the way our eyes change focus when we look at objects at different distances, and it is very closely linked with convergence.

Usually when we look at an object very close up, our eyes change rotation and point towards the object (convergence), allowing us to look at the item, and at the same time they change focus (accommodation). Using the ciliary body muscles in the eye, the lens changes shape, in the same way a camera lens does, thus changing focus.

In everyday life convergence and accommodation usually happen in parallel. The fact that we can, if we wish, converge our eyes without changing focus is what makes 3D films possible. When you are sitting in the cinema all of the action is projected onto the screen in front of you, so this is where your eyes need to focus. With 2D films the screen is also where your eyes need to converge, but with 3D films this is not the case. When watching a 3D film the focus never changes from the screen, or else the whole picture would go out of focus, but objects appear to be in front of and behind the screen, so your eyes need to change their convergence to look at these objects without altering their focus from the screen.

It has been suggested that this decoupling of accommodation and convergence is the reason for eye strain when watching a 3D picture, as your eyes are doing something they are not in the habit of doing (see chapter 12: Is 3D bad for you).

It is also worth noting that while our monoscopic depth cues work at almost any range, this is not the case with stereoscopic depth cues. As objects become further away they no longer appear differently in each eye, so there is no way the brain can calculate a difference and work out depth.

'The limit occurs in the 100 to 200-yard range, as our discernment asymptotically tends to zero. In a theatre, we will hit the same limitation, and this will define the "depth resolution" and the "depth range" of the screen'. (Mendiburu, 2009)

This means that when producing a 3D film you have to be aware that the range of 3D you have to work with is not infinite but limited to roughly 100-200 yards.

CHAPTER 3: Early Stereoscopic History (1838 – 1920)

Three-dimensional films are not a new phenomenon: 'Charles Wheatstone discovered, in 1838, that the mechanism responsible for human depth perception is the distance separating the retinas of our eyes.' (Autodesk, 2008)

In a 12,000-word research paper presented to the Royal Society of Great Britain, Wheatstone described 'the stereoscope and claimed as a new fact in his theory of vision the observation that two different pictures are projected on the retinas of the eyes when a single object is seen' (Zone, 2007).

Included in the paper was a range of line drawings presented as stereoscopic pairs; these were designed to be viewed in 3D using Wheatstone's invention, the stereoscope.

Wheatstone was not the first person to consider the possibility of each eye receiving a separate view: 'In the third century B.C., Euclid in his treatise on Optics observed that the left and right eyes see slightly different views of a sphere' (Zone, 2007). However, Wheatstone was the first person to create a device able to re-create 3D images.

Between 1835 and 1839 photography was starting to be developed, thanks to work by William Fox Talbot, Nicéphore Niépce and Louis Daguerre.

Once Wheatstone became aware of the photographic pictures that were available, he requested that some stereoscopic photographs be made for him. Wheatstone observed that 'it has been found advantageous to employ, simultaneously, two cameras fixed at the proper angular positions' (Zone, 2007).

This was the start of stereoscopic photography.

Between 1850 and 1860 work was being done by various people to try to combine stereoscopic photography with machines that would display a series of images very quickly, using persistence of vision to create a moving 3D image. These were the first glimpses of 3D motion.

In 1891 a French scientist, Louis Ducos du Hauron, patented the anaglyph: a method of separating an image into two colour channels which are then viewed through glasses carrying the same colours on opposite eyes, cancelling out the separation and reproducing one image, but in 3D.

Another method proposed at this time to create 3D came from John Anderton, also in 1891. Anderton's system used polarisation techniques to split the image into two separate light paths, and then employed a similar polarisation technique to divert a separate image to each eye on viewing.

One of the main advantages of polarisation over anaglyphs is that no colour information is lost, because both images retain the original colour spectrum. They do, however, lose luminance. A silver screen is commonly necessary, and it serves two purposes: the specially designed screen maintains the separate polarisation required for each image, and it also reflects more light than conventional screens, which compensates for the loss of luminance.

During 1896 and 1897 2D motion pictures started to take off, and by 1910, after a lot of initial experimenting, the creative conventions of film that we recognise today, such as cuts and framing, had started to become evident.

In 1920 Jenkins, an inventor who worked hard to create a method of recreating stereoscopic motion pictures, was quoted as saying: 'Stereoscopic motion pictures have been the subject of considerable thought and have been attained in several ways…but never yet have they been accomplished in a practical way. By practical, I mean, for example without some device to wear over the eyes of the observer.' (Zone, 2007)

It is worth noting that this problem of finding a 'practical' method of viewing 3D has still, to a large extent, not been solved.

Chapter 4: Early 3D Feature Films

(1922 – 1950)

4.1 The first 3D feature film

The first 3D feature film, The Power of Love, was released in 1922 and exhibited at the Ambassador Hotel Theatre in Los Angeles. 'Popular Mechanics magazine described how the characters in the film "did not appear flat on the screen, but seemed to be moving about in locations which had depth exactly like the real spots where the pictures were taken"' (Zone, 2007).

The Power of Love was exhibited using red/green glasses and a dual-strip anaglyph method of 3D projection. (Anaglyphs are explained in chapter 8.3.)

The film was shot on a custom-made camera invented by Harry K. Fairall, who was also the director of the film. 'The camera incorporated two films in one camera body' (Symmes, 2006).

The Power of Love was thus the first film to be viewed using anaglyph glasses, and also the first to use dual-strip projection.

Also in 1922, William Van Doren Kelley designed his own camera rig, based on the Prizma colour system he had invented in 1913. The Prizma 3D colour method worked by capturing two different colour channels through filters placed over the lenses. In this way he made his own version of the red/blue anaglyphic print. Kelley's 'Movies of the Future' was shown at the Rivoli Theatre in New York City.

4.2 The first active-shutter 3D film

A year later, in 1923, the first alternate-frame 3D projection system was unveiled. It used a technology called 'Teleview', which blocked the left and right eyes periodically in sync with the projector, thereby allowing you to see two separate images.

Teleview was not an original idea, but up to this point no one had been able to get the theory to work in a practical way that would allow films to be viewed in a cinema. This is where Laurens Hammond comes in.

Hammond designed a system where two standard projectors would each be hooked up to its own AC generator running at 60 Hz; adjusting the AC frequency would increase or decrease the speed of the projectors.

‘The left film was in the left projector and right film in the right. The projectors were in frame sync, but the shutters were out of phase sync.'(Symmes, 2006) This meant that the left image was shown, then the right image.

The viewing device was attached to the seats in the theatre. ‘It was mounted on a flexible neck, similar to some adjustable “gooseneck” desk lamps. You twisted it around and centred it in front of your face, kind of like a mask floating just in front of your face.’ (Symmes, 2006)

The viewing device consisted of a circular mask with a view piece for each eye plus a small motor that moved a shutter across in front of either the left or right eye piece depending on the cycle of current running through it. All of the viewing devices were powered by the same AC generator as the projectors meaning that they were all exactly in sync.

One of the major problems Hammond had to overcome was that, at the time, film was displayed at 16 frames per second. With this method of viewing, the frame rate is effectively halved, and 8 frames per second resulted in a very noticeable flicker.

To overcome this, Hammond cut each frame up into three flashes, so the new 'sequence was: 1L-1R-1L-1R-1L-1R 2L-2R-2L-2R-2L-2R and so on. Three alternate flashes per eye on the screen.' (Symmes, 2006)

This method of separating and duplicating frames effectively increased the overall flash rate, thereby eradicating the flicker: at 16 frames per second with three flashes per frame, each eye now saw 48 flashes per second instead of 8.

Only one film was produced using this method; it was called M.A.R.S and was displayed at the Selwyn Theatre in New York City in December 1922. The reason the technology didn't catch on was not the image, as the theory for producing the image has changed very little from the Teleview method to the current active-shutter methods, which will be explained later.

As with a lot of 3D methods, the reason this one did not become mainstream was the viewing apparatus that was needed. Although existing projectors could be modified by linking them up to a separate AC generator, meaning no extra projection equipment was needed, the required headsets demanded a lot of investment and time to install. All of the seats in the theatre needed to be fitted with headsets, which were adjusted in front of the audience members; these also had to be wired into the seats and linked to the same AC generator so that they were perfectly in sync.

These problems have since been overcome with wireless technologies such as Bluetooth, as will be explained later.

4.3 The first polarised 3D film

The next, and arguably one of the most important, advancements in 3D technology came in 1929, when Edwin H. Land worked out a way of using polarised lenses (Polaroid) together with images to create stereo vision. (Find more on polarisation in chapter 8.6.)

'Land's polarizing material was first used for projection of still stereoscopic images at the behest of Clarence Kennedy, an art history instructor at Smith College who wanted to project photo images of sculptures in stereo to his students'. (Zone, 2007)

In 1936 Beggar’s Wedding was released in Italy, it was the first stereoscopic feature to include sound, it was exhibited using Polaroid filters. This was filmed using polarised technology.

The first American film to use polarising filters was shot in 1939 and entitled In Tune With Tomorrow, a 15-minute short film which shows, 'through stop motion, a car being built piece-by-piece in 3D with the added enhancement of music and sound effects' (Internet Movie Database, 2005).

Between 1939 and 1952 3D films continued to be made, but with the Great Depression and the onset of the Second World War the cinema industry's output was restricted by finances, and as 3D films were more expensive to make, their production was reduced.

Chapter 5: ‘Golden Age’ of 3D

(1952 – 1955)

'With cinema ticket sales plummeting from 90 million in 1948 to 40 million in 1951' (Sung, 2009), largely put down to the television becoming common in people's front rooms, the cinema industry needed to find a way to encourage viewers back to the big screen, and 3D was seen as a way to offer something extra to make viewers return.

In 1952 the first colour 3D film, Bwana Devil, was released; it was the first of many stereoscopic films to follow in the next few years. The combination of 3D and colour attracted a new audience to 3D films.

Between 1950 and 1955 there were far more 3D films produced than at any other time before or since, apart possibly from the period from 2009 onwards, as the cinema industry tries to fight back once again against falling figures, this time because of home entertainment systems, video-on-demand, and legal and illegal movie downloads.

Towards the end of the 'Golden Age', around 1955, the fascination with 3D was starting to fade. There were a number of reasons for this; one of the main factors was that in order for a film to be seen in 3D it had to be shown on two reels at the same time, and the two reels had to be exactly in sync, or else the effect would be lost and the audience would get headaches.

Chapter 6: Occasional 3D films

(1960 – 2000)

Between 1960 and 2000 there were sporadic resurgences of 3D, driven by new technologies becoming available.

In the late 1960’s the invention of a single strip 3D format initiated a revival as it meant that the dual projectors would no longer go out of sync and cause eye-strain. The first version of this single strip 3D format to be used was called Space-Vision 3D, it worked on an ‘over and under’ basis. This meant that the frame was horizontally split into two, during playback it was then separate in two using a prism and polarised glasses.

However, there were major drawbacks with Space-Vision 3D. Due to the design of the cameras required to film in this format, the only major lens that was compatible was the Bernier lens. 'The focal length of the Bernier optic is fixed at 35mm and the interaxial at 65mm. Neither may be varied, but convergence may be altered' (Lipton, 1982). This obviously restricted the creative filmmaking options, and as a result the format was soon superseded by a new one called Stereovision.

Stereovision was similar to Space-Vision 3D in that it split the frame in two; unlike Space-Vision, though, the frame was split vertically, with the two halves placed side by side. During projection these frames were put through an anamorphic lens, stretching them back to their original size. This format also made use of the polarising method introduced by Land in 1929.

A film made using this process, The Stewardess, was released in 1969; it cost only $100,000 to make but grossed $26,000,000 at the cinema (Lipton, 1982). Understandably the studios were very interested in the profit margin of this film, and as a result 3D once again became an interesting prospect for them.

Until fairly recently films were still shot and edited using old film techniques (i.e. not digitally). This made manipulating 3D films quite difficult, and this lack of control over the full process made 3D less appealing to filmmakers.

‘The digitisation of post-processing and visual effects gave us another surge in the 1990’s. But only full digitisation, from glass to glass – from the camera’s to projector lenses – gives 3D the technological biotope it needs to thrive’ (Mendiburu, 2009).

Chapter 7: The Second ‘Golden Age’

of 3D (2004 – present)

In 2003 James Cameron released Ghosts of the Abyss, the first full-length 3D feature film to use the Reality Camera System, which was specially designed around new high definition digital cameras. These digital cameras meant that the old techniques used with 3D film no longer restricted the workflow, and the whole process could be done digitally, from start to finish.

The next groundbreaking film was Robert Zemeckis's 2004 animated film The Polar Express, which was also shown in IMAX theatres. It was released simultaneously in 2D and 3D, and the 3D cinemas took on average 14 times more money than the 2D cinemas.

The cinemas once again took note, and since The Polar Express was released in 2004, 3D digital films have become more and more prominent.

IMAX theatres are no longer the only cinemas capable of displaying digital 3D films. A large proportion of conventional cinemas have made the switch to digital, and this switch has enabled 3D films to be exhibited in a wide range of cinemas.

CHAPTER 8: 3D TECHNOLOGIES

8.1 – 3D capture and display methods

Each type of stereoscopic display projects the combined left and right images together onto a flat surface, usually a television or cinema screen. The viewer must then have a method of decoding this image, separating it into left and right images and relaying these to the correct eye. The method used to split the image is, in the majority of cases, a pair of glasses.

There are two broad categories of encoding method: passive and active. Passive means that the images are combined into one, and the glasses then split this image into two separate images for the left and right eyes; with this method the glasses are cheaper to produce and the expense usually lies in the equipment used to project the image. The second method is active display. This works by sending the alternate images in very quick succession (L-R-L-R-L-R); the glasses then periodically block the appropriate eyepiece, at such a fast rate that the picture appears continuous in both eyes.

There are various different types of encoding encapsulated within each of the two methods mentioned above.

The encoding can use colour separation (anaglyph, Dolby 3D), time separation (active glasses) or polarisation (RealD). A separate method, which does not require glasses and instead creates a virtual space in front of the screen, is called autostereoscopy.

In cinemas across the world at the moment there are several formats used to display 3D films. Three of the main distributors are RealD, IMAX and Dolby 3D.

Once a 3D film has been finished by the studio, it needs to be prepared for exhibition in various different formats; this can include, amongst other things, colour grading and anti-ghosting processes.

At present there is not a universally agreed format for capturing or playing back 3D films, as a result there are several different versions, these are explained below.

A large majority of the latest wave of 3D technology options send the image using one projector, removing the old problem of out-of-sync left and right images. The methods that do use dual projectors are much more sophisticated than the older versions used in anaglyphic films, so the old problems of out-of-sync projectors have been eradicated.

Strategies for Welding Aluminium

CHAPTER 1: INTRODUCTION

1.1 INTRODUCTION OF THE FSW TECHNIQUE

In today’s modern world there are many different welding techniques to join metals. They range from the conventional oxyacetylene torch welding to laser welding. The two general categories in which all the types of welding can be divided is fusion welding and solid state welding.

The fusion welding process involves chemical bonding of the metal in the molten state and may need a filler material, such as a consumable electrode or a spool of wire of the filler metal. The process may also need an inert atmosphere to avoid oxidation of the molten metal, which can be provided by a flux material or an inert gas shield in the weld zone, and adequate surface preparation may be required. Examples of fusion welding are metal inert gas (MIG) welding, tungsten inert gas (TIG) welding and laser welding. There are many disadvantages to welding techniques in which the metal is heated to its melting temperature and allowed to solidify to form the joint. The melting and solidification cause the mechanical properties of the weld to deteriorate, giving low tensile strength, fatigue strength and ductility. The disadvantages also include porosity, oxidation, microsegregation, hot cracking and other microstructural defects in the joint. The process also limits the combinations of metals that can be joined, because different metals have different thermal conductivity and expansion coefficients.

Solid-state welding is a process in which coalescence is produced at temperatures below the melting temperature of the base metal, without any need for filler material or an inert atmosphere, because the metal never reaches the melting temperature at which oxidation occurs. Examples of solid-state welding are friction welding, explosion welding, forge welding, hot pressure welding and ultrasonic welding. Three important parameters, time, temperature and pressure, individually or in combination produce the joint in the base metal. As the metal in solid-state welding does not reach its melting temperature, there are fewer defects caused by melting and solidification. In solid-state welding the metals being joined retain their original properties, since melting does not occur in the joint, and the heat-affected zone (HAZ), where most of the deterioration in strength and ductility begins, is also very small compared to fusion welding techniques. Dissimilar metals can be joined with ease, as the thermal expansion and conductivity coefficients are less important than in fusion welding.

Friction stir welding (FSW) is an upgraded version of friction welding. Conventional friction welding is done by moving the parts to be joined relative to each other along a common interface while applying compressive forces across the joint. The frictional heat generated at the interface by rubbing softens the metal, the softened metal is extruded by the compressive forces, and the joint forms in the clean material; the relative motion is then stopped and the compressive forces are increased to form a sound weld before the joint is allowed to cool.

Friction stir welding is also a solid-state welding process; this remarkable development of friction welding was invented in 1991 at The Welding Institute (TWI) [4]. The process starts with clamping the plates to be welded to a backing plate so that the plates do not move apart during welding. A rotating wear-resistant tool is plunged into the interface between the plates to a predetermined depth and moves forward along the interface to form the weld. The advantages of the FSW technique are that it is environmentally friendly and energy efficient; there is no need for gas shielding when welding Al; the mechanical properties, as proven by fatigue and tensile tests, are excellent; there is no fume, no porosity, no spatter and low shrinkage of the metal, because welding takes place in the solid state; and it is an excellent way of joining dissimilar and previously unweldable metals.

1.2 ALUMINUM ALLOYS AND WELDING OF ALUMINUM ALLOYS

Aluminum is the most abundant metal in the earth's crust. Steel was the most used metal in the 19th century, but aluminum has become a strong competitor for steel in engineering applications. Aluminum has many attractive properties compared to steel; it is economical and versatile to use, which is why it is used so widely in the aerospace, automobile and other industries. The most attractive properties of aluminum and its alloys, which make them suitable for a wide variety of applications, are their light weight, appearance, fabricability, strength and corrosion resistance. The most important property of aluminum is the very versatile way in which its properties can be changed; it is remarkable how much the properties vary from pure aluminum to its most complex alloys. There are more than a couple of hundred aluminum alloys, and many more are being derived from them internationally. Aluminum alloys have a very low density compared to steel, almost one third the density of steel. Properly treated aluminum alloys can resist the oxidation that steel cannot; they can also resist corrosion by water, salt and other agents.

There are many different methods available for joining aluminum and its alloys. The selection of the method depends on many factors, such as the geometry and material of the parts to be joined, the required strength of the joint, whether the joint is permanent or dismountable, the number of parts to be joined, the aesthetic appeal of the joint, and the service conditions, such as moisture, temperature, inert atmosphere and corrosion.

Welding is one of the most widely used joining methods for aluminum, and most aluminum alloys are easily weldable. MIG and TIG are the most used welding processes, but there are some problems associated with these processes, such as porosity, lack of fusion due to oxide layers, incomplete penetration, cracks, inclusions and undercut; aluminum alloys can also be joined by other methods such as resistance welding, friction welding, stud welding and laser welding. During welding many physical and chemical changes occur, such as oxide formation and the dissolution of hydrogen in molten aluminum, and aluminum shows no color change when heated.

The formation of aluminum oxides is due to aluminum's strong affinity for oxygen; aluminum oxidizes very quickly once exposed to it. Aluminum oxide forms when the metal is joined using fusion welding processes, and since aluminum oxide has a higher melting temperature than the metal and its alloys themselves, its presence results in incomplete fusion. Aluminum oxide is an electrical insulator, and if it is thick enough it is capable of preventing the arc which starts the welding process, so special measures such as inert gas welding or the use of fluxes are necessary if aluminum is to be welded using fusion welding processes.

Hydrogen has a high solubility in liquid aluminum: when the weld pool is at high temperature and the metal is still liquid, the metal absorbs a lot of hydrogen, which has very low solubility in the solid state. The trapped hydrogen cannot escape and forms porosity in the weld. All sources of hydrogen have to be eliminated in order to get sound welds, such as lubricants on the base metal or the filler material, moisture on the surface of the base metal, condensation inside the welding equipment if it uses water cooling, and moisture in the shielding inert gases. These precautions require considerable pretreatment of the workpiece to be welded and of the welding equipment.

Hot cracking is also a problem of major concern when welding aluminum. It occurs due to the high thermal expansion of aluminum, the large change in the volume of the metal upon melting and solidification, and its wide range of solidification temperatures. The heat-treatable alloys have greater amounts of alloying elements, so weld crack sensitivity is a concern for them. The thermal expansion of aluminum is twice that of steel, and in fusion welding processes the melting and cooling occur very fast, which is the reason for residual stress concentrations.

The weldability of some aluminum alloys is an issue with fusion welding processes. The 2000, 5000, 6000 and 7000 series of aluminum alloys have different weldabilities. The 2000 series generally have poor weldability because their copper content causes hot cracking, a poor solidification microstructure and porosity in the fusion zone, so fusion welding processes are not very suitable for these alloys. The 5000 series alloys with more than 3% Mg content are susceptible to cracking due to stress concentration in corrosive environments, so high-Mg 5000 series alloys should not be exposed to corrosive environments at high temperatures, to avoid stress corrosion cracking. All the 6000 series alloys are readily weldable but are sometimes susceptible to hot cracking under certain conditions. The 7000 series alloys are either weldable or non-weldable depending on the chemical composition of the alloy.

Alloys with low Zn-Mg and Cu content are readily weldable, and they have the special ability of recovering the strength lost in the HAZ after some weeks of storage following the weld. Alloys with high Zn-Mg and Cu content have a high tendency to hot crack after welding. All the 7000 series alloys are sensitive to stress concentration cracking.

All these problems associated with the welding of these different aluminum alloys have led to the development of solid-state welding processes like the friction stir welding technique, an upgraded version of friction welding. This process has many advantages associated with it, and it can weld many aluminum alloys, such as the 2000 and 7000 series, which are difficult to weld by fusion welding processes. The advantages of the friction stir welding process are low distortion even in long welds, no fume, no porosity, no spatter, low shrinkage, the ability to operate in all positions, high energy efficiency, and excellent mechanical properties as proven by fatigue, tension and bend tests.

1.3 Conventional Welding Processes of Aluminum

A brief description of the most common processes, their applications to aluminum and their limitations is given below.

1.3.1 Gas Tungsten Arc Welding (GTAW):

In the gas tungsten arc welding process the heat generated by an arc, which is maintained between the workpiece and a non-consumable tungsten electrode, is used to fuse the joint area. The arc is sustained in an inert gas, which serves to protect the weld pool and the electrode from atmospheric contamination, as shown in Figure 2.3.

The process has the following features:

  • It is conducted in a chemically inert atmosphere;
  • The arc energy density is relatively high;
  • The process is very controllable;
  • Joint quality is usually high;
  • Deposition rates and joint completion rates are low.

The process may be applied to the joining of a wide range of engineering materials, including stainless steel, aluminum alloys and reactive metals such as titanium. These features of the process lead to its widespread application in the aerospace, nuclear reprocessing and power generation industries, as well as in the fabrication of chemical process plant, food processing and brewing equipment.

1.3.2 Shielded metal arc welding (SMAW):

Shielded metal arc welding has for many years been one of the most common techniques applied to the fabrication of steels. The process uses an arc as the heat source, but shielding is provided by gases generated by the decomposition of the electrode coating material and by the slag produced by the melting of the mineral constituents of the coating. In addition to heating and melting the parent material, the arc also melts the core of the electrode and thereby provides filler material for the joint. The electrode coating may also be used as a source of alloying elements and additional filler material. The flux and electrode chemistry may be formulated to deposit wear- and corrosion-resistant layers for surface protection, as shown in Figure 2.4.

Significant features of the process are:

  • Equipment requirements are simple;
  • A large range of consumables are available;
  • The process is extremely portable;
  • The operating efficiency is low;
  • It is labor intensive.

For these reasons the process has been traditionally used in structural steel fabrication, shipbuilding and heavy engineering as well as for small batch production and maintenance.

1.3.3 Plasma welding:

Plasma welding uses the heat generated by a constricted arc to fuse the joint area; the arc is formed between the tip of a non-consumable electrode and either the work piece or the constricting nozzle as shown in Figure 2.5. A wide range of shielding and cutting gases is used depending on the mode of operation and the application.

In the normal transferred arc mode the arc is maintained between the electrode and the work piece; the electrode is usually the cathode and the work piece is connected to the positive side of the power supply. In this mode a high energy density is achieved and the process may be used effectively for welding and cutting.

The features of the process depend on the operating mode and the current, but in summary the plasma process has the following characteristics:

  • Good low-current arc stability
  • Improved directionality compared with GTAW
  • Improved melting efficiency compared with GTAW
  • Possibility of keyhole welding

In the keyhole technique, the heat concentration is high enough to penetrate completely through the joint.

These features of the process make it suitable for a range of applications including the joining of very thin materials, the encapsulation of electronic components and sensors, and high- speed longitudinal welds on strip and pipe.

1.3.4 Laser welding:

The laser may be used as an alternative heat source for fusion welding. The focused power density of the laser can reach 10^10 to 10^12 W/m^2, and welding is often carried out using the ‘keyhole’ technique.

Significant features of laser welding are:

  • Very confined heat source at low power
  • Deep penetration at high power
  • Reduced distortion and thermal damage
  • Out-of-vacuum technique
  • High equipment cost

These features have led to the application of lasers for micro-joining of electronic components, but the process is also being applied to the fabrication of automotive components and precision machine tool parts in heavy-section steel.

1.4 Weld Defects using Conventional Processes

Because of a history of thermal cycling and attendant microstructural changes, a welded joint may develop certain discontinuities. Welding discontinuities can also be caused by inadequate or careless application of established welding technologies or by substandard operator training. The major discontinuities that affect weld quality are described below.

1.4.1 Porosity:

Porosity in welds is caused by gases released during melting of the weld area and trapped during solidification, by chemical reactions during welding, or by contaminants. Most welded joints contain some porosity, which is generally spherical in shape or in the form of elongated pockets. The distribution of porosity in the weld zone may be random, or it may be concentrated in a certain region. Porosity in welds can be reduced by the following methods:

  • Proper selection of electrodes and filler metals.
  • Improving welding techniques, such as preheating the weld area or increasing the rate of heat input.
  • Proper cleaning and preventing contaminants from entering the weld zone.
  • Slowing the welding speed to allow time for gas to escape [8].

1.4.2 Slag inclusions:

Slag inclusions are compounds such as oxides, fluxes, and electrode-coating materials that are trapped in the weld zone. If shielding gases are not effective during welding, contamination from the environment may also contribute to such inclusions. Welding conditions are important, and with proper techniques the molten slag will float to the surface of the molten weld metal and not be entrapped. Slag inclusions may be prevented by:

  • Cleaning the weld-bead surface before the next layer is deposited by using a hand or power wire brush.
  • Providing adequate shielding gas.
  • Redesigning the joint to permit sufficient space for proper manipulation of the puddle of molten weld metal.

1.4.3 Incomplete fusion and penetration:

Incomplete fusion produces poor weld beads; a better weld can be obtained by:

  • Raising the temperature of the base metal.
  • Cleaning the weld area prior to welding.
  • Changing the joint design and type of electrode.
  • Providing adequate shielding gas.

Incomplete penetration occurs when the depth of the welded joint is insufficient. Penetration can be improved by:

  • Increasing the heat input.
  • Lowering travel speed during welding.
  • Changing the joint design.
  • Ensuring that surfaces to be joined fit properly [8].

1.4.4 Weld profile:

Weld profile is important not only because of its effects on the strength and appearance of the weld, but also because it can indicate incomplete fusion or the presence of slag inclusions in multiple-layer welds. Underfilling results when the joint is not filled with the proper amount of weld metal (Figure 2.7). Undercutting results from melting away the base metal and subsequently generating a groove in the shape of a recess or notch. If it is deep or sharp, an undercut can act as a stress raiser, reduce the fatigue strength of the joint, and may lead to premature failure. Overlap is a surface discontinuity generally caused by poor welding practice and selection of the wrong materials. A proper weld is shown in Figure 2.7c [5].

1.4.5 Cracks:

Cracks may occur in various locations and directions in the weld area. The types of cracks are typically longitudinal, transverse, crater, and toe cracks (Figure 2.8). These cracks generally result from a combination of the following factors:

  • Temperature gradients that cause thermal stresses in the weld zone.
  • Variations in the composition of the weld zone that cause different contractions.
  • Embrittlement of grain boundaries by segregation of elements, such as sulfur, to the grain boundaries as the solid-liquid boundary moves when the weld metal begins to solidify.
  • Hydrogen embrittlement.
  • Inability of the weld metal to contract during cooling, a situation similar to the hot tears that develop in castings and related to excessive restraint of the workpiece.

Figure 2.8: (a) Crater cracks. (b) Various types of cracks in butt and T joints [8].

Cracks are classified as hot or cold cracks. Hot cracks occur while the joint is still at elevated temperatures. Cold cracks develop after the weld metal has solidified. Some crack prevention measures are:

  1. Change the joint design to minimize stresses from shrinkage during cooling.
  2. Change welding-process parameters, procedures, and sequence.
  3. Preheat components being welded.
  4. Avoid rapid cooling of the components after welding [8].

1.4.6 Lamellar tears:

In describing the anisotropy of plastically deformed metals, we stated that because of the alignment of nonmetallic impurities and inclusions (stringers), the workpiece is weaker when tested in its thickness direction. This condition is particularly evident in rolled plates and structural shapes. In welding such components, lamellar tears may develop because of shrinkage of the restrained members in the structure during cooling. Such tears can be avoided by providing for shrinkage of the members or by changing the joint design so that the weld bead penetrates the weaker member more deeply [8].

1.4.7 Surface damage:

During welding, some of the metal may spatter and be deposited as small droplets on adjacent surfaces. In arc welding processes, the electrode may inadvertently contact the parts being welded at places outside the weld zone (arc strikes). Such surface discontinuities may be objectionable for reasons of appearance or subsequent use of the welded part. If severe, these discontinuities may adversely affect the properties of the welded structure, particularly for notch-sensitive metals. Using proper welding techniques and procedures is important in avoiding surface damage [8].

1.5 Skill and Training requirements:

Many of the traditional welding processes require high levels of operator skill and dexterity, which can involve costly training programs, particularly when the procedural requirements described above need to be met. The newer processes can offer some reduction in the overall skill requirement, but this has unfortunately been offset in some cases by more complex equipment, and the time involved in establishing the process parameters has brought about a reduction in operating factor. Developments that seek to simplify the operation of the equipment will be described below, but effective use of even the most advanced processes and equipment requires appropriate levels of operator and support staff training. The cost of this training will usually be recovered very quickly in improved productivity and quality.

1.6 Areas for development:

Advances in welding processes may be justified in terms of:

  • Increased deposition rate;
  • Reduced cycle time;
  • Improved process control;
  • Reduced repair rate;
  • Reduced weld size;
  • Reduced joint preparation time;
  • Improved operating factor;
  • Reduction in post-weld operations;
  • Reduction in potential safety hazards;
  • Removal of the operator from hazardous area;
  • Simplified equipment setting.

Some or all of these requirements have been met in many of the process developments which have occurred in the past ten years; these will be described in detail in the following chapters, but the current trends in the development of this technology are examined below.

1.7 New processes:

The primary incentive for welding process development is the need to improve the total cost-effectiveness of joining operations, and this is reflected in the requirement for new processes. Recently, concern over the safety of the welding environment and the potential shortage of skilled technicians and operators in many countries have become important considerations.

Many of the traditional welding techniques described in this Chapter are regarded as costly and hazardous and it is possible to improve both of these aspects significantly by employing some of the advanced process developments described in the following chapters.

The use of new joining techniques such as Friction Stir Welding appears to be increasing since it does not involve melting. The application of these processes has in the past been restricted, but with the increased recognition of the benefits of automation and the requirement for high-integrity joints in newer materials it is envisaged that the use of these techniques will grow.

FSW is a new process originally intended for welding aerospace alloys, especially aluminum extrusions. Whereas in conventional friction welding, heating of the interfaces is achieved through friction by rubbing two surfaces together, in the FSW process a third body is rubbed against the two surfaces to be joined, in the form of a small rotating non-consumable tool that is plunged into the joint. The contact pressure causes frictional heating. The probe at the tip of the rotating tool forces heating and mixing, or stirring, of the material in the joint.

1.8 Research objectives:

The objectives of our project are to:

  • Adapt FSW to a milling machine
  • Design the FSW tools, select their material and have them manufactured
  • Design the required clamping system
  • Apply FSW to plates of an alloy that is not readily weldable by conventional methods
  • Investigate FSW parameters (RPM, Feed Rate and Axial force)
  • Analyze conventionally welded and Friction Stir welded sections then compare their properties.

The objective of this research is to characterize the mechanical properties of friction stir welded joints and to study the microstructure of the base metal and of the weld nugget evolved during the friction stir welding of similar and dissimilar alloys of aluminum.

Aluminum 2024 and 7075 are considered for this investigation. The mechanical properties such as ultimate tensile strength, yield strength, formability, ductility and Vickers hardness are measured, and an effort is made to find a relation between the process variables and the properties of the weld. The optimal process parameters for the friction stir welding of AA2024 and AA7075 will be defined based on the experimental results.

Having understood the significance of FSP, the main objective of this thesis is to investigate the effect of process parameters like rotational and translational speeds on the forces generated during FSP of aluminum alloys and relate these forces with the microstructure evolved in order to optimize the process.

The specific objectives of the work presented are:

  • Design and conduct FS processing experiments on aluminum alloy for different combinations of rotational and translational speeds.
  • Measure the processing forces generated during FSP of aluminum alloys.
  • Examine the microstructure of the processed sheets using transmission electron microscopy (TEM).
  • Attempt to establish a correlation between the measured forces and the resulting microstructure.

Chapter 2 Review of Literature

2.1 General Idea of the Friction Stir Technology

This section gives an insight into the innovative technology called friction stir technology.

The action of rubbing two objects together to generate frictional heat is one dating back many centuries, as stated by Thomas et al. [1]. The principles of this method now form the basis of many traditional and novel friction welding, surfacing and processing techniques. The friction process is an efficient and controllable method of plasticizing a specific area on a material, and thus removing contaminants in preparation for welding, surfacing/cladding or extrusion. The process is environmentally friendly, as it requires no consumables (filler wire, flux or gas) and produces no fumes. In friction welding, heat is produced by rubbing components together under load. Once the required temperature and material deformation are reached, the action is terminated and the load is maintained or increased to create a solid-phase bond. Friction welding is ideal for joining dissimilar metals with very different melting temperatures and physical properties. Some of the friction stir technologies are shown in Figure 2.1.

Work carried out at TWI by Thomas et al. [2,3] has demonstrated that several alternative techniques exist or are being developed to meet the requirement for consistent and reliable joining of mass-production aluminum alloy vehicle bodies. Three of these techniques (mechanical fasteners, lasers and friction stir welding) are likely to make an impact on industrial processing over the next 5 years. FSW could be applied to the manufacture of straight-line welds in sheet and extrusions as a low-cost alternative to arc welding (e.g. in the fabrication of truck floors or walls). The development of robotized friction stir welding heads could extend the range of applications to three-dimensional components.

Mishra et al. [4] extended the FSW innovation to process Al 7075 and Al 5083 in order to render them superplastic. They observed that the grains obtained were recrystallized, equiaxed and homogeneous, with average grain sizes <5 µm and high misorientation angles ranging from 20° to 60°. They also performed high-temperature tensile testing in order to understand the superplastic behavior of FSP aluminum sheets.

Metal matrix composites reinforced with ceramics exhibit high strength, high elastic modulus and improved resistance to wear, creep and fatigue compared to unreinforced metals. Mishra et al. [5] demonstrated experimentally that surface composites can be fabricated by friction stir processing. Al–SiC surface composites with different volume fractions of particles were successfully fabricated. The thickness of the surface composite layer ranged from 50 to 200 µm. The SiC particles were uniformly distributed in the aluminum matrix, and the surface composites showed excellent bonding with the aluminum alloy substrate. The microhardness of the surface composite reinforced with 27 vol.% SiC of 0.7 µm average particle size was ~173 HV, almost double that of the 5083 Al alloy substrate (85 HV). The solid-state processing and the very fine microstructure that results are also desirable for high-performance surface composites.

Thomas et al. [6] presented a review of friction technologies for stainless steel, aluminum, and stainless steel to aluminum, which are receiving widespread interest. Friction hydro-pillar processing, friction stir welding (FSW) and friction plunge welding are some of these unique techniques. They observed that this technology made feasible the welding of otherwise unweldable aluminum alloys and of stainless steel. Using this technology, sections up to 75 mm thick can also be easily welded.

2.2 Process parameters and properties during FSW

In order to optimize any process, it is essential to understand the effect of the process parameters on the properties of the processed material. Hence this section gives an overview of such investigations in the field of the friction stir welding process.

The effects of tool geometry and of process parameters such as rotational and translational speeds are very important factors to be considered for controlling the friction stir welding process. Reynolds et al. [20] made an attempt to study these effects on the properties of welds by investigating the x-axis force and power. The highest energy per unit weld length was observed in Al 6061 welds. It was also observed that the required x-axis force increased and the weld energy decreased with increasing welding speed for all the Al alloys except the Al 6061 alloy, because of its relatively high thermal conductivity.

Kwon et al. [21] studied FS-processed Al 1050 alloy. The hardness and tensile strength of the FS-processed 1050 aluminum alloy were observed to increase significantly with decreased tool rotation speed. It was noted that, at 560 rpm, these characteristics increased as a result of grain refinement by up to 37% and 46%, respectively, compared to the starting material.

In order to demonstrate the FSW of the 2017-T351 aluminum alloy and determine optimum welding parameters, the relations between welding parameters and tensile properties of the joints have been studied by Liu et al. [22]. The experimental results showed that the tensile properties and fracture locations of the joints are significantly affected by the welding process parameters. At the optimum revolutionary pitch of 0.07 mm/rev, corresponding to a rotation speed of 1500 rpm and a welding speed of 100 mm/min, the maximum ultimate strength of the joints is equivalent to 82% of that of the base material. The void-free joints fractured near or at the interface between the weld nugget and the thermo-mechanically affected zone (TMAZ) on the advancing side.
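For clarity, the revolutionary pitch quoted above is simply the welding (travel) speed divided by the tool rotation speed, so the reported parameters are mutually consistent:

$$\text{pitch} = \frac{v_{\text{weld}}}{N} = \frac{100\ \text{mm/min}}{1500\ \text{rev/min}} \approx 0.067\ \text{mm/rev} \approx 0.07\ \text{mm/rev}$$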

Development of VLSI Technology

CHAPTER 1

1. INTRODUCTION

VLSI Technology was an important pioneer in the electronic design automation industry. The “lambda-based” design style advocated by Carver Mead and Lynn Conway offered a refined package of tools, and VLSI became an early vendor of standard-cell (cell-based) technology. Rapid advancement in VLSI technology has led to a new paradigm in designing integrated circuits, where a system-on-a-chip (SOC) is constructed from predesigned and pre-verified cores such as CPUs, digital signal processors, and RAMs. Testing these cores requires a large amount of test data, which is continuously increasing with the rapid increase in the complexity of SOCs. Test compression and compaction techniques are widely used to reduce storage requirements and test time by reducing the size of the test data.

Very-large-scale integration (VLSI) is the design and manufacture of extremely small, complex circuitry in modified semiconductor material.

In 1958, Jack St. Clair Kilby (Texas Instruments) developed the first integrated circuit, with 10 components on 9 mm². In 1959, Robert Norton Noyce (founder of Fairchild Semiconductor) improved on the integrated circuit developed by Kilby. In 1968, Noyce and Gordon E. Moore founded Intel, and in 1971 Ted Hoff (Intel) developed the first microprocessor, the 4004, consisting of 2,300 transistors on 9 mm². Since then, continuous improvement in technology has allowed for increased performance, as predicted by Moore’s law.

The rate of development of VLSI systems has historically progressed hand-in-hand with technology innovations, and many conventional VLSI systems have as a result engendered highly specialized technologies for their support. Most of the achievements in dense system integration have derived from scaling in the silicon VLSI process. As manufacturing has improved, it has become more cost-effective in many applications to replace a chip set with a monolithic IC: package costs are decreased, interconnect paths shrink, and power loss in I/O drivers is reduced. As an example, consider integrated circuit technology: the Semiconductor Industry Association predicts that, over the next 15 years, circuit technology will advance from the current four metallization layers up to seven layers. As a result, the phase of circuit testing in the design process is emerging as a major problem in VLSI design. In fact, Kenneth M. Thompson, vice president and general manager of the Technology, Manufacturing, and Engineering Group for Intel Corporation, states that a major falsehood of testing is that “we have made a lot of progress in testing”; in reality, it is very difficult for testing to keep pace with semiconductor manufacturing technology.

Today’s circuits are expected to perform a very broad range of functions while also meeting very high standards of performance, quality, and reliability, and at the same time remaining practical in terms of time and cost.

1.1 Analog & Digital Electronics

In science, technology, business, and, in fact, most other fields of endeavor, we are constantly dealing with quantities. In most physical systems, quantities are measured, monitored, recorded, manipulated arithmetically, and observed. We should be able to represent their values efficiently and accurately when we deal with various quantities. There are basically two ways of representing the numerical value of quantities: analog and digital.

1.2 Analog Electronics

Analogue/analog electronics are those electronic systems with a continuously variable signal. In contrast, digital electronics signals usually take only two different levels. In analog representation, a quantity is represented by a voltage, current, or meter movement that is proportional to the value of that quantity. Analog quantities such as those cited above have an important characteristic: they can vary over a continuous range of values.

1.3 Digital Electronics

In digital representation the quantities are represented not by proportional quantities but by symbols called digits. As an example, consider the digital watch, which provides the time of day in the form of decimal digits which represent hours and minutes (and sometimes seconds). As we know, the time of day changes continuously, but the digital watch reading does not change continuously; rather, it changes in steps of one per minute (or per second). In other words, this digital representation of the time of day changes in discrete steps, as compared with the representation of time provided by an analog watch, where the dial reading changes continuously.

Digital electronics is said to deal with “1s and 0s,” but that is a vast oversimplification of the ins and outs of going digital. Digital electronics operates on the premise that all signals have two distinct levels; depending on the type of devices used, these levels might be certain voltages near the power supply level and ground. The logical meaning should not be confused with the physical signal, because the meaning of a signal level depends on the design of the circuit. Here are some common terms used in digital electronics:

  • Logical-refers to a signal or device in terms of its meaning, such as “TRUE” or “FALSE”
  • Physical-refers to a signal in terms of voltage or current or a device’s physical characteristics
  • HIGH-the signal level with the greater voltage
  • LOW-the signal level with the lower voltage
  • TRUE or 1-the signal level that results from logic conditions being met
  • FALSE or 0-the signal level that results from logic conditions not being met
  • Active High-a HIGH signal indicates that a logical condition is occurring
  • Active Low-a LOW signal indicates that a logical condition is occurring
  • Truth Table-a table showing the logical operation of a device’s outputs based on the device’s inputs, such as the following table for an OR gate
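For reference, the OR gate’s truth table (the output is 1 whenever at least one input is 1):

  A B | Output
  0 0 |   0
  0 1 |   1
  1 0 |   1
  1 1 |   1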

1.4 Number Systems

Digital logic may work with “1s and 0s”, but it combines them into several different groupings that form different number systems. Most of us are familiar with the decimal system, of course; that is a base-10 system in which each digit represents a power of ten. There are several other number system representations, illustrated in the short sketch after this list:

  • Binary-base two (each bit represents a power of two); digits are 0 and 1, and numbers are denoted with a ‘B’ or ‘b’ at the end, such as 01001101B (77 in the decimal system).
  • Hexadecimal or ‘hex’-base 16 (each digit represents a power of 16); digits are 0 through 9 plus A-B-C-D-E-F representing 10-15, and numbers are denoted with ‘0x’ at the beginning or ‘h’ at the end, such as 0x5A or 5Ah (90 in the decimal system); each hex digit requires four binary bits. A dollar sign preceding the number ($01BE) is sometimes used as well.
  • Binary-coded decimal or BCD-a four-bit encoding similar to hexadecimal, except that the decimal value of each digit is limited to 0-9.
  • Decimal-the usual number system. Decimal numbers are usually denoted by ‘d’ at the end, like 24d, especially when they are combined with other numbering systems.
  • Octal-base eight (each digit represents a power of 8); digits are 0-7, and each requires three bits. It is rarely used in modern designs.
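As a quick illustration of these notations, here is a minimal Python sketch using the language’s built-in literals and conversion functions; the values match the examples in the list above:

    # Number-system notations from the list above, written as Python literals.
    n_bin = 0b01001101        # binary 01001101B -> 77 in decimal
    n_hex = 0x5A              # hexadecimal 0x5A (5Ah) -> 90 in decimal
    n_oct = 0o132             # octal 132 -> 90 in decimal

    print(n_bin)              # 77
    print(n_hex, n_oct)       # 90 90

    # Converting a decimal value back into each notation as a string:
    print(bin(77))            # 0b1001101
    print(hex(90))            # 0x5a
    print(oct(90))            # 0o132

    # BCD packs each decimal digit into its own 4-bit group, e.g. 90 -> 1001 0000.
    def to_bcd(n):
        return " ".join(format(int(digit), "04b") for digit in str(n))

    print(to_bcd(90))         # 1001 0000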

1.5 Digital Construction Techniques

Building digital circuits is somewhat easier than building analog circuits: there are fewer components, the devices tend to come in similarly sized packages, and connections are less susceptible to noise. The trade-off is that there can be many connections, so it is easy to make a mistake and harder to find one, and the uniform packages offer few visual clues.

1.5.1 Prototyping Boards

Prototyping means putting together temporary circuits, as in the exercises, using a common workbench accessory known as a prototyping board. A typical board is shown in Figure 1, with a DIP-packaged IC plugged in across the center gap. The board contains rows of sockets that are connected together, so that component leads can be plugged in and connected without soldering. The long rows of sockets along the outer edges of the board are also connected together, so they can be used for the ground and power supply connections common to most components.

The wiring layout on the prototyping board should be assembled systematically, following the schematic diagram shown.

1.5.2 Reading Pin Connections

IC pins are almost always arranged so that pin 1 is in a corner or indicated by an identifying mark on the IC body, with the sequence increasing counter-clockwise looking down on the IC or “chip,” as shown in Figure 1. In almost all DIP packages, the identifying mark is a dot in the corner marking pin 1. Both kinds of marks can be seen in the diagram, but on any given IC only one is expected to be used.

1.5.3 Powering Digital Logic

Where analog electronics is usually somewhat flexible in its power requirements and tolerant of variations in power supply voltage, digital logic is not nearly so carefree. Whatever logic family you choose, you will need to regulate the power supply voltages to at least ±5 percent, with adequate filter capacitors to filter out sharp sags or spikes.

Logic devices rely on stable power supply voltages to provide references to the internal electronics that sense the low or high voltages and act on them as logic signals. If the device’s ground voltage is pulled away from 0 volts, the device can become confused and misinterpret its inputs, causing temporary changes in the signals popularly known as glitches. It is better to ensure that the power supply is very clean, as the resulting problems can be very difficult to troubleshoot. A good technique is to connect a 10–100 µF electrolytic or tantalum capacitor and a 0.1 µF ceramic capacitor in parallel across the power supply connections on your prototyping board.

CHAPTER 2

2. REVIEW AND HISTORICAL ANALYSIS OF ITERATIVE CIRCUITS

As background research, recent work on iterative circuits was investigated. In this section, seven main proposals from the literature will be reviewed. The first is by Douglas Lewin, published in Logic Design of Switching Circuits (1974, pp. 76, 277). In this book he states that quite often in combinational logic design, the technique of expressing verbal statements for a logic circuit in the form of a truth table is inadequate. For a simple network, a terminal description will often suffice, but for more complex circuits, and in particular when relay logic is to be employed, the truth table method can lead to a laborious and inelegant solution.

2.1 Example:

If a logic system can be decomposed into a number of identical sub-systems, then if we can produce a design for the sub-system, or cell, the complete system can be synthesized by cascading these cells in series. The outputs of one cell form the inputs to the next one in the chain, and so on; each cell is identical except for the first (and frequently the last), whose cell inputs must be deduced from the initial conditions. Each cell has external inputs as well as inputs from the preceding cell, which are distinguished by defining the outputs of a cell as its state.

Figure 2.1 – Iterative Switching Systems

The second proposal to be reviewed was presented by Frederick J. Hill and Gerald R. Peterson in Introduction to Switching Theory and Logic Design (1981, p. 570). In this book they discuss the iterative network, a highly repetitive form of combinational logic network. The repetitive structure makes it possible to describe iterative networks using techniques already developed for sequential circuits. The authors limit their discussion to one-dimensional iterative networks represented by a cascade of identical cells, as given in the figure below; a typical cell with appropriate input and output notation is given in figure (b). Note the two distinct types of inputs: primary inputs from the outside world and secondary inputs from the previous cell in the cascade. Similarly, there are two types of outputs: primary outputs to the outside world and secondary outputs to the next cell in the cascade. The boundary inputs at the left of the cascade are denoted in the same manner as secondary inputs; in some cases these inputs will be constant values.

A set of boundary outputs emerges from the rightmost cell in the cascade. Although these outputs go to the outside world, they are labelled in the same manner as secondary outputs. The boundary outputs may be the only outputs of the iterative network.

The third proposal is by Barry Wilkinson with Rafic Makki, published in Digital Design Principles (1992, pp. 72-74). In this book they discuss the design and problems of iterative circuits, stating that some design problems would require a large number of gates if designed as two-level circuits. One approach is to divide the function into a number of identical sub-functions which are performed in sequence, with the result of one sub-function used in the next. A design based on the iterative approach is shown in the figure below and sketched in code after this paragraph. There are seven logic circuit cells; each cell accepts one code-word digit and the output from the preceding cell. Each cell produces one output, Z, which is 1 whenever the number of 1s on its two inputs is odd. Hence successive outputs are 1 when the number of 1s up to that point is odd, and the final output is 1 only when the number of 1s in the whole code word is odd, as required.
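As an illustration only (a minimal Python sketch, not taken from the book; the function names are mine), the seven-cell parity cascade just described can be modeled as follows:

    # One cell of the iterative parity circuit: Z is 1 when the number of
    # 1s on the cell's two inputs (code digit and previous Z) is odd.
    def parity_cell(code_bit, z_in):
        return code_bit ^ z_in

    def odd_parity(code_word):
        z = 0                      # boundary input to the first cell
        for bit in code_word:      # seven cells for a 7-bit code word
            z = parity_cell(bit, z)
        return z                   # final Z: 1 only if the count of 1s is odd

    print(odd_parity([1, 0, 1, 1, 0, 0, 1]))  # four 1s -> 0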

To create an iterative design, the number of cells, the number of data inputs to each cell, and the number of different states that must be recognized by each cell all need to be determined. The number of different states defines the number of lines to the next cell (usually carrying binary-encoded information).

The fourth proposal was presented by Douglas Lewin and David Protheroe, published in Design of Logic Systems (1992, p. 369). According to them, iterative networks were widely used in the early days of switching systems, when relays were the major means of realizing logic circuits. These techniques fell into disuse when electronic logic gates became widely available. Although it is possible to implement an arbitrary logic function in the form of an iterative array, the technique is most often applied to functions which are ‘regular’ in the sense that the overall function may be achieved by performing the same operation upon a sequence of data bits. Iterative cell techniques are particularly well suited to pattern recognition and to encoding and decoding circuits with large numbers of parallel inputs.

The method is also directly applicable to the design of VLSI circuits and has the advantage of producing a modular structure based on a standard cell which may be optimized independently in terms of layout, etc. Circuits containing any number of input variables can easily be constructed by simply extending the network with more cells. The authors examine iterative circuits through several examples.

Suppose a logic system can be decomposed into a number of identical subsystems; then if we can produce a design for the subsystem, or cell, the complete system can be synthesized by cascading these cells in series. The problem is thus reduced to specifying and designing the cell, rather than the complete system.

The fifth proposal, presented by Brian Holdsworth in Digital Logic Design (1993, pp. 165-166), states that iterative networks, widely used before the introduction of electronic gates, are again of some interest to logic designers as a result of developments in semiconductor technology. MOS pass transistors, which are easily fabricated, are used in LSI circuits, where they require less space and allow higher packing densities. One of the major disadvantages of hard-wired iterative networks was the long propagation delay caused by the time taken for signals to ripple through a chain of iterated cells. This is no longer such a significant disadvantage, since the lengths of the signal paths on an LSI chip are much reduced in comparison with the hard-wired connections between SSI and MSI circuits. However, the number of pass transistors that can be connected in series is limited because of signal degradation, and it is necessary to provide inter-cell buffers to restore the original signal levels. One additional advantage is the structural simplicity and the identical nature of the cells, which allow a more economical circuit layout.

The sixth proposal is a book by Brian Holdsworth and R.C. Woods, Digital Logic Design (2002, p. 135). In this book they discuss the structure of iterative networks, stating that an iterative network consists of a number of identical cells interconnected in a regular manner, as shown in the figure, where the variables X1…Xn are termed primary input signals, the output signals are termed Z1…Zn, and the variables a1…an+1 are termed secondary inputs or outputs depending on whether these signals are entering or leaving a cell. An iterative circuit may be defined as one which receives the incoming primary data in parallel form, where each cell processes the incoming primary and secondary data and generates a secondary output signal which is transmitted to the next cell. Secondary data is transmitted along the chain of cells, and the time taken to reach steady state is determined by the delay times of the individual cells and their interconnections.

According to Charles H. Roth, Jr. and Larry L. Kinney, in Fundamentals of Logic Design (2004, p. 519), many design procedures used for sequential circuits can be applied to the design of iterative circuits, which consist of a number of identical cells interconnected in a regular manner. Some operations, such as binary addition, naturally lend themselves to realization with an iterative circuit because the same operation is performed on each pair of input bits. The regular structure of an iterative circuit makes it easier to fabricate in integrated-circuit form than circuits with less regular structures. The simplest form of iterative circuit consists of a linear array of combinational cells with signals between cells travelling in only one direction. Each cell is a combinational circuit with one or more primary inputs and possibly one or more primary outputs. In addition, each cell has one or more secondary inputs and one or more secondary outputs; the signals produced carry information about the “state” of one cell to the next cell. The primary inputs to the cells are applied in parallel, that is, at the same time, and the signals then propagate down the line of cells. Because the circuit is combinational, the time required for the circuit to reach a steady-state condition is determined only by the delay times of the gates in the cells. As soon as steady state is reached, the outputs may be read. Thus, an iterative circuit can function as a parallel-input, parallel-output device, in contrast with a sequential circuit, which receives its inputs as a sequence in time.

Example: the parallel adder is an iterative circuit with four identical cells; a sketch follows below. The serial adder uses the same full-adder cell as the parallel adder, but it receives its inputs serially and stores the carry in a flip-flop instead of propagating it from cell to cell.
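The following minimal Python sketch (the cell and function names are mine, not the book’s) models the four-cell parallel adder: the same full-adder cell is instantiated once per bit position, and the carry ripples from cell to cell:

    def full_adder_cell(a, b, c_in):
        # One cell: primary inputs a, b; secondary (carry) input c_in.
        s = a ^ b ^ c_in
        c_out = (a & b) | (a & c_in) | (b & c_in)
        return s, c_out

    def parallel_adder(a_bits, b_bits):
        # Bits are given least-significant first; four cells for 4-bit words.
        carry = 0                             # boundary carry input C0
        sums = []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder_cell(a, b, carry)
            sums.append(s)
        return sums + [carry]                 # final carry is the boundary output

    # 0101 (5) + 0011 (3) = 1000 (8), written LSB first:
    print(parallel_adder([1, 0, 1, 0], [1, 1, 0, 0]))  # [0, 0, 0, 1, 0]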

The final proposal was authored by John F. Wakerly, published in Digital Design Principles (2006, pp. 459, 462, 756). In this book he states that an iterative circuit is a special type of combinational circuit, with the structure shown in the figure below. The circuit contains n identical modules, each of which has both primary inputs and outputs and cascading inputs and outputs. The leftmost cascading inputs, shown in the figure, are called boundary inputs and are connected to fixed logic values in most iterative circuits. The rightmost cascading outputs are called boundary outputs and usually provide important information. Iterative circuits are well suited to problems that can be solved by a simple iterative algorithm:

  1. Set C0 to its initial value and set i to 0.
  2. Use Ci and PIi to determine the values of POi and Ci+1.
  3. Increment i.
  4. If i < n, go to step 2.

In an iterative circuit, the loop of steps 2-4 is “unwound” by providing a separate combinational circuit that performs step 2 for each value of i.
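As a hedged illustration of this unwinding (the cell and its naming below are my own example, not taken from the book), consider a word-equality comparator in which each module ANDs the “equal so far” cascading input with the comparison of one bit pair:

    def comparator_cell(eq_in, x, y):
        # One module: cascading input eq_in, primary inputs x and y.
        return eq_in & (1 if x == y else 0)

    def equal_words(xs, ys):
        eq = 1                    # boundary input: words assumed equal so far
        for x, y in zip(xs, ys):  # one module per bit position, fully unwound
            eq = comparator_cell(eq, x, y)
        return eq                 # boundary output: 1 iff every bit matched

    print(equal_words([1, 0, 1], [1, 0, 1]))  # 1
    print(equal_words([1, 0, 1], [1, 1, 1]))  # 0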

Each of the works reviewed makes an important contribution to overcoming the disadvantages and problems of iterative circuits. This progress motivates an investigation of sequential circuits for a better understanding of iterative circuits.

CHAPTER 3

3. OVERVIEW OF DESIGN METHODS FOR ITERATIVE CIRCUITS

3.1 Iterative design

Iterative design is a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Changes and refinements are made to the most recent iteration of a design based on the results of testing. This process improves the quality and functionality of a design. In iterative design, interaction with the designed system is used as a form of research to inform and evolve the project through successive versions.

3.2 Iterative Design Process

The iterative design process may be applied throughout the new-product development process, since in the early stages of development changes are easy and affordable to implement. The first step in the iterative design process is to develop a prototype. To obtain unbiased opinions, the prototype should be examined by a focus group that is not associated with the product. The information gained from the focus group should be integrated and synthesized into the next stage of the iterative design. This process is repeated until an acceptable level is achieved for the user.

Figure 3.1 Iterative Design Process

3.3 Iterative Circuits

Iterative Circuits may be classified as,

  • Combinational Circuits
  • Sequential Circuits.

A generalized combinational circuit built from gates has m inputs and n outputs. Such a circuit can be built as n different combinational circuits, each with exactly one output. If the entire n-output circuit is constructed at once, however, some important sharing of intermediate signals may take place; this sharing drastically decreases the number of gates needed to construct the circuit.

In some cases we might want to minimize the number of transistors. In others, we might want minimal delay, or we may need to reduce the power consumption. Normally a mixture of such criteria must be applied.

In combinational logic design, the technique of expressing verbal statements for a logic circuit in the form of a truth table is sometimes inadequate. For a simple network, a terminal description will often suffice, but for more complex circuits, and in particular when relay logic is to be employed, the truth-table method can lead to laborious and inelegant solutions. Iterative cell techniques are particularly well suited to pattern recognition and to encoding and decoding circuits with a large number of parallel inputs; circuit specification is simplified, large-variable problems are reduced to a more tractable size, and the method is directly applicable to the design of VLSI circuits. It should be pointed out, though, that the speed of the circuit is reduced because of the time required for the signals to propagate along the network, and the number of interconnections is considerably increased. In general, iterative design does not necessarily result in a more minimal circuit. Because it produces a modular structure, circuits containing any number of input variables can easily be constructed by simply extending the network with more cells. Suppose, for example, that a logic system can be decomposed into a number of identical subsystems; then if we can produce a design for the subsystem, or cell, the complete system can be synthesized by cascading these cells in series. The problem is thus reduced to specifying and designing the cell, rather than the complete system.

In general, we define a synchronous sequential circuit, or just sequential circuit, as a circuit with m inputs, n outputs, and a distinguished clock input. The circuit is described with the help of a state table; latches and flip-flops are the building blocks of sequential circuits.

The definition of a sequential circuit is simplified here in that the number of different states of the circuit is completely determined by the number of outputs. Hence, building on combinational circuits, we discuss a straightforward method that in the worst case may waste a large number of transistors: for a sequential circuit with m inputs and n outputs, the method uses n D flip-flops (one for each output) and a combinational circuit with m + n inputs and n outputs (a sketch follows below).
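A minimal sketch of this method (assuming, for illustration, a Moore-style circuit whose outputs are simply the flip-flop contents; all names are mine):

    def sequential_step(comb, state_bits, input_bits):
        # One clock tick: the combinational block receives the m inputs plus
        # the n flip-flop outputs and computes the n next-state bits.
        return comb(input_bits + state_bits)

    # Example combinational block: a 2-bit counter with an enable input
    # (m = 1 input, n = 2 state/output bits).
    def counter_logic(bits):
        en, q0, q1 = bits
        value = (q1 << 1) | q0
        if en:
            value = (value + 1) % 4
        return [value & 1, (value >> 1) & 1]

    state = [0, 0]
    for _ in range(3):                    # three clock ticks with enable high
        state = sequential_step(counter_logic, state, [1])
    print(state)                          # [1, 1] -> the counter reached 3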

3.4 Iterative Circuits-Example

An iterative circuit is a special type of combinational circuit, with the structure shown in the figure. It contains n identical modules, each of which has both primary inputs and outputs and cascading inputs and outputs. The leftmost cascading inputs are called boundary inputs and are connected to fixed logic values in most iterative circuits. The rightmost cascading outputs are called boundary outputs and usually provide important information.

Quite often in combinational logic design, the technique of expressing verbal statements for a logic circuit in the form of a truth table is inadequate. Iterative circuits are well suited to problems that can be solved by a simple iterative algorithm:

  • Set C0 to its initial value and set i to 0.
  • Use Ci and PIi to determine the values of POi and Ci+1.
  • Increment i.
  • If i < n, return to step 2.

In an iterative circuit, the loop of steps 2-4 is “unwound” by providing a separate combinational circuit that performs step 2 for each value of i.

3.5 Improving the testability of Iterative Circuits

As stated by A. Rubio et al. (1989, pp. 240-245), the increase in the complexity of integrated circuits and the inherent increase in the cost of the tests carried out on them are making it necessary to look for ways of improving the testability of iterative circuits. Integrated circuits structured as iterations of identical cells have, because of their regularity, a set of advantages that make them attractive for many applications. Among these advantages are simplicity of design (owing to the structural repetition of the basic cell), ease of manufacturing and test, fault tolerance, and their interest for implementing concurrent algorithmic structures. The testability of iterative circuits is studied here; the figure below illustrates the typical organization of an N-cell one-dimensional iterative circuit (all signals go from left to right), although the results can be extended to a stable class of bilateral circuits.

The N cells have identical functionality. Every cell (i) has an external input yi and an internal input xi coming from the previous cell (i-1). Every cell generates an external output signal ^yi and an internal output ^xi that goes to the following cell (i+1). The following assumptions are made about these signals:

  1. All the yi vectors are independent.
  2. Only the x1, y1, y2, …, yn signals are directly controllable for test procedures.
  3. Only the ^y1, ^y2, …, ^yn signals are directly observable.
  4. The xi and ^xi signals are called the states (input and output states, respectively) of the ith cell and are not directly controllable (except x1) nor observable (except ^xn).

Kautz gives the conditions on the basic cell functionality that warrant exhaustive testing of each of the cells of the array. These conditions assure the controllability and observability of the states. In circuits that satisfy these conditions, the length of the test increases linearly with the number of cells of the array, with a resulting length that is shorter than the corresponding length for other implementation structures.

A fundamental contribution to the easy testability of iterative circuits was made by Friedman. In his work the concept of C-testability is introduced: an iterative circuit is C-testable if a cell-level exhaustive test with a constant length can be generated, meaning the length is independent of the number of cells composing the array (N). The results have been generalized in several ways. In all these works it is assumed that there is only one faulty cell in the array. Cell-level stuck-at (single or multiple) and truth-table fault models are considered. The set T of test vectors of the basic cell is formed by a sequence (whatever the order may be) of input vectors to the cell.

Kautz proposed the cell fault model (CFM), which was adopted by most researchers in testing ILAs. Under the CFM, only one cell can be faulty at a time. As long as the cell remains combinational, the output functions of the faulty cell may be affected by the fault. In order to test an ILA under the CFM, every cell must be supplied with all of its input combinations. In addition, the output of the faulty cell must be propagated to some primary output of the ILA. Friedman introduced C-testability: an ILA is C-testable if it can be tested with a number of test vectors that is independent of the size of the ILA.

The target of research in ILA testing was the derivation of necessary and sufficient conditions for many types of ILAs (one-dimensional with or without vertical outputs, two-dimensional, unilateral, bilateral) to be C-testable. The derivations of these conditions were based on the study of the flow table of the basic cells of the array. In the case of an ILA which is not C-testable, modifications to its flow table (and therefore to its internal structure) and/or modifications to the overall structure of the array were proposed to make it C-testable. Otherwise, a test set with length usually proportional to the ILA size was derived (linear testability). In most cases, modifications to the internal structure of the cells and/or the overall structure of the ILA increase the area occupied by the ILA and also affect its performance.

ILA testing considering sequential faults has also been studied; sequential fault detection in ripple-carry adders was considered with the target of constructing a shortest-length test sequence, and sufficient conditions for testing one-dimensional ILAs for sequential faults were given. It was shown that whenever the function of the basic cell of an ILA is bijective, the ILA can be tested with a constant number of tests for sequential faults, and a procedure to construct such a test set was also introduced.

The following considerations form the basis of our work. Many computer-aided design tools are based on standard cell libraries. While testing an ILA, the best that can be done is to test each of its cells exhaustively with respe