
Managing Web Application Security Computer Science Essay

Changes in business environments and advances in web technologies have made the services of corporate, public and private firms far more widespread over the web through the use of web applications. Although web services can provide greater convenience, flexibility and efficiency, they also carry a great number of threats, which can pose significant risk to the organization if not properly dealt with. This paper discusses the various vulnerabilities that web applications present and the best practices for applying countermeasures to mitigate those risks.

II. Risks of Web Technologies

A. Omnipresence

In today's e-world, the activities of web users are increasing day by day on the potentially vulnerable World Wide Web. The impressive new applications available today are developed using various tools and technologies whose ease and convenience of implementation have made them popular and widely used. Today almost all private and government organizations depend on web technologies and applications to carry out their essential everyday operations.

B. Web Application Vulnerabilities

Much of the confidential and financial business of companies and individuals is carried out over the web, which is prone to many security risks such as hacker attacks, SQL injection, website intrusion and denial-of-service attacks. There is an alarming increase in the number of attacks as hackers find new ways to attack systems.

The vulnerabilities being attacked nowadays are very different from those exploited in past years. While some attacks were carried out purely for the psychological satisfaction of the attacker, others aim at stealing sensitive data such as credit-card numbers, bank account information and confidential organizational data. This has forced organizations to spend more on security.

C. Role of Management

Web application security should be managed by making the right decisions and applying the right techniques. Periodic training sessions should be conducted to make developers aware of new types of attacks and threats and of how to implement effective security mechanisms to defend their applications or modules against them. Securing web applications should begin at the start of the project rather than being added at the end of the development process. Management should ensure that all necessary precautions are taken, and that applications are thoroughly tested, before they are released to the outside world.

III. Top Security Risks and Countermeasures

This section discusses three of the top ten security risks of 2010 according to ‘The Open Web Application Security Project’ (OWASP).

A. Injection

Injection is the process of transmitting malicious code to another system through a web application. Malicious commands written in query or scripting languages such as SQL, JavaScript, Python or Perl are passed to an interpreter behind the web application in order to exploit vulnerabilities in the system.

Although there are many types of injection attacks, SQL injection attacks are the most widespread.

1. SQL Injection

An SQL injection attack involves the insertion of malicious SQL strings into the input parameters of SQL statements. This can cause the database to disclose sensitive information and allows an attacker to view, modify or delete the data it holds. For example, consider the following legitimate SQL statement that retrieves the rows matching the username supplied as input:

SELECT * FROM TableName WHERE username = '$username'

If an attacker supplies input that turns the statement into

SELECT * FROM TableName WHERE username = '' OR '1'='1'

it retrieves all the rows in the table, because '1'='1' is always true, thus compromising sensitive information.
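As a hedged illustration of how such a statement arises (the class, method and variable names below are hypothetical), the following Java sketch builds the query by naive string concatenation, so the attacker input ' OR '1'='1 becomes part of the SQL itself:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch of the vulnerable pattern: user input is concatenated into the SQL text.
public class VulnerableLookup {
    static ResultSet findUser(Connection db, String username) throws SQLException {
        // If username is:  ' OR '1'='1
        // the text becomes: SELECT * FROM TableName WHERE username = '' OR '1'='1'
        String query = "SELECT * FROM TableName WHERE username = '" + username + "'";
        Statement stmt = db.createStatement();
        return stmt.executeQuery(query);   // the injected condition returns every row
    }
}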

Countermeasures and Prevention

Although injection attacks can be detected and avoided relatively easily, more and more attacks occur because dynamic queries are built directly from user input. An attack can be prevented by validating user input and by using parameterized queries and stored procedures. Parameterized statements include placeholders such as '?' for the user input data, so the input is bound as data and cannot be substituted into the statement as malicious SQL. Using parameterized queries along with stored procedures is found to be effective, as stored procedures use code already defined in the database to take the input data from the application. However, the use of these two methods can affect the system's performance, so another technique is to escape user-supplied input using strong escaping schemes pertinent to each kind of statement, so that the DBMS can differentiate between user input and the developer's code. It is advisable to apply string escaping on both the client side and the server side to provide stronger security.
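As a minimal sketch of the parameterized approach described above (assuming a JDBC connection and the same hypothetical table), the user input is bound to the '?' placeholder, so the database treats it purely as data:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of the parameterized (safe) pattern using a prepared statement.
public class SafeLookup {
    static ResultSet findUser(Connection db, String username) throws SQLException {
        // The driver fills the '?' placeholder; quotes in the input cannot
        // terminate the string literal or change the structure of the query.
        PreparedStatement ps =
            db.prepareStatement("SELECT * FROM TableName WHERE username = ?");
        ps.setString(1, username);   // the input ' OR '1'='1 is matched literally
        return ps.executeQuery();
    }
}

Used this way, the earlier attack string simply fails to match any username instead of altering the query.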

B. Cross-Site Scripting (XSS)

Cross-site scripting is the process of injecting malicious code into a trusted website through a vulnerable web application, or of sending malicious script to be executed in a user's web browser. This may result in the compromise of sensitive information, such as the theft of passwords, cookies and session information stored in the browser, defacement of the website, and phishing attacks. These attacks commonly arise from message boards, discussion boards, newsgroups, mail messages and forums. An attacker may embed malicious code in tags such as <script>malicious code</script>. When a user views the message, the code may be executed automatically, thereby exploiting the vulnerability.

1. Stored XSS attacks

The injected code is permanently stored on the server, for example in a database, visitor log or message field. The malicious code is retrieved whenever users request the stored information, so the attack propagates to every user who requests it.

2. Reflected XSS attacks

Malicious code is sent to the server through a specially crafted request, for example via a form; the server reflects it back in its response to the user's browser. The browser executes the code because the response appears to come from a trusted source.

Prevention and Countermeasures

XSS attacks are difficult to identify and prevent. One method of defence is 'input filtering': validating user-supplied data and omitting or escaping characters that have special meaning in HTML before the data is stored or echoed back to the browser.
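As a hedged sketch of such filtering (framework-independent; the class and method names are illustrative), the characters that carry meaning in HTML can be replaced with entity equivalents before user-supplied text is stored or echoed back to the browser:

// Minimal HTML-escaping sketch: neutralises injected <script> tags and
// attribute breakouts by encoding the characters the browser treats specially.
public class HtmlEscaper {
    static String escape(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();   // <script>alert(1)</script> is rendered as plain text
    }
}

Escaping on output in this way complements, rather than replaces, validating input on the server.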

Design of Internet Service Application on WAP Device

Abstract

The ultimate aim of this project is to design and develop a reliable internet service application that works on WAP-enabled wireless handheld devices such as mobile phones and PDAs (personal digital assistants), with the main focus on mobile phones. The goal is to design a WAP application that stock market users can use to receive and check related information, such as up-to-date stock values, stock quotes, price details and trading status, directly from their mobile devices.

This project concerns the development and testing of a WAP-based system for wireless handheld devices such as mobile phones, to help stock market users while on the move; for instance, users can access the internet even while travelling from one geographical location to another. The idea of this application is not only to deliver stock market updates reliably and securely, but also to make the user rapidly aware of market changes in real time, whether in the office, at home or travelling, meeting the demand for communication especially while 'on the move'.

The stock market system will be based on the WAP architecture for mobile devices. This project will demonstrate how practical it is to use the mobile internet on a mobile phone to check the stock market value of a particular company while on the move: the user can access the current price of a stock and its latest information at the click of a button, without being physically present at the stock market office or watching television somewhere else.

The project framework aims to create a WAP application based on a server-side application, which could be operated by a company in a joint venture with the users, who supply the input data needed to produce a given result. It is also important to be aware of how users adopt modern applications to improve their capabilities; research suggests that user-facing application projects tend to fail if they do not fully capture the users' demands and requirements for their particular needs. The potential market for such a system will be investigated further in order to refine a set of client requirements. There should be a good response in the market for such an application among users who have invested substantial amounts in bonds, shares and gilts, as it is designed to make life easier by enabling the user to decide when to buy or sell stock at a particular time.

1. Introduction

1.1 Brief

Stocks are considered risky investments; the value of a stock can peak and drop at any time, depending entirely on the state of the economy. For instance, suppose I am a stockholder with 500 shares in a private company. The value of my shares is currently good, and I want to know the market value of shares in other companies. Likewise, most stockholders are keen to know the latest information in real time. However, at present they only become aware of the situation when they have a chance to watch the news or log on to a computer. Nowadays, most people on the move carry at least one mobile device, a mobile phone or a PDA, so why not deliver stock market information directly to these phones as soon as possible? This can be done on either a push basis or a pull basis: a push message automatically sends the latest information to every user who subscribes to the push service, whereas with a pull message the user must first send a request in order to receive the required response.

Most people nowadays carry a mobile phone, through which they can also access the internet and other communication services. Mobile phones are currently cheap and popular compared with other kinds of electronic device. A mobile-based stock market tracking system will allow the user to check the latest stock values in the market.

There is good market scope for mobile communication due to its feasibility and its support for communication at any time. Another important aspect of the mobile internet is that a user can stay up to date on essential information such as weather, breaking news and email. It is a viable alternative to the traditional fixed internet on PCs, and the implementation of the mobile internet is very convenient for users.

As Benjamin Franklin so rightly said, "Forewarned is forearmed." So, from the consumers' point of view, yes, there is a potential need for this application. Moreover, most mobile phone network providers nowadays offer a certain amount of free internet browsing on their handsets each month. Users would prefer to use this free allowance to check stock information rather than dialling or texting a phone number, usually charged at a premium rate, to obtain the same information. In terms of benefits for the stock exchange company, this would improve customer service and keep the company up to date with new technology. Staff who provide information over the phone, for those customers who still feel at ease with that channel, would be able to offer a better quality of service because they would not be under as much pressure from calls waiting in the queue. Moreover, staff would have fewer enquiries to deal with, as customers would be informed well in advance.

As authors Ben Salter and Alex Michael mention, "thirteen per cent of mobile subscribers reported accessing news and information via a mobile browser in June 2004." [REF 1]

Both authors believe that the current number of users accessing the mobile internet through a browser will increase in the future.

1.2 Relevance to Course Modules

Product-based projects are really challenging to undertake, but they are possible and manageable thanks to the modules taught over the degree, a few of which were very helpful in understanding the critical parts of the project. Some of the important modules that are helpful to this project are discussed below.

1.2.1 CCM2418 (Digital and Mobile Systems) and CCM2420 Data Communications

Both of these modules were very useful and gave me fundamental communication knowledge. We started with how simple data passes through the different stages of processing to the final destination of the information, and at the web-connection level we learned what is involved in modulation, demodulation, multiplexing and error checking.

1.2.2 CCM3415 (Advanced Network Design and Security) and CCM2412 (Network Routing and Protocols)

Both of these modules were important and necessary because most of the topics we learned were associated with computer networks, which play a very important role in this project. The areas taught in these modules include the fundamentals of networking, such as the OSI layer model, and network protocols, the sets of rules that govern a network; together these form the basis of the internet, the largest network of all.

1.2.3 CCM3413 (Mobile Internet Applications and Services)

This module inspired me and gave me the knowledge to choose this project. I learned the fundamentals of two-tier architecture, WAP technology and PHP, topics that play a main role in this project. The module gave a full overview of the internet and its uses on different devices, with the main focus on the mobile internet, which led me to undertake this project.

1.2.4 CCM2426 (Professional Project Development)

In this module we learned how to work in a group and function as a team. We all worked together and combined our work into one report. We also learned about business ethics and how to apply them in real life, and we discussed computer science ethics, including some very important issues in the modern computing field such as legal issues, copyright and piracy. These areas will be more helpful when we start marketing and become involved in business-related applications, where such strategies must be applied. In the past few years piracy has increased dramatically due to the availability of high-speed internet, and this has affected the computer business, which is seeking legal expertise to avoid this kind of problem in future.

1.2.5 CCM3422 (Computer Communications Project)

This module plays a major role in this project; in line with the principles of the Computer Communications Project module, it involves scheduling and estimation techniques.

1.3 Literature Review

My project consists of many different components which come into play at different steps of its construction and completion. To turn this project into one working model or product, it has to go through many tasks and topics, and every topic in this project has been continually analysed and studied so that all aspects fit and work together as one whole product. The project draws on different topics and areas, and research has been taken from many sources, including textbooks, online materials, journals and past projects. Some important resources used for the research are mentioned below, with reviews and definitions.

According to D. Hough and K. Zafar, there are millions of users subscribing to mobile networks, including everyone from computing executive directors to unskilled ordinary labourers (Bulbrook). With the benefits of WAP-enabled mobile devices such as mobile phones, personal digital assistants (PDAs) and pagers, users can access the internet right on the device and use phone banking, price checking, product purchase or sale, sports results and much more.

According to authors E. Evans and P. Ashworth, the number of wireless devices, and of users who communicate and interact on the move, is increasing dramatically; a survey shows that there are now at least one billion wireless subscribers, and this will soon double (P. Ashworth and E. Evans).

Many users subscribe to and use WAP. With the help of WAP, people get much the same format and options as on a personal computer, depending on their convenience, and this will lead to a big revolution of the internet on mobile devices and change the traditional way of accessing the internet. Eventually the public will notice the real impact of WAP devices on the way the internet is accessed; once they appreciate the benefits of using this technology, more users will subscribe to the networks, and this new form of communication will grow rapidly as people start using the new technology and come to rely on its advantages (Farrell).

In Karli Watson's book Beginning WAP, it is noted that among the factors shaping today's communication world, wireless communication is having a major impact: the number of subscribers is increasing at a dramatic rate, as are the amount of data interchanged and the speed at which business and other services are accessed. In less than ten years it has developed tremendously from the initial level of internet use on the traditional network.

There were, however, doubts about the reliability and usability of WAP access. In 2000, Ramsay and Nielsen noted that WAP was not yet really functional or popular, but predicted that it would gain a good reputation and become a well-developed system after 2001; as the scale of operation grows, more difficulties would be observed.

An introduction to the Unified Modeling Language (UML) helps and guides the work carried out with the language on which the book focuses; the author helps the reader keep a clear view of UML and avoid getting lost in the cobwebs of methodology.

Fred R. McFadden and Jeffrey A. Hoffer are interested in information resource management (IRM), which considers information in the same way as everyday methods and equipment; in other words, information is a key component and business resource and must be treated like other assets such as people, equipment and so on. McLeod and Brittain suggest the basic ideas of IRM from their point of view.

Ian Sommerville's book is aimed at undergraduate and graduate students undertaking projects in software engineering, who have at least taken some fundamental engineering courses such as programming. The book covers general and basic ideas as well as updates on important topics such as dependable systems, requirements engineering and architectural design.

1.4 Analysis from Literature Review

This application is designed to track stock market status reports through a WAP-enabled device. The idea is to provide a service that lets users check the stock price of their chosen company on a particular date, with the response sent back to the user by means of a programmed database and related techniques.

The main purpose of this application is to reduce time and to improve reliability and efficiency of use compared with current applications or systems.

In Ramsay and Nielsen's personal view of WAP, they believed that WAP has had a very big impact on the technology we have been using ever since the system was introduced to the market; it was a breakthrough among competing technologies, though it still drew stronger criticism than the others.

P. Ashworth and E. Evans were authors keen to understand the future of mobile communication; in their book they set out what m-commerce looks like, from this decade to the next, in terms of development opportunities. M-commerce is the combination of the traditional web system and wireless connectivity.

Farrell was interested in studying the future of WAP technology for accessing the internet on mobile services and tried to explain the benefits for people of using their mobile devices to access a web browser or the internet. As our lives become busier day by day, technology has to help by adapting its systems and services; for instance, the internet is traditionally accessed from a system on a fixed network, but the technology has developed into wireless internet, which is made possible by WAP.

According to Ian Sommerville in his book Software Engineering, a project can be developed and viewed in eight different parts, including a brief introduction to software engineering, application design, coding, validation and verification, critical systems, management and software evolution. The book also covers distributed system architectures and software development. Critical software systems must integrate reliability, accessibility, security and availability, so these are treated as separate topics.

Jeffrey A. Hoffer and Fred R. McFadden, in their book Modern Database Management, discuss databases in a larger context: how a large body of information resources is stored and managed in a dedicated database.

Craig, on the other hand, discusses the software life cycle and says that UML can be used with different life-cycle development models, i.e. the spiral model, the incremental model and the waterfall model; UML itself is based on the object-oriented programming paradigm. So I am using one of the best-known software life-cycle models, the waterfall model.

2.1 Aims

The ultimate aim of this project is to create and build a stock market quote service that allows the user to request a quote for a chosen company, with access provided by selecting appropriate options on the device according to the customer's needs. A database will be designed to process the data from the user, and users will be able to pass in their own information.

2.2 Objectives

The objectives of the project are divided into two categories, and each category is further divided into subsections so that every phase can be discussed individually; this will lead to the best possible end result for the final product.

Design-based objectives:

The design-based objectives involve framing the specified requirements around the user's needs. This will really help the consumer, and the design is approached from a business perspective to achieve the best possible result. To build the system, the following key points must be accomplished:

  • Modelling the system from the requirement specification
  • Implementing the requirement specification of the system using an appropriate development tool
  • Testing, evaluating and debugging the system against the requirement specification
  • A created (mock) database will be used as the data model for this project instead of the live stock market databases (a minimal schema sketch follows this list)
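As a minimal sketch of what such a created database might contain (the connection details, table and column names below are assumptions for illustration, using the MySQL server listed later under the development requirements), a single quotes table is enough to answer the application's requests:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Creates a hypothetical mock table standing in for a live stock market feed.
public class MockStockDb {
    public static void main(String[] args) throws SQLException {
        Connection db = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/stockdb", "user", "password");
        Statement stmt = db.createStatement();
        // One row per company per trading day; enough for quote and history queries.
        stmt.executeUpdate(
            "CREATE TABLE IF NOT EXISTS stock_quote (" +
            "  symbol     VARCHAR(10)   NOT NULL," +
            "  quote_date DATE          NOT NULL," +
            "  price      DECIMAL(10,2) NOT NULL," +
            "  PRIMARY KEY (symbol, quote_date))");
        stmt.close();
        db.close();
    }
}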

Research-based objectives:

In order to deliver up-to-date software or systems, we must stay focused on, and be certain about, the changes that will happen to the current system in terms of future developments and challenges, which occur frequently in the software industry.

There are some important ways of maintaining this software awareness, achieved by following a few steps:

  • Examining people who use such services on a daily basis and observing the main factors in stock market assessment, by taking surveys and gathering feedback on the usability of the web pages
  • Offering alternatives to the currently available stock exchange quote options and setting out their advantages and drawbacks.

2.3 Software Life Cycle and Methodology for the Project

A software life cycle is related to our everyday processes and runs through each and every stage of a product's life. It is a commonly known fact that a relationship exists between process quality and product quality: the quality of the process directly shapes the product that is created. Good software development is neither inevitable nor automatic, though it is easy to leave a product as it is. A software life cycle for the project is a model used to describe the phases or stages that software development goes through. Another way of putting it is that "a software life cycle is a view of the activities that happen during software development" [REF 5]. These factors help us to understand and improve the basic characteristics and activities of the software life cycle.

The software life cycle is the best way to describe the overall process involved in the project. Some of its important purposes are described below.

  • Helps the project manager to keep the project on track
  • Describes the basic functions that are expected to be performed during the period of software development
  • Describes the major stages or phases of software development
  • Provides a general outline of detailed tasks so that we know every process involved in software development

This project is developed using the waterfall model, a method for the software development process that goes through a series of precise, step-by-step stages, so that the process looks like a waterfall. The advantage of using the waterfall model is that we get an organised, chronological approach to the software development of our project, beginning at the system level and moving down through analysis, coding, testing and management. If the software is developed for a large-scale system, the effort needs to be established at the overall system level, with particular task requirements assigned to the appropriate divisions of the software.

2.3.1 Waterfall Model:

The waterfall model is commonly used and widely adopted in the software development field. With the waterfall model, software development is very simple and straightforward.

Software development becomes as simple as the diagram illustrates, arguably easier than with any earlier model. The waterfall model develops the software as a simple, ordered sequence of segments, as the figure below illustrates; every segment, stage or phase is given a distinct goal so that each segment's contribution to the software development is achieved successfully.

Software Requirement

In software development it is vital to gather the requirements of the software, and the analysis process is very important for a software-based project in order to build up and focus the software. To understand the nature of the program to be built, one should understand and analyse the requirements of the software and the needs of the domain. To gather the requirements we have to form and follow a simple method, with a number of topics that need to be clarified in order to get this project started.

Software Design

Software design is naturally a major and complex step. For the design process to succeed, we have to translate the requirements into a representation of the software. There are many reasons and factors that can introduce errors into a software design.

Coding/ Programming

Software design covers various areas that must be brought together in a machine-readable form, so code generation takes place in this step. In this step we actually implement and test the code of our project.

System evaluation/Testing

Once the coding is done, the testing process begins. This step examines the consistency and logic of the software to make sure that the given statements are correct and have been tested. Another important part of testing is external functional testing, which is carried out to make sure that defined inputs pass through the system and produce the expected results.

Maintenance

Changes or errors can occur once the product has reached the end user, the consumer, so it is important for the software to adapt to real-world use. Requests for change can come from either party; for instance, the consumer may wish to change the options available in the current system. So it is vital for software maintenance to keep all options and features up to date.

Project Management

Project management activities involve schedules, planning, organisation and the creation of tasks and documents; a risk plan, an SRS, a configuration management plan and a software development plan were produced. For my project I decided to use a Work Breakdown Structure for project management.

2.4 Work Breakdown Structure (WBS)

A WBS is a breakdown method commonly used in computing projects to organise all the tasks of the project so that they come together into one workable system. This sort of methodology helps to keep our project on the right track; in our case, to produce a WAP-based stock market system for mobile devices.

3 Problem definitions

For someone who is travelling or away from their workplace, it is often not possible to gain access from a fixed internet-enabled device, whereas in the home environment we can access the internet via the local network. Mobile devices play a major role in today's communication world and have become so usual and common that being without one feels like a break in the progress of telecommunications. Fortunately, almost all of the latest mobile phones and devices are WAP-enabled, which encourages users to use this kind of service; the connection is made by the device connecting to a server provided by the network service provider. The aim is therefore to provide a service that keeps market users, who spend a proportion of their typical day accessing and tracking the stock market, informed of the latest stock variations, stock quotes, price details, trading status and current company news and profiles.

The big question is whether there is a potentially large-scale need for such a service among stock market users. Stock market information is not easily available unless you make a phone request or log on to the internet, and while on the move it is often very difficult to get internet access without a WAP-enabled mobile phone or PDA.

It is easy to see that technology is the backbone of today's modern world, and in this technological world the most important role is played by software systems. Almost everyone comes into contact with modern technology every day. A few technological developments have made a tremendous impact by presenting new forms of application performance, added features and new ways of interaction between the user and the system; these systems are capable of doing anything from meeting a low-level basic need, like a TV remote control, to running highly sophisticated and sensitive systems, like bank transactions, all designed for one single purpose: to make life easier.

Wired communication systems have as their main disadvantage the lack of mobility; wireless systems have the big advantage of supporting mobility of system use.

In this modern world, technology keeps getting better, not only in the technology itself but also in the way we communicate. The development of the mobile internet, mobile devices, applications and the provision of services such as the one proposed here attract considerable attention and effort at present; a number of studies and research efforts are under way to find an efficient mobile internet for mobile phones and other wireless devices.

One general requirement is to ensure the security of the client: a variety of viruses and attackers are waiting to corrupt or steal our data, so it is important to guide the user to the relevant information easily and quickly. There are two ways to browse safely on the mobile internet:

  1. Allow the user to browse the particular market report on the relevant web site in the micro-browser
  2. Allow data streaming so that the information is captured in memory

The user interface is another issue on mobile devices; because of the small screen, only a little information can be displayed, and it must be presented in a simple manner.

Brief product description:

  • It is a client-server based application which will support multiple clients. For example, the user will input a share company name, which will be sent as a request, and the server should respond by sending the real-time market value at which the company's shares have traded over the previous 40 weeks (a minimal request-handling sketch follows this list)
  • It should be simple, with a suitable user interface and few user inputs, so that the result is obtained as quickly as possible
  • It should also support push services, where certain stockholders will be automatically alerted on their mobiles should any change in the service status occur. The server should automatically push the information to the specific registered mobile users.
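The following is a hedged sketch, not the project's actual code, of how the request/response exchange in the first point might look on the server side: a Java servlet reads the requested company symbol, looks up the latest price, and returns a small WML deck that a WAP micro-browser can render. The servlet name, the parameter name and the lookup helper are assumptions for illustration.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet answering a "pull" request from a WAP phone.
public class StockQuoteServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String symbol = req.getParameter("symbol");   // entered by the user on the phone
        double price = lookUpLatestPrice(symbol);     // assumed database helper
        resp.setContentType("text/vnd.wap.wml");      // WML MIME type for WAP 1.x browsers
        PrintWriter out = resp.getWriter();
        out.println("<?xml version=\"1.0\"?>");
        out.println("<!DOCTYPE wml PUBLIC \"-//WAPFORUM//DTD WML 1.1//EN\" "
                + "\"http://www.wapforum.org/DTD/wml_1.1.xml\">");
        out.println("<wml><card id=\"quote\" title=\"Stock Quote\">");
        // In a real deployment the symbol would be validated before being echoed back.
        out.println("<p>" + symbol + ": " + price + "</p>");
        out.println("</card></wml>");
    }

    private double lookUpLatestPrice(String symbol) {
        // Placeholder: the real system would query the project's stock database here.
        return 0.0;
    }
}

A push alert would use a different channel (for example a WAP Push message sent by the server), but the deck delivered to the browser could follow the same structure.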

Deliverables and Development Requirements

For the successful completion of the project, the following items will be provided in the final submission:

A hardcopy of the Project final report

A softcopy of the Project final report

A turnitin copy of the document

Program code on the CD

A document listing the software and development requirements used

The development requirements of the system are:

MySQL Server

Windows Vista

Nokia series

Designing the Application from the User's Point of View

Getting a stock quote was never easy before computers began to serve user demand. A stock market participant needs to know stock prices up to the present day, and this is now possible through the internet: with a single click you can see values from past records up to now. In the days before personal computers, a person had to find the stock value by walking to the local newsagent to buy a newspaper, or from television or radio, but those media were not as reliable as today's technology of the internet, computers and wireless. Technology has changed the way the stock market works compared with the traditional method. However, our aim is to provide a reliable way for a person to check stock market conditions as easily as possible. So we have to design an application that serves the user's needs in a simple way; to fulfil those needs, the application designer has to think generously from the user's point of view. Some of the questions that arise are:

  • Is this application or service able to serve the user's needs, or does it just focus on the business perspective?
  • Does the user get any real benefit from the application or web page that has been designed?
  • How reliable is the application: is it based on other sources or on direct information from the stock company?
  • The user interface is the most important part of an application; it has to be easy for the user to use.

To design the WAP application we have to gather relevant information and opinions from stockholders who would really benefit from this WAP application as users. I managed to contact two people who use a computer to keep track of stock market prices. Their main demand was that the interface must be simple and present useful information in as simple a form as possible, for example a simple graph. The application is therefore designed to help the user: the interface has to be simple, so that users do not struggle to work out how to use it, which would leave a poor impression of the application. However, everyone has different thoughts about what would make them comfortable, so I gathered the views of stock market users about how they want the application to appear and to help them track the current value of the stock market easily.

Ease of use:

A normal user will not read a manual, particularly for a simple service like sending a message, so if they cannot work out how to use the application they get frustrated and tend to abandon it altogether. That means that if the application is not fully intuitive, people will get stuck and we might lose them forever; the user will not keep trying to figure out how to get the application to work. Another ease-of-use factor is the difficulty of getting to a URL.

When designing software for a certain purpose, the software itself has to serve the purpose it was made for, and the design plays a major role in how usable the application becomes. However, if the user's goal and the application's goal are not the same, there may be a problem. For example, a user may want to know the previous stock prices and the average performance of a particular company's stock over the past fifteen days, in order to plan ahead and analyse the market. If my application only provides the stock value for the past seven days, then it does not provide a solution to the user's demand.

To offer the full potential of the application to the user, the options in the application should be developed and designed as simply as possible to deliver what the user wants. For instance, if a user holds stock in five major companies, he or she needs to see those companies of interest in first place, instead of searching through a list of thousands of companies. Here we use the user's context to reduce a complex problem to a simple solution, and in this way we can attract users and customers to our application or service.

The application must provide critical information

Biometric Technologies: Advantages and Disadvantages

Abstract

This project has two aims.

The first is to provide an objective analysis of available biometric technologies, to identify their strengths and weaknesses, and to investigate a broad range of application scenarios in which biometric techniques are better than traditional recognition and verification methods.

The other aim is to develop a product. Nowadays most online banking and financial organizations are trying to convert their existing online banking to open-source Java or some other open-source platform, so that it is more reliable and secure, and harder for a hacker to break into such an open-source management system. Most systems still use login-ID-and-password functionality, which is not secure at all, as anybody can steal a password using a hidden keystroke logger or similar software; another problem is that users need to remember many passwords and user IDs for different web services. A statistical observation found that more than 70% of people write down their username and password, which can be stolen, lost or misused by others. If organizations could integrate secure fingerprint or other biometric functionality, the system could be more secure, reliable, easier and hassle-free for the user.

To get rid of such problems I have tried to develop a model of a secure web service integrated with fingerprint recognition, where users no longer need to remember or enter a username or password. Although there is a lot of password-replacement fingerprint software available on the market, to my knowledge such software does not work for a completely platform-independent (Java-based) secure web service. I have used the platform-independent Java 2 Platform, Enterprise Edition (J2EE), NetBeans, the JBoss server, an SQL database and an open-source bio-SDK to develop this model.
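As an illustration of the intended login flow, the sketch below is an assumption-level example only: the matcher interface and threshold stand in for whatever the open-source bio-SDK actually provides. The service compares a freshly captured fingerprint template with the template enrolled for the account, instead of checking a password.

// Sketch of password-free verification; the matcher interface is a stand-in
// for the bio-SDK used in the project.
public class FingerprintLogin {

    interface FingerprintMatcher {
        // Returns a similarity score between two fingerprint templates.
        double match(byte[] capturedTemplate, byte[] enrolledTemplate);
    }

    private final FingerprintMatcher matcher;
    private final double threshold;   // tuned to balance false accepts and false rejects

    FingerprintLogin(FingerprintMatcher matcher, double threshold) {
        this.matcher = matcher;
        this.threshold = threshold;
    }

    // One-to-one verification: the user claims an account, and only that
    // account's enrolled template is compared with the captured sample.
    boolean verify(byte[] capturedTemplate, byte[] enrolledTemplate) {
        return matcher.match(capturedTemplate, enrolledTemplate) >= threshold;
    }
}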

Preface

Although this web service is integrated only with fingerprint functionality, due to limitations of hardware and other resources, this report also critically investigates the strengths and security holes of other biometric methods, so that such biometric functionality can be added in future.

Another constraint with regard to this report is time. To give the system more strength and security, many features could be added, such as the development of a better algorithm to fix the security holes of the fingerprint software. Coping with change is an inevitable part of software or web service development, but many changes have been deliberately avoided in this case as they would not have added any value to the principal purpose of this project.

Problem Areas for this Project

Biometrics is a young technology; therefore the relevant hardware is not readily available in the local market, and it is too expensive to buy personally.

Unfortunately there is no biometric hardware in the CMS hardware lab, nor any biometric software or equipment. It was requested that some hardware be bought for this thesis, but unfortunately the university did not agree to buy or arrange anything related to biometrics.

Many companies in the biometrics field were approached personally for help or for information about their products, but they declined for marketing reasons.

There were no biometrics-related books in the university library, and the library was unable to provide any.

So, without any technical or theoretical support, it was really hard to develop new ideas and to build a new product related to biometrics.

Some biometric hardware was bought personally for this thesis. With the extraordinary help, advice and encouragement of the supervisor, this work has been completed.

Section One: Background Literature of Biometrics

Chapter 2:

Background Literature of Biometrics

Nowadays biometrics is a well-known term in information technology. The word biometrics comes from the Greek language: 'bio' means life and 'metrics' means measurement, so biometrics relates to the measurement of a living thing. In information technology, however, it means an automated process in which a human is recognised or identified using his or her physiological or behavioural characteristics. The specific physiological characteristic is collected, quantified, measured and compared with the previously stored characteristic, and a decision is made. So it is a process of identification, not an innovation in itself.

2.1 A short history of biometrics:

In normal life a person is recognised or identified by face, body structure, height, colour, hair and so on, so in that sense the history of biometric identifiers or characteristics is as old as the history of mankind. In ancient East Asia, potters used their fingerprints on their products as identification of the individual. In ancient Egypt people used characteristics such as complexion, eye colour, hair and height to identify trusted traders. But for a long time biometrics was not considered a field of study.

In the late 1880s, biometrics gained interest as a field of study. The credit goes to Alphonse Bertillon, an anthropologist and police clerk, who tried to distinguish convicted criminals from other people. He first discovered and noted that some physical measurements of an adult human are invariant over time, and that combinations of these measurements differ from person to person, so they can be used to recognize one individual from another (Scottish Criminal Record Office, 2002a). His theory was known as Bertillonage or anthropometry. At the time his theory was appreciated and thought to be well established; the main measurements he suggested are given in picture 2.1. But in 1903 it was found that his theory failed for identical twins: when a pair of identical twins was found, his theory treated them as a single person. So new theories or new characteristics were sought for identification.

It is said that Sir Edward Henry was the first to take an interest in fingerprints for the purpose of identification. He was Inspector General of the Bengal police. In 1896 he ordered prisoners' fingerprints to be recorded as an identification measure, and he tried to introduce a classification system for fingerprints. In 1901 Sir Henry joined Scotland Yard as Assistant Commissioner, after which a fingerprint bureau was established. The failure of the anthropometric system at that time made the fingerprint system well known, and fingerprints began to be used for the purpose of identifying a person. The system is still used in much the same way today.

Automated systems for reading fingerprints were first introduced in the early 1970s. The first fingerprint measurement device was used in 1972 and was known as the Identimat; it was installed at Shearson Hamill, a Wall Street company, and was used for time-keeping and monitoring.

Day by day, interest in biometric systems has increased. The decreasing cost of computer hardware and improvements in algorithms have increased research in biometrics.

2.2 Biometric characteristics:

2.2.1 General requirements for a characteristic used as a biometric identifier:

In the section on the history of biometrics, it was discussed that several characteristics have been considered as identifiers of humans, but many of them were rejected. According to Amberg (2003), for a characteristic to be considered as an identifier for biometric purposes it should meet certain requirements: universality (every human should have the characteristic), uniqueness (the characteristic should differ from person to person), permanence (the characteristic should be permanent) and collectability (the characteristic should be collectable and measurable). Some additional requirements can be applied alongside these, such as performance (accuracy should be high and resource requirements low), acceptability (it should be accepted everywhere and be acceptable to future users), fraud resistance (it should have a high security level and be resistant to fraud) and cost effectiveness (the benefit to users should be many times higher than the cost of use).

2.2.2 Classification of the characteristics which can be used as biometric identifiers:

Biometric characteristics or identifiers can be categorized into two groups: physiological and behavioural.

Physiological type: These characteristics relate to the human body or anatomy. Fingerprint reading, DNA analysis and the face of an individual are frequently used biometric identifiers of this type, and the use of the retina and the iris is a prospective future direction. This type of characteristic can be divided into genotypic and phenotypic. A group of people can share the same genotypic characteristics; blood group and DNA analysis are the two most commonly used genotypic characteristics. In contrast, a phenotypic characteristic belongs to only a single individual, so it differs from person to person; fingerprint, retina and iris are characteristics of this type.

Behavioural characteristics: These characteristics relate to human behaviour. The signature is the most commonly used characteristic of this type; human voice analysis and keystroke dynamics are two others that are now also used. Such characteristics are an indirect measurement of the human body. They are learned or trained and can therefore differ from time to time, but once a human reaches a certain age the change in behaviour is negligible, so these characteristics can be used as identifiers. The frequently used biometric characteristics are shown in 2.2.

2.2.3 Contrast of the biometrics characteristics:

A contrast of the biometric characteristics is given in table 2.1.

Table 2.1: A contrast of the biometric characteristics (Jain et al., 1999)

From table 2.1 it can be said that the physiological characteristics perform better than the behavioural characteristics.

From table 2.1 it can also be seen that some biometric traits can be regarded as more universal, unique and permanent than others, such as the iris, DNA, body odour and fingerprint. Although the iris, DNA and body odour are promising, they need further research and experimentation; their cost is high, so they are not cost effective. So, at present, the fingerprint is one of the most accepted biometric traits.

2.3 Establish Identity

Nowadays society has changed significantly. In the past, everyone in a community knew everyone else, but globalization has changed the situation: people are now interconnected electronically and are mobile all around the world. So establishing identity is one of the most important tasks.

2.3.1 Resolving identity of an individual:

Two fundamental problems arise for this purpose: authentication and identification.

Authentication problem: This is also known as verification. The problem arises when confirming or denying someone's claimed identity. When a person claims an identity, the process requires a comparison between the submitted biometric sample and the stored sample for the claimed identity; this is called a 'one-to-one' comparison. An ATM (automatic teller machine) can be considered as an example. For an ATM, the authentication problem is solved in a two-stage process: the first stage is to possess a valid ATM card, and the second is to know the PIN (Personal Identification Number). If anyone knows another person's PIN and possesses the corresponding ATM card, then that person can claim the identity of the original card owner. This kind of fraud has been increasing day by day; according to Jain et al. (1999), ATM-associated fraud in the USA was valued at 3 billion US dollars in 1996. Biometric systems, on the other hand, offer a way to overcome this authentication problem.

Recognition problem: This is also known as the identification problem. It occurs when a person has to be identified from a set of templates in a database: the person's data is compared against the data in the database, a 'one-to-many' comparison. An example helps to clarify the concept: to identify a criminal, law enforcement officials sometimes lift fingerprints or other data from the crime scene and then compare them with the stored data of known criminals. In this way they may be able to identify the criminal.
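The difference between the two comparisons can be sketched as follows (hypothetical types and names; a score-based matcher is assumed): verification compares one sample against one stored template, while identification searches the whole template database for the best match above a threshold.

import java.util.Map;

// Hedged sketch of one-to-one verification versus one-to-many identification.
public class BiometricComparison {

    interface Matcher {
        double score(byte[] sample, byte[] template);   // higher means more similar
    }

    // One-to-one: does this sample belong to the claimed identity?
    static boolean verify(Matcher m, byte[] sample, byte[] claimedTemplate, double threshold) {
        return m.score(sample, claimedTemplate) >= threshold;
    }

    // One-to-many: who, if anyone, in the database does this sample belong to?
    static String identify(Matcher m, byte[] sample, Map<String, byte[]> database, double threshold) {
        String bestId = null;
        double bestScore = threshold;   // ignore candidates below the threshold
        for (Map.Entry<String, byte[]> entry : database.entrySet()) {
            double s = m.score(sample, entry.getValue());
            if (s >= bestScore) {
                bestScore = s;
                bestId = entry.getKey();
            }
        }
        return bestId;   // null means "not found in the database"
    }
}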

According to the UK Biometrics Working Group (2002), not all biometric matters are covered by the terms verification and identification. Three more pairs of terms have therefore been introduced: (1) positive claim of identity and negative claim of identity, (2) explicit claim of identity and implicit claim of identity, and (3) genuine claim of identity and impostor claim of identity.

A positive claim of identity is also known as positive identification. In this process the claimant's identity must already have been enrolled and be known to the system. For example, an online email customer enters his or her login name and password, and the system compares this combination against the stored customer data; if the combination matches, the user is verified. The process needs only the login name and password, nothing else, so the email provider does not actually know who is using the account.

A negative claim of identity is known as negative identification. In this process the claimant asserts that his or her identity has not been stored before. The person can therefore enrol only once; after enrolment the identity is stored in the system and the person cannot enrol again. American social security is an example of this kind of system: according to Jain et al. (1999), around a billion US dollars is taken from social security welfare in the USA annually through the use of multiple identities.

In the case of an explicit claim of identity, a person unambiguously declares his or her identity to the system; the claim may be positive or negative. The submitted identity is compared with the stored data in a one-to-one comparison (described in the authentication section above). Using an ATM card is an example of a positive explicit claim of identity. For a negative explicit claim, consider an airport where a face recognition system has been installed: if a passenger resembles a known terrorist, the system raises an alarm, and the passenger then needs to make an explicit negative claim of identity, so other identifiers such as fingerprints or the iris are compared against those of the known terrorist on a one-to-one basis.

An implicit claim of identity can also be positive or negative. In this process a person's identity is compared on a 'one to many' basis against all stored identities.

When someone honestly claims to be himself or herself, it is called a genuine claim of identity (UK Biometric Working Group, 2002); in this case the submitted identity truly matches the stored identity.

An impostor claim of identity is a claim in which someone deceitfully or falsely claims to be someone else (UK Biometric Working Group, 2002); in this case the submitted identity does not match the stored identity.

2.3.2 Verification techniques:

According to Mitnick (2002), verification techniques can be divided into three types: (1) knowledge-based verification, (2) token-based verification and (3) biometric-based verification.

Knowledge based verification system:

In this process some secret information is used (a password, PIN, memorable words, etc.), and only the person with the original identity is supposed to know that secret. Because people carry their memorable secret information with them wherever they travel, this technique is well suited to use from a distance or from a remote place.

This type of authentication nevertheless has some serious drawbacks. Using Trojan horses and spyware, a hacker can learn another person's secret information, since such programs are able to email every keystroke, so knowledge-based verification is not a fully secure system. Most of the time people use familiar names as their secret information, which makes it possible for others to guess; people also often keep the same secret, or even the initial default secret, unchanged for a long time, which makes it easier to crack. Many attack methods have been developed, such as dictionary attacks, hybrid attacks and brute-force attacks.

In comparison with the other technologies, however, it is cheap and can still provide a reasonable level of security.

Token based verification system:

In this system the person claiming an identity must possess something that is used together with the secret information; an ATM card is an example of token-based verification. It can be considered more secure than knowledge-based verification, because if the token is lost or stolen its user can report it.

Biometric verification system:

In this system the user's distinguishing biometric characteristics, such as the fingerprint, face or signature, are used to represent the user. Because these characteristics travel with the user, they are more secure than the other two approaches, and it is very difficult for an unauthorized person to use them. This system is, however, relatively costly.

In reality no system is fully secure, and all three approaches have serious drawbacks: secret information can be hacked, an unauthorized person can steal and use a token, and it is even possible to copy biometric information and replay it later (Woodward et al. 2003). To counter these drawbacks, multiple verification methods can be combined. The ATM card is an example of combining knowledge-based and token-based verification, and if an iris scanner becomes available in the future, using it together with the ATM card would make the system even more secure.

2.4 The components of a general biometric system and their functions:

A general biometric system can be divided into five subsystems: (1) data acquisition, (2) data transmission, (3) signal processing, (4) data storage and (5) decision making. A general biometric system is shown in Figure 2.2.

Data acquisition system: Every biometric system is assumed to rely on two properties, uniqueness and repeatability. Uniqueness means that every person's biometric trait is different and will not be the same for two people; repeatability means that the trait stays the same over time. In the acquisition subsystem, sensors measure the user's biometric characteristics. These measurements are called samples and have definite attributes; the manner of presentation and the quality of the reader can both affect sample quality.

Data transmission system: In most cases data collection and processing do not happen at the same location, so a subsystem is needed to transfer the data. In the transmission subsystem, compression and expansion are performed depending on the size of the sample, using standard protocols: the JPEG format is used when a facial image is sent, the WSQ format for fingerprint data and the CELP format for voice.

Signal processing system: The signal processing system has three parts: (1) feature extraction, (2) quality control and (3) pattern matching. In the extraction stage the relevant biometric data is separated from the background information of the sample, a process called segmentation; in a face detection system, for example, the facial image is separated from the wall or other background. After extraction the quality is checked, and if the data quality is very poor another sample is requested. Pattern matching then follows, and after that the decision-making stage. Depending on the function of the overall biometric system, feature data from the pattern matching stage may also be stored in the storage subsystem.
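The flow from segmentation through quality control to pattern matching can be pictured as a small pipeline. The sketch below is only an illustration of that flow: the "sample", the segmentation rule, the quality check and the matching score are toy stand-ins, not real fingerprint or face processing.

```python
# A raw "sample" is modelled here as a list of pixel intensities (illustrative only).
RAW_SAMPLE = [0.02, 0.05, 0.91, 0.88, 0.79, 0.03, 0.85, 0.90, 0.01]

def segment(sample, background_level=0.1):
    """Feature extraction: strip background values and keep the foreground signal."""
    return [v for v in sample if v > background_level]

def quality(features, min_points=4):
    """Quality control: here simply 'enough foreground points were found'."""
    return len(features) >= min_points

def match(features, template):
    """Pattern matching: a toy score based on mean absolute difference."""
    n = min(len(features), len(template))
    diff = sum(abs(features[i] - template[i]) for i in range(n)) / n
    return 1.0 - diff

STORED_TEMPLATE = [0.90, 0.87, 0.80, 0.86, 0.91]

features = segment(RAW_SAMPLE)
if not quality(features):
    print("Poor quality sample - please present the biometric again")
else:
    score = match(features, STORED_TEMPLATE)
    print(f"Matching score {score:.3f} passed on to the decision-making subsystem")
```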

Data storage system: Some of the feature data from the pattern matching stage is stored in the data storage subsystem as a template, mainly so that incoming features can be compared against it. If the overall system performs one-to-one matching the data storage can be decentralized, but if it performs one-to-many matching a central database is needed.

Decision making system: The quality score and the matching score are sent from the signal processing subsystem to the decision-making subsystem, which decides whether the sample is accepted or denied. The decision policy depends on the security expectations of the system: if the threshold is set so that the number of false non-match incidents increases, the number of false matches will decrease, and vice versa.

2.5 Performance of a biometric system:

The main focus of a biometric system is to ensure security, so that only authorised users are accepted and unauthorised users are denied; processing speed is usually given lower priority. The main factors used to describe a biometric system are terms such as Failure to Enrol Rate (FTE), Failure to Acquire Rate (FTA), False Acceptance Rate (FAR), False Rejection Rate (FRR), False Match Rate (FMR) and False Non-Match Rate (FNMR).

False Match Rate (FMR): This represents the most serious type of fault in a biometric system. It occurs when an unauthorised person's biometric information matches an authorised user's identity, i.e. when the signal processing system produces a high matching score against a non-corresponding template.

False Non-Match Rate (FNMR): In this case an authorised person's biometric features fail to produce a matching score high enough to qualify; it is the opposite of FMR. One of the main causes of false non-matches is poor quality of the captured biometric features.

Comparison of FMR and FNMR for different biometric systems: The main aim of a biometric security system is to reduce the False Match Rate (FMR); on the other hand, if the False Non-Match Rate (FNMR) can also be reduced, the system becomes faster and more reliable. There is, however, always a trade-off between FMR and FNMR. Figure 2.4 shows this relationship for different biometric systems: a high FMR is not acceptable, but at low FMR the FNMR is considerably higher in every system.

Failure to Enrol Rate (FTE): Sometimes a biometric system cannot create a valid template for some users. Although biometric characteristics are universal, there are exceptions; for example, the fingerprints of a very small number of people, such as construction workers or carpenters who use their hands heavily, cannot be enrolled. The Failure to Enrol Rate is the ratio of the number of people whose biometric features cannot be enrolled in the system to the total number of people who use it. Figure 2.5 shows a practical test in which the FTE was measured for different systems (Mansfield et al. 2001).

Failure to Acquire Rate (FTA): Sometimes the system cannot acquire data of the desired quality because of the readers or sensors, instrument problems, environmental conditions, noise in the data, background data and so on. Simply put, the Failure to Acquire Rate represents those biometric samples that do not obtain a quality score high enough to reach the decision-making stage.

False Acceptance Rate (FAR) and False Rejection Rate (FRR): These two terms are related to the False Match Rate and the False Non-Match Rate, but FAR and FRR refer to the whole biometric system, whereas FMR and FNMR refer to a single matching process. The Failure to Acquire Rate of the system must therefore be included. According to Mansfield et al. (2001), the relationships can be expressed as follows:

FAR(τ) = (1 − FTA) × FMR(τ)

FRR(τ) = (1 − FTA) × FNMR(τ) + FTA

Here, FAR- False Acceptance Rate

τ- Decision threshold

FTA- Failure to Acquire Rate

FMR- False Match Rate

FRR- False Rejection Rate

FNMR- False Non-Match Rate

Each point of the receiver operating characteristic (ROC) curve corresponds to a particular decision threshold and therefore to a particular False Rejection Rate and False Acceptance Rate. For forensic purposes the False Rejection Rate should be as low as possible, whereas for high-security access the False Acceptance Rate should be as low as possible.
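The two relations above translate directly into code. In the sketch below the genuine and impostor matching scores and the FTA value are made-up illustrations; FMR(τ) and FNMR(τ) are estimated from the scores, the Mansfield et al. (2001) formulas then give FAR(τ) and FRR(τ), and sweeping the threshold τ produces exactly the points that an ROC curve plots.

```python
# Illustrative matching scores (higher = more similar); a real system would measure these.
genuine_scores  = [0.92, 0.88, 0.95, 0.80, 0.75, 0.90, 0.85, 0.97]   # same-person comparisons
impostor_scores = [0.20, 0.35, 0.10, 0.45, 0.30, 0.55, 0.25, 0.40]   # different-person comparisons
FTA = 0.02                                                            # assumed failure-to-acquire rate

def fmr(tau):
    """False Match Rate: fraction of impostor comparisons scoring at or above the threshold."""
    return sum(s >= tau for s in impostor_scores) / len(impostor_scores)

def fnmr(tau):
    """False Non-Match Rate: fraction of genuine comparisons scoring below the threshold."""
    return sum(s < tau for s in genuine_scores) / len(genuine_scores)

def far(tau):
    return (1 - FTA) * fmr(tau)            # FAR(tau) = (1 - FTA) * FMR(tau)

def frr(tau):
    return (1 - FTA) * fnmr(tau) + FTA     # FRR(tau) = (1 - FTA) * FNMR(tau) + FTA

# Sweeping the threshold traces out the ROC trade-off between FAR and FRR.
for tau in [0.3, 0.5, 0.7, 0.9]:
    print(f"tau={tau:.1f}  FAR={far(tau):.3f}  FRR={frr(tau):.3f}")
```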

Section Two: Biometric Technology

2.1 Physiological Biometrics

This section discusses the fingerprint pattern, hand geometry, the iris pattern, and facial, retinal and vascular characteristics as possible biometric identifiers.

2.1.1 Fingerprint Pattern

The fingerprint is the oldest, most popular and most widely and publicly accepted mature biometric identifier. It meets the necessary criteria for a biometric identifier, namely universality, distinctiveness, persistence and collectability.

Fingerprints are impressions of the friction ridges on the surface of the hand. In most applications, and in this thesis as well, the primary concern is the ridges located above the end joints of the fingers. In certain forensic applications, however, the area of interest is broader, including the fingers, the palm and the writer's palm (Woodward et al. 2003).

Since the early 1970s the Federal Bureau of Investigation (FBI) has carried out extensive research and development on fingerprint identification. Its main aim was to build an automated fingerprint identification system (AFIS) that could be used for forensic purposes (Ruggles 1996).

2.1.1.1 Feature and Technology

There are two main elements in fingerprint matching: minutiae matching and pattern matching.

The figure below shows the basic minutia types analysed by the primary technique.

At the macroscopic level, universal pattern matching focuses on the overall flow of the ridges, which can be categorized into three groups: loops, whorls and arches. Every individual fingerprint fits into one of these three categories, as shown in the figure below.

Nowadays most applications depend on minutiae matching. When a fingerprint scanning device captures a typical fingerprint image, around 30 to 60 minutia patterns can be identified. The Federal Bureau of Investigation (FBI) has confirmed that it is not possible for two individuals, even monozygotic twins, to have more than eight minutiae in common. For matching, minutiae are examined by type, shape, coordinate location (x, y) and direction. The figures below illustrate the automated minutiae matching process based on these attributes.

The first figure describes a case in which the input image (left) is matched against a stored template (right): 39 minutiae were detected in the input, while the template contained 42 minutiae, and the matching algorithm identified 36 matching data points.

(Source: Prabhakar 2001)

In the second figure, 64 minutiae were detected in the input image (left) while the template (right) contained 65 minutiae; the algorithm identified 25 completely non-matching data points.
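A heavily simplified sketch of minutiae matching is shown below. Each minutia is reduced to a type, an (x, y) location and a direction, and two minutiae are counted as matching when they agree on type and fall within distance and angle tolerances; the minutiae values and tolerances are illustrative assumptions, not those of any real AFIS.

```python
import math

# Each minutia: (type, x, y, direction_in_degrees). Values are illustrative only.
input_minutiae = [("ridge_ending", 10, 12, 90), ("bifurcation", 40, 35, 45), ("ridge_ending", 70, 20, 180)]
template_minutiae = [("ridge_ending", 11, 13, 92), ("bifurcation", 41, 33, 47), ("bifurcation", 90, 80, 10)]

def minutiae_match(m1, m2, dist_tol=5.0, angle_tol=15.0):
    """Two minutiae 'match' if they have the same type and lie within distance and angle tolerances."""
    t1, x1, y1, d1 = m1
    t2, x2, y2, d2 = m2
    if t1 != t2:
        return False
    close = math.hypot(x1 - x2, y1 - y2) <= dist_tol
    aligned = abs(d1 - d2) <= angle_tol
    return close and aligned

def count_matches(inp, tmpl):
    """Greedy pairing: each template minutia may be matched at most once."""
    used = set()
    matched = 0
    for m in inp:
        for j, t in enumerate(tmpl):
            if j not in used and minutiae_match(m, t):
                used.add(j)
                matched += 1
                break
    return matched

print(count_matches(input_minutiae, template_minutiae), "matching minutiae found")
```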

A scanning or capture device is needed to obtain such images. Since the 1970s a great deal of research has been done to develop and improve such devices; as a result, optical, capacitive, ultrasonic, thermoelectric, radio frequency and touchless scanners have been invented, and nowadays most of them are inexpensive and widely available.

Optical device / scanner: The first method of capturing a fingerprint image was optical scanning. The operating principle of such a scanner is frustrated total internal reflection: the finger is placed on a glass platen and illuminated by laser light, the surface of the finger reflects a certain amount of light depending on the depth of the ridges and valleys, and the reflectance is captured by a CCD (charge-coupled device) camera, an array of light-sensitive diodes called photosites (O'Gorman 1999).

The big advantage of such devices is that they are the cheapest of all automated biometric devices and are available on the local market. Their disadvantage is that they can be fooled relatively easily by impostors: the latent fingerprint left on the scanning surface is a serious drawback, since anybody can collect the latent image from it and use it for spoofing.

The optical scanner from Digital Persona was used to integrate fingerprint scanning support into the product of this project, using the popular U.are.U fingerprint recognition system shown in the figure below. In October 2003 the US Department of Defense chose Digital Persona scanners to secure network access at desktops in its offices in Washington, D.C. (Digital Persona 2009).

Capacitive scanner / devices: Since their first appearance in 1990, such devices have become very popular. A capacitive scanner is a solid-state device incorporating a sensing surface composed of an array of about 100,000 conductive plates covered by a dielectric surface. When a user touches the sensor, the human skin acts as the other plate of each capacitor. The capacitance measured decreases as the distance between the plates grows; the capacitance measured at the ridges of a fingerprint is therefore higher than that measured at the valleys. These measurements are then analysed in a way similar to a sonar scan of the ocean bottom, resulting in a video signal depicting the surface of the fingerprint (O'Gorman 1999).

The advantage of capacitive scanners is their very high accuracy. Another big advantage is that they are much harder to fool than optical scanners, since the process requires living tissue. Because users must touch the silicon chip itself, solid-state scanners are susceptible to electrostatic discharge (ESD); recent chip designs, however, were specifically developed to withstand high levels of ESD and frequent handling, and modern capacitive device manufacturers such as Veridicom claim that their chips will survive around one million touches (Ryan 2002).

Thermoelectric device: This is a silicon-based device that measures the temperature difference between the ridges touching the surface of the sensor and the valleys distant from it (O'Gorman 1999).

Although thermal scanning is very promising, it is still an uncommon method. A company named Atmel is a proponent of this technique; it uses a finger-sweep method to capture the fingerprint in a tiny si

Identifying Clusters in High Dimensional Data

“Ask those who remember, are mindful, if you do not know.” (Holy Qur’an, 6:43)

Removal Of Redundant Dimensions To Find Clusters In N-Dimensional Data Using Subspace Clustering

Abstract

Data mining has emerged as a powerful tool for extracting knowledge from huge databases. Researchers have introduced several machine learning algorithms that explore databases to discover information, hidden patterns and rules which were not known at the time the data was recorded. Thanks to remarkable developments in storage capacity, processing power and algorithmic tools, practitioners are developing new and improved algorithms and techniques in several areas of data mining to discover rules and relationships among attributes in simple as well as complex, higher-dimensional databases. Data mining is applied in a large variety of areas, ranging from banking to marketing, engineering to bioinformatics, and investment to risk analysis and fraud detection. Practitioners are also analysing and implementing artificial neural network techniques for classification and regression problems because of their accuracy and efficiency. The aim of this short research project is to develop a way of identifying clusters in high-dimensional data, as well as the redundant dimensions that create noise when identifying those clusters. The technique used in this project exploits the projections of the data points along each dimension, measuring the intensity of the projection along each dimension in order to find both the clusters and the redundant dimensions in high-dimensional data.

1 Introduction

In numerous scientific settings, engineering processes and business applications, ranging from experimental sensor data and process control data to telecommunication traffic observation and financial transaction monitoring, huge amounts of high-dimensional measurement data are produced and stored. Whereas sensor equipment and large storage devices are getting cheaper day by day, data analysis tools and techniques lag behind. Clustering methods are common solutions to unsupervised learning problems where neither expert knowledge nor helpful annotation of the data is available. In general, clustering groups the data objects so that similar objects end up in the same cluster whereas objects from different clusters are highly dissimilar. However, it is often observed that clustering discloses almost no structure even when it is known that there must be groups of similar objects. In many cases the reason is that the cluster structure is induced by some subset of the space's dimensions only, and the many additional dimensions contribute nothing but noise that hinders the discovery of the clusters within the data. As a solution to this problem, clustering algorithms are applied to the relevant subspaces only. The new question is then how to determine the relevant subspaces among the dimensions of the full space. Being faced with the power set of the set of dimensions, a brute-force trial of all subsets is infeasible because their number grows exponentially with the original dimensionality.

In high-dimensional data, as the number of dimensions increases, visualization and representation of the data become more difficult, and the added dimensions can create a bottleneck: more dimensions mean more visualization and representation problems, and the data appears to disperse towards the corners of the space. Subspace clustering addresses both problems in parallel: it identifies the subspaces that are relevant, marking the rest as redundant in high-dimensional data, and it finds the cluster structures that become apparent within those subspaces. Subspace clustering is an extension of traditional clustering that automatically finds the clusters present in subspaces of the high-dimensional data space, allowing the data points to be clustered better than in the original space and working even when the curse of dimensionality occurs. Most clustering algorithms have been designed to discover clusters in the full-dimensional space, so they are not effective in identifying clusters that exist only within a subspace of the original data space; moreover, many of them produce clustering results that depend on the order in which the input records were processed [2].

Subspace clustering can identify the different clusters within subspaces that exist in huge amounts of sales data, and through it we can find which attributes are related. This can be useful in promoting sales and in planning the inventory levels of different products. It can also be used to find subspace clusters in spatial databases, and useful decisions can be taken based on the clusters identified [2]. The technique used here for identifying the redundant dimensions that create noise when identifying the clusters consists of plotting the data points in all dimensions. In the second step the projections of all data points along each dimension are plotted. In the third step the unions of the projections along each dimension are plotted for all possible combinations of dimensions, and finally the union of all projections along all dimensions is analysed; this shows the contribution of each dimension to identifying the clusters, represented by the weight of its projection. If a given dimension contributes very little to the weight of projection, that dimension can be considered redundant, meaning it is not important for identifying the clusters in the given data. The details of this strategy are covered in later chapters; a minimal sketch of the idea is given below.
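Since the strategy is only outlined at this stage, the sketch below should be read as one possible interpretation: it generates toy data with two informative dimensions and one noisy one, uses a simple histogram-concentration score as a stand-in for the "weight of projection" of each dimension, and flags dimensions with a very low weight as redundant. The data, the scoring function and the cut-off are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two clusters that separate along dimensions 0 and 1, plus a third,
# uniformly-noisy dimension that should turn out to be redundant.
cluster_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2))
cluster_b = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(100, 2))
noise_dim = rng.uniform(-10.0, 10.0, size=(200, 1))
data = np.hstack([np.vstack([cluster_a, cluster_b]), noise_dim])

def projection_weight(values, bins=20):
    """One possible 'weight of projection': how concentrated the 1-D projection is.

    A dimension whose projection piles up in a few histogram bins contributes to
    cluster structure; a near-uniform projection contributes mostly noise.
    """
    counts, _ = np.histogram(values, bins=bins)
    return counts.max() / counts.sum()

weights = [projection_weight(data[:, d]) for d in range(data.shape[1])]
threshold = 0.5 * max(weights)              # assumed cut-off for 'contributes very little'
for d, w in enumerate(weights):
    label = "redundant" if w < threshold else "informative"
    print(f"dimension {d}: weight {w:.2f} -> {label}")
```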

2 Data Mining

2.1 – What is Data Mining?

Data mining is the process of analysing data from different perspectives and summarizing it into useful information. The information can be used for many purposes, such as increasing revenue or cutting costs. The data mining process also finds hidden knowledge and relationships within the data that were not known at the time of data recording. Describing the data is the first step in data mining, followed by summarizing its attributes (such as standard deviation and mean). The data is then reviewed using visual tools such as charts and graphs, and meaningful relations are determined. In the data mining process, the steps of collecting, exploring and selecting the right data are critically important. Users can analyse the data from different dimensions, categorize it and summarize it; data mining finds the correlations or patterns among the fields in large databases.

Data mining has great potential to help companies focus on the most important information in their data warehouses. It can predict future trends and behaviours and allows businesses to make more proactive, knowledge-driven decisions. It can answer business questions that were traditionally too time-consuming to resolve, and it scours databases for hidden patterns, finding predictive information that experts may miss because it lies beyond their expectations. Data mining is normally used to transform data into information or knowledge and is commonly applied in a wide range of profit-oriented practices such as marketing, fraud detection and scientific discovery. Many companies already collect and refine their data, and data mining techniques can be implemented on existing platforms to enhance the value of these information resources. Data mining tools can analyse massive databases to deliver answers to such questions.

Several other terms carry a similar meaning to data mining, such as "knowledge mining", "knowledge extraction" or "pattern analysis". Data mining can also be treated as Knowledge Discovery from Data (KDD), although some people regard data mining simply as an essential step in knowledge discovery from large data. The process of knowledge discovery from data consists of the following steps:

* Data cleaning (removing the noise and inconsistent data)

* Data Integration (combining multiple data sources)

* Data selection (retrieving the data relevant to the analysis task from the database)

* Data Transformation (transforming the data into appropriate forms for mining by performing summary or aggregation operations)

* Data mining (applying intelligent methods to extract data patterns)

* Pattern evaluation (identifying the truly interesting patterns representing knowledge based on some measures)

* Knowledge representation (using knowledge representation techniques to present the mined knowledge to the user)

2.2 – Data

Data can be any facts, text, images or numbers that can be processed by a computer. Today's organizations are accumulating large and growing amounts of data in different formats and in different databases. This can include operational or transactional data such as costs, sales, inventory, payroll and accounting; non-operational data such as industry sales and forecast data; and metadata, i.e. data about the data itself, such as the logical database design and data dictionary definitions.

2.3 – Information

Information can be retrieved from data via the patterns, associations or relationships that may exist within it. For example, retail point-of-sale transaction data can be analysed to yield information about which products are being sold and when.

2.4 – Knowledge

Knowledge can be derived from information via historical patterns and future trends. For example, analysing retail supermarket sales data from the point of view of promotional efforts can provide knowledge of customer buying behaviour; a manufacturer can thereby easily determine which items should be targeted by promotional efforts.

2.5 – Data warehouse

Advances in data capture, processing power, data transmission and storage technologies are enabling industry to integrate its various databases into data warehouses. The process of centralizing and retrieving data is called data warehousing. Data warehousing is a new term, but the concept is somewhat older: a data warehouse is a store of massive amounts of data in electronic form, and data warehousing represents an ideal way of maintaining a central repository of all organizational data. The purpose of a data warehouse is to maximize user access and analysis. Data from different sources are extracted, transformed and then loaded into the data warehouse, and users or clients can generate different types of reports and perform business analysis by accessing it.

Data mining is used today primarily by companies with a strong consumer focus, such as retail, financial, communication and marketing organizations. It allows these organizations to evaluate associations between internal factors (such as product positioning, price or staff skills) and external factors (such as economic indicators, customer demographics and competition). It also allows them to measure the impact on sales, corporate profits and customer satisfaction, and to drill from summarized information down to detailed transactional data. Given databases of sufficient size and quality, data mining technology can generate new business opportunities.

Data mining automates the process of searching for predictive information in huge databases. Questions that traditionally required extensive hands-on analysis can now be answered directly from the data very quickly. Targeted marketing is an example of a predictive problem: data mining uses data on previous promotional mailings to recognize the targets most likely to maximize the return on investment of future mailings. Data mining tools traverse huge databases and discover previously unseen patterns in a single step; an example is the analysis of retail sales data to recognize apparently unrelated products that are often purchased together. Other pattern discovery problems include identifying fraudulent credit card transactions and identifying irregular data that could indicate data entry errors. When data mining tools run on high-performance parallel processing systems, they can analyse huge databases in very little time; faster processing means users can experiment with more detail to recognize complex patterns in the data. High speed and quick response make it practical for users to examine huge amounts of data, and larger databases, in turn, give better predictions.

2.6 – Descriptive and Predictive Data Mining

Descriptive data mining aims to find patterns in the data that provide information about what the data contains. It describes patterns in existing data and is generally used to create meaningful subgroups such as demographic clusters; typical descriptive outputs include summaries and visualization, clustering and link analysis. Predictive data mining is used to forecast explicit values based on patterns determined from known results. For example, from a database of clients who have already responded to a particular offer, a model can be built that predicts which prospects are most likely to respond to the same offer. Predictive data mining is usually applied to projects whose goal is to identify a statistical or neural network model, or a set of models, that can be used to predict some response of interest; for instance, a credit card company may use predictive data mining to derive a trained model, or set of models, that can quickly identify transactions with a high probability of being fraudulent. Other data mining projects may be more exploratory in nature (e.g. determining clusters or divisions of customers), in which case drill-down, descriptive and tentative methods need to be applied. Predictive data mining is goal oriented and can be decomposed into the following major tasks.

* Data Preparation

* Data Reduction

* Data Modeling and Prediction

* Case and Solution Analysis

2.7 – Text Mining

Text mining, sometimes also called text data mining, is more or less equivalent to text analytics. It is the process of extracting or deriving high-quality information from text. High-quality information is typically derived by detecting patterns and trends through means such as statistical pattern learning. Text mining usually involves structuring the input text (usually parsing, along with the addition of some derived linguistic features, the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluating and interpreting the output. "High quality" in text mining usually refers to some combination of relevance, novelty and interestingness. Typical text mining tasks include text categorization, concept/entity extraction, text clustering, sentiment analysis, production of rough taxonomies, entity relation modelling and document summarization.

Text mining is also known as the discovery by computer of new, previously unknown information through the automatic extraction of information from different written resources. Linking the extracted information together is the key element in creating new facts or new hypotheses to be examined further by more conventional means of experimentation. In text mining the goal is to discover unknown information, something that no one yet knows and so could not yet have written down. The difference between ordinary data mining and text mining is that in text mining the patterns are retrieved from natural language text rather than from structured databases of facts. Databases are designed for programs to process automatically; text is written for people to read. Many researchers think that a full simulation of how the brain works will be needed before programs that read the way people do can be written.
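A minimal sketch of the "structuring the input text" step is given below: a few free-text documents are tokenized and turned into term counts, from which a simple pattern (the most frequent terms in the collection) can be read off. The documents, the stopword list and the tokenizer are illustrative assumptions.

```python
import re
from collections import Counter

# A few illustrative free-text "documents".
documents = [
    "Data mining discovers hidden patterns in large databases.",
    "Text mining derives patterns from natural language text rather than structured databases.",
    "Web mining extracts knowledge from web documents and usage logs.",
]

STOPWORDS = {"in", "from", "and", "the", "than", "rather", "a", "of"}

def tokenize(text):
    """Very small tokenizer: lower-case words, drop punctuation and stopwords."""
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w in words if w not in STOPWORDS]

# Structure the unstructured text as per-document term counts ...
per_document = [Counter(tokenize(doc)) for doc in documents]
# ... and derive a simple pattern: the most frequent terms across the collection.
overall = sum(per_document, Counter())
print(overall.most_common(5))
```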

2.8 – Web Mining

Web mining is the technique used to extract and discover information from web documents and services automatically. The interest of various research communities, the tremendous growth of information resources on the Web and the recent interest in e-commerce have made this a very large area of research. Web mining is usually decomposed into the following subtasks:

* Resource finding: fetching intended web documents.

* Information selection and pre-processing: automatically selecting and pre-processing specific information from the fetched web resources.

* Generalization: automatically discovering general patterns within individual websites and across multiple websites.

* Analysis: validation and explanation of mined patterns.

Web mining can be categorized into three areas of interest based on which part of the Web is to be mined: web content mining, web structure mining and web usage mining. Web content mining describes the discovery of useful information from web contents, data and documents [10]. In the past the Internet consisted only of various services and data resources, but today most data is available over the Internet, and even digital libraries are accessible on the Web. Web contents comprise several types of data, including text, images, audio, video, metadata and hyperlinks. Most companies are trying to transform their business and services into electronic form and put them on the Web; as a result, databases that previously resided on legacy systems are now accessible over the Web, so employees, business partners and even end clients can access the company's databases online. Users access applications over the Web via web interfaces, and because the Internet can connect to any other computer anywhere in the world, most companies are trying to move their business onto the Web [11]. Some web contents are hidden and therefore cannot be indexed, for example data generated dynamically as the result of queries residing in databases, or private data. Unstructured data such as free text, semi-structured data such as HTML and fully structured data such as data in tables or database-generated web pages can all be considered in this category, although unstructured text is the most common form of web content. Work on web content mining is mostly done from two points of view, the IR (information retrieval) view and the DB (database) view: "From the IR view, web content mining assists and improves information finding or filtering for the user. From the DB view, web content mining models the data on the web and integrates it so that more sophisticated queries than keyword search can be performed" [10].

Web structure mining is concerned with the structure of the hyperlinks within the Web itself, which can be called the inter-document structure [10]; it is closely related to web usage mining [14]. Pattern detection and graph mining are central to web structure mining, and link analysis techniques can be used to determine patterns in the graph. Search engines such as Google make use of web structure mining: the links are mined, and one can determine which web pages point to a particular page, so that when a string is searched, a page with the greatest number of links pointing to it may appear first in the list. Web pages are thus listed by a rank that is itself calculated from the ranks of the pages pointing to them [14]. Based on the structural data used, web structure mining can be divided into two categories. The first kind extracts patterns from the hyperlinks in the Web, a hyperlink being a structural component that connects a web page to a different page or location. The second kind deals with the document structure, using the tree-like structure of HTML or XML tags to analyse and describe the pages.
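The idea that a page's rank is computed from the ranks of the pages pointing to it is the basis of PageRank-style link analysis. The sketch below iterates that computation over a small made-up link graph; it is an illustration of the principle, not the ranking algorithm any particular search engine actually uses.

```python
# Hypothetical link graph: page -> pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively recompute each page's rank from the ranks of the pages linking to it."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / n + damping * incoming
        rank = new_rank
    return rank

# Page C receives the most links, so it ends up with the highest rank.
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```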

With the continuous growth of e-commerce, web services and web applications, the volume of clickstream and user data collected by web-based organizations in their daily operations has increased. Organizations can analyse such data to determine the lifetime value of clients, design cross-marketing strategies, and so on [13]. Web usage mining deals with the data generated by users' clickstreams. "The web usage data includes web server access logs, proxy server logs, browser logs, user profiles, registration data, user sessions, transactions, cookies, user queries, bookmark data, mouse clicks and scrolls and any other data resulting from interaction" [10]. Web usage mining is therefore one of the most important tasks of web mining [12]. Weblog databases can provide rich information about web dynamics: web log records are mined to discover user access patterns, through which potential customers can be identified, the quality of Internet services enhanced and web server performance improved. Many techniques can be developed for web usage mining, but the success of such applications depends on how much valid and reliable knowledge can be discovered from the log data. Most often, web logs are cleaned, condensed and transformed before any useful and significant information is extracted from them; web mining can then be performed on the log records to find association patterns, sequential patterns and trends in web access. The overall web usage mining process can be divided into three interdependent stages: data collection and pre-processing, pattern discovery, and pattern analysis [13]. In the data collection and pre-processing stage, the raw data is collected, cleaned and transformed into a set of user transactions representing the activities of each user during visits to the web site. In the pattern discovery stage, statistical, database and machine learning operations are performed to retrieve hidden patterns representing the typical behaviour of users, as well as summary statistics on web resources, sessions and users.
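A minimal sketch of the data collection and pre-processing stage is given below: Apache-style common-log lines are parsed, grouped by client IP and split into sessions whenever more than thirty minutes pass between requests. The sample log lines and the thirty-minute gap are assumptions for illustration.

```python
import re
from datetime import datetime, timedelta
from collections import defaultdict

# Illustrative access-log lines in Apache common log format.
LOG_LINES = [
    '192.168.1.10 - - [10/Jan/2010:10:00:01 +0000] "GET /index.html HTTP/1.1" 200 1043',
    '192.168.1.10 - - [10/Jan/2010:10:05:12 +0000] "GET /products.html HTTP/1.1" 200 2311',
    '192.168.1.22 - - [10/Jan/2010:10:06:40 +0000] "GET /index.html HTTP/1.1" 200 1043',
    '192.168.1.10 - - [10/Jan/2010:11:30:00 +0000] "GET /contact.html HTTP/1.1" 200 512',
]

LOG_PATTERN = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)')
SESSION_GAP = timedelta(minutes=30)   # assumed inactivity threshold between sessions

def parse(line):
    """Extract client IP, timestamp and requested URL from one log line."""
    ip, ts, method, url = LOG_PATTERN.match(line).groups()
    when = datetime.strptime(ts.split()[0], "%d/%b/%Y:%H:%M:%S")
    return ip, when, url

# Group requests by client IP, then split each client's requests into sessions.
by_ip = defaultdict(list)
for line in LOG_LINES:
    ip, when, url = parse(line)
    by_ip[ip].append((when, url))

for ip, requests in by_ip.items():
    requests.sort()
    sessions, current = [], [requests[0]]
    for prev, nxt in zip(requests, requests[1:]):
        if nxt[0] - prev[0] > SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(nxt)
    sessions.append(current)
    print(ip, "->", [len(s) for s in sessions], "requests per session")
```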

3 Classification
3.1 – What is Classification?

As the quantity and variety of available data increase, robust, efficient and versatile data categorization techniques are needed for its exploration [16]. Classification is a method of assigning class labels to patterns; it is a data mining methodology used to predict group membership for data instances. For example, one may want to use classification to guess whether the weather on a specific day will be "sunny", "cloudy" or "rainy". Classification uses the attribute values found in the data of one class to distinguish it from other classes, and it is mainly concerned with the treatment of large datasets. In classification we build a model by analysing the existing data and describing the characteristics of the various classes of data; we can then use this model to predict the class of new data. Classification is a supervised machine learning procedure in which individual items are placed into groups based on quantitative information about one or more of their characteristics. Decision trees and Bayesian networks are examples of classification methods. A closely related task is clustering, the data mining technique of finding similar data objects or points within a given dataset and separating them from other kinds of objects; similarity can be defined by a distance measure or any other parameter, depending on the need and the given data.

Classification is an ancient term as well as a modern one, since the classification of animals, plants and other physical objects is still valid today. Classification is a way of thinking about things rather than a study of the things themselves, so it draws its theory and applications from the complete range of human experience and thought [18]. In a broader picture, classification can include grouping medical patients by disease, retrieving the set of images containing a red rose from an image database, finding the set of documents describing "classification" in a document or text database, diagnosing equipment malfunction by cause, or grouping loan applicants by their likelihood of repayment. In the last case, for example, the problem is to predict a new applicant's loan eligibility given historical data about previous customers. Many techniques are used for data categorization or classification; the most common are decision tree classifiers and Bayesian classifiers.

3.2 – Types of Classification

There are two types of classification: supervised classification and unsupervised classification. Supervised learning is a machine learning technique for discovering a function from training data. The training data consists of pairs of input objects and their desired outputs. The output of the function can be a continuous value, in which case the task is called regression, or it can predict a class label for the input object, in which case it is called classification. The task of the supervised learner is to predict the value of the function for any valid input object after having seen a number of training examples (i.e. pairs of inputs and target outputs). To achieve this, the learner needs to generalize from the presented data to unseen situations in a meaningful way.

Unsupervised learning is a class of machine learning problems in which one seeks to determine how the data are organized. It is distinguished from supervised learning in that the learner is given only unlabelled examples. Unsupervised learning is closely related to the problem of density estimation in statistics, although it also covers many other techniques used to summarize and explain key features of the data. One form of unsupervised learning is clustering, which is covered in the next chapter; blind source separation based on independent component analysis is another example. Neural network models, adaptive resonance theory and self-organizing maps are among the most commonly used unsupervised learning algorithms. There are many techniques for implementing supervised classification; we will discuss two of the most commonly used, decision tree classifiers and naïve Bayesian classifiers.

3.2.1 – Decision Trees Classifier

There are many ways to represent classifiers, and the decision tree is probably the most widely used. It is one of the most popular supervised learning methods for data exploration: it is easy to use, can be represented as if-then-else statements or rules and works well even on noisy data [16]. A decision tree is a tree-like graph of decisions and their possible consequences, including resource costs, chance events, outcomes and utilities. Decision trees are commonly used in decision analysis and operations research to help identify the strategy most likely to reach a target. In machine learning and data mining, a decision tree is used as a predictive model, mapping from observations about an item to conclusions about its target value; more descriptive names for such tree models are classification trees or regression trees. In these tree structures, leaves represent classifications and branches represent the conjunctions of features that lead to those classifications. The machine learning technique of inducing a decision tree from data is called decision tree learning. Decision trees are a simple but powerful form of multiple-variable analysis [15]. Classification is performed by tree-like structures that apply different test criteria to a variable at each node, and new branches are followed based on the results of the tests at the nodes. A decision tree is a supervised learning system in which classification rules are constructed from the tree itself. Decision trees are produced by algorithms that identify various ways of splitting a data set into branch-like segments, and they try to find a strong relationship between the input values and the target values in the dataset [15].

In classification tasks, decision trees make visible the steps that must be taken to reach a classification. Every decision tree starts with a parent node called the root node, which is considered the parent of every other node. Each node in the tree evaluates an attribute of the data and decides which path to follow; typically the decision test is a comparison of a value against some constant. Classification using a decision tree is performed by traversing from the root node down to a leaf node. Decision trees can represent and classify diverse types of data. The simplest and most familiar form is numerical data, but organizing nominal data is also required in many situations; nominal quantities are normally represented by a discrete set of symbols. For example, the weather can be described in either nominal or numeric fashion: the temperature can be quantified as eleven degrees Celsius or fifty-two degrees Fahrenheit, or described with the terms cool, mild, cold, warm or hot. The former is numeric data, while the latter is nominal data; more precisely, cool, mild, cold, warm and hot form a special type of nominal data known as ordinal data, which carries an implicit assumption of an ordering among the values. In the weather example, purely nominal descriptions such as rainy, overcast and sunny can also be added; these values have no ordering or distance measure among them.

Decision trees are trees in which each internal node is a question, each branch is an answer to that question and each leaf is a result. Here is an example of a decision tree.

Roughly, the idea is that the number of stock items determines which decision we make: if we do not have much stock we buy at almost any cost, whereas if we already have a lot of items we buy only if the price is low. If there are fewer than 10 items in stock, we buy all we can when the unit price is below 10, otherwise we buy only 10 items. If there are between 10 and 40 items in stock, we check the unit price: if it is below £5 we buy only 5 items, otherwise we buy nothing, since the stock is already adequate. If there are more than 40 items in stock, we buy 5 items if and only if the price is below £2, otherwise there is no need to buy such expensive items. In this way the decision tree helps us make a decision at each level; a small sketch of this logic as code follows.
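Because the tree reads directly as if-then-else rules, the stock example above can be written down as a small function; the return values simply echo the decisions described in the text.

```python
def restock_decision(stock_items, unit_price):
    """Encode the stock/price decision tree described in the text as nested if-then-else rules."""
    if stock_items < 10:                      # low stock: buy at (almost) any cost
        return "buy all" if unit_price < 10 else "buy 10 items"
    elif stock_items <= 40:                   # moderate stock: buy only if it is cheap
        return "buy 5 items" if unit_price < 5 else "buy nothing"
    else:                                     # plenty of stock: buy only if it is very cheap
        return "buy 5 items" if unit_price < 2 else "buy nothing"

print(restock_decision(4, 8))     # low stock, affordable -> buy all
print(restock_decision(25, 6))    # moderate stock, too expensive -> buy nothing
print(restock_decision(60, 1.5))  # plenty of stock, very cheap -> buy 5 items
```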

Another example of a decision tree represents the risk factor associated with reckless driving. The root node at the top of the tree structure shows the feature that is split first for the highest discrimination; the internal nodes show decision rules on one or more attributes, while the leaf nodes are class labels. A person younger than 20 has a very high risk, while a person older than 30 has a very low risk. The middle category, a person aged between 20 and 30, depends on another attribute, the car type: if a sports car is used there is again a high risk, while if a family car is used the risk is low.
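To illustrate the learning side as well, the sketch below fits a decision tree to a tiny made-up dataset that mirrors these rules; it assumes scikit-learn is available, and the six training rows are invented for the example.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data mirroring the example: [age, is_sports_car (1 = sports, 0 = family)].
X = [[18, 1], [18, 0], [25, 1], [25, 0], [35, 1], [35, 0]]
y = ["high", "high", "high", "low", "low", "low"]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# The induced rules split on age first and fall back on car type for the middle ages.
print(export_text(tree, feature_names=["age", "is_sports_car"]))
print(tree.predict([[22, 1], [22, 0], [45, 1]]))   # expected: high, low, low
```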

In the fields of science and engineering and in applied areas including business intelligence and data mining, many useful features have been introduced as a result of the evolution of decision trees.

* With the help of transformations in decision trees, the volume of data can be reduced to a more compact form that preserves its major characteristics

Computer Forensics Investigation and Techniques

Introduction

I am a student of the International Advanced Diploma in Computer Studies (IADCS). In this course I have to complete a Computer Forensics assignment entitled "Didsbury Mobile Entertainments LTD". This assignment has helped me understand computer forensics investigation and techniques.

Before this assignment, although I was interested in computer forensics, I had hardly used a computer forensics toolkit or done any investigation. Through this assignment I have learnt many techniques for investigating a computer and have applied them practically. By doing this assignment I have therefore gained much valuable practical knowledge in computer forensics, and a heartfelt thanks goes to all the people at Myanma Computer Company Ltd. for their warm welcome during the period of the IADCS course and the development of this assignment.

Task 1

i) Report

DIDSBURY MOBILE ENTERTAINMENTS LTD

No(5), Duku place, Singapore Jan 10, 2010

Introduction

Computer forensics involves obtaining and analysing digital information to figure out what happened, when it happened, how it happened and who was involved. Moreover, it is used as evidence in civil, criminal or administrative cases.

Reasons for needing a computer forensic investigation

A computer forensics investigation can recover thousands of deleted emails, show when a user logged into the system and what he or she did, determine the motivation and intent of the user, search for keywords on a hard drive in different languages and gather evidence against an employee that an organization wishes to terminate. For these reasons, in order to know whether Jalitha has been spending her time on her friend's business or not, we need a computer forensic investigation.

Steps to pursue the investigation

In order to pursue the investigation, I would take the following steps:

1) Secure the computer system to ensure that the equipment and data are safe

2) Find every file on the computer system, including files that are encrypted, protected by passwords, hidden or deleted, but not yet overwritten.

3) Copy all files and work on the copies, as accessing a file can alter its original value (see the hashing sketch after this list)

4) Start a detailed journal recording the date, time and details of every piece of information discovered

5) Collect email, DNS, and other network service logs

6) Analyze with various computer forensics tools and software

7) Print out an overall analysis

8) Evaluate the information/data recovered to determine the case
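Step 3 in the list above is normally backed by cryptographic hashing: the working copy can be trusted only if its hash matches that of the original. A minimal sketch using Python's hashlib follows; the file paths are placeholders.

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder paths: the seized original and the working copy used for analysis.
original = sha256_of("evidence/original.img")
working_copy = sha256_of("evidence/working_copy.img")

if original == working_copy:
    print("Hashes match - the working copy is a faithful copy of the original")
else:
    print("Hashes differ - the copy cannot be relied on as evidence")
```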

Conclusion

Once we know the reasons and steps for the investigation, we can move on to conducting it. However, we should note that the first step of the investigation is critical: if the system is not secured, the evidence or data we find may not be admissible.

ii – a) Report for “The procedures to make sure the evidence holds up in court”

DIDSBURY MOBILE ENTERTAINMENTS LTD

No(5), Duku place, Singapore

Jan 12, 2010

Introduction

Evidence is any physical or electronic information (such as computer log files, data, reports, hardware or disk images) that is collected during a computer forensic investigation. The purpose of gathering evidence is to help determine the source of the attack and to introduce the evidence as testimony in a court of law.

Procedures to make sure the evidence holds up in court

In order to make the evidence admissible in court, we need to take the following steps:

1) Before any evidence can be gathered, a warrant must be issued so that the forensic specialist has the legal authority to seize, copy and examine the data

2) Take responsibility for ensuring that the law and the principles we follow are complied with

3) Obtain the evidence in a manner that ensures its authenticity and validity and that no tampering has taken place

4) Track the chain of custody, which is essential for preparing evidence, as it shows that the evidence was collected from the system in question and was stored and managed without alteration

5) Ensure that extracted/relevant evidence is properly handled and protected from later mechanical or electromagnetic damage

6) Prevent viruses from being introduced to a computer during the analysis process

7) Ensure that the original evidence is described in complete detail so that reliable evidence can be presented in court

8) Be prepared to answer reliability questions relating to the software we have used

Conclusion

In gathering evidence, authenticity, reliability and chain of custody are important aspects to consider. By following the above steps, we can handle the evidence properly so that it holds up in court.

ii – b) Evidence form

Didsbury Mobile Entertainments Ltd

IT Department

Computer Investigation

Case No.: 005

Investigation Organization: Gold Star

Investigator: Win Pa Pa Aye

Nature of Case: Company's policy violation case

Location where evidence was obtained: On suspect's office desk

Description of evidence (Item | Description | Vendor Name | Model No./Serial No.):

Item #1 | One CD | Sony | -

Item #2 | A 4GB flash memory device | Kingston | 05360-374.A00LF

Item #3 | - | - | -

Evidence Recovered by: Win Pa Pa Aye
Date & Time: 10.12.2009, 10:00 AM

Evidence Placed in Locker: E2419
Date & Time: 15.12.2009, 11:00 AM

Item # | Evidence Processed by | Description of Evidence | Date/Time

1 | Win Pa Pa Aye | Fully recovered deleted email on the drive which was sent to Radasa's company, including data exchanged between the businesses. | 13.12.2009, 3:00 PM

2 | Win Pa Pa Aye | Encrypted document hidden inside a bitmap file; decrypted and saved on another medium. | 18.12.2009, 9:00 AM

3 | Win Pa Pa Aye | Password-protected document covering the exchange of information with her friend; password cracked and file saved on another medium. | 22.12.2009, 2:00 PM

Task 2

Report for “the way the data is stored, boot tasks and start up tasks for Windows and Linux systems”

To investigate computer evidence effectively, we must understand how the most popular operating systems work in general and how they store files in particular. The type of file system an operating system uses determines how data is stored on the disk. The file system is the general name given to the logical structures and software routines used to control access to the storage on a hard disk system, and it is usually tied to a particular operating system. To understand the way data is stored in Windows XP and Linux, we need to look at the file systems of each.

The way the data is stored in Windows XP

Although Windows XP supports several different file systems, NTFS is its primary file system. We will therefore look at NTFS, as it offers better performance and features than the FAT16 and FAT32 systems.

NTFS divides all usable space into clusters and supports almost all cluster sizes, from 512 bytes up to 64 KB. An NTFS disk is logically divided into two parts: the MFT (Master File Table) area and the file storage area. The MFT consumes about 12% of the disk and contains information about all files located on the disk, including the system files used by the operating system. The MFT is divided into records of a fixed size (usually 1 KB), and each record corresponds to a file. Records within the MFT are referred to as metadata, and the first 16 records are reserved for system files. For reliability, the first three MFT records are copied and stored in the middle of the disk, while the rest can be stored anywhere on it. The remaining 88% of the disk space is for file storage. Below is the partition structure of an NTFS system.

Now that we have covered the file system of Windows XP, we will move on to the file system of Linux.

The way the data is stored in Linux

In Linux, ext2 has long been the default file system; its main advantages are its speed and robustness. However, there is a risk of data loss when sudden crashes occur, recovery can take a long time, and it may sometimes end with corrupt files. Building on the advantages of ext2 while adding protection against data loss and faster recovery led to the development of the journaling file systems ext3 and ReiserFS. Although ext2, ext3 and ReiserFS are the most popular file systems, other file systems such as JFS and XFS are also used in the Linux world.

Linux views all file systems through a common set of four objects: the superblock, the inode, the dentry and the file. The superblock is a structure that represents a file system and holds vital information about it, including the file system name (such as ext2), the size of the file system and its state, a reference to the block device, and metadata information. It also keeps track of all the inodes. Linux keeps multiple copies of the superblock at various locations on the disk to prevent losing such vital information.

Every object that is managed within a file system (a file or a directory) is represented in Linux as an inode. The inode contains all the metadata needed to manage objects in the file system. Another set of structures, called dentries, is used to translate between names and inodes, and a directory cache keeps the most recently used ones around. The dentry also maintains relationships between directories and files for traversing file systems. Finally, a VFS (virtual file system) file object represents an open file (it keeps state for the open file, such as the write offset, and so on).
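To make the inode idea concrete, the short Python sketch below prints the kind of per-object metadata an inode holds, as exposed through the stat system call. It is only an illustration: the path used is an arbitrary example, and the fields come from Python's standard os and stat modules rather than from any forensic tool.

import os
import stat
import time

# Minimal sketch: inspect the inode metadata of one file on a Linux system.
# The path below is just an example; substitute any file of interest.
path = "/etc/hostname"
info = os.lstat(path)          # lstat() avoids following symbolic links
print("inode number :", info.st_ino)
print("device       :", info.st_dev)
print("permissions  :", stat.filemode(info.st_mode))
print("link count   :", info.st_nlink)
print("owner uid/gid:", info.st_uid, info.st_gid)
print("size (bytes) :", info.st_size)
print("modified     :", time.ctime(info.st_mtime))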

While the majority of the file system code resides in the kernel (except for user-space file systems), Fig (2.3) shows the Linux file system from a high-level architectural point of view, together with the relationships between the major file-system-related components in both user space and the kernel.

The boot task and start up task of Windows XP

A good understanding of what happens to disk data at startup is also important, because accessing a computer system after it has been used for illicit purposes can alter the disk evidence. First we will discuss the Windows XP startup and boot process, and then move on to the startup and boot process of Linux.

Like any other PC system, Windows XP starts up by running the POST, initializing its intelligent system devices, and performing a system boot process. The boot process begins when the BIOS looks through the system for a master boot record (MBR). This record can reside on drive C: or at any other location in the system. When the BIOS executes the master boot record on the hard drive, the MBR examines the disk's partition table to locate the active partition. The boot process then moves to the boot sector of that partition, located in its first sector. There it finds the code to begin loading the secondary bootstrap loader from the root directory of the boot drive.

On an NTFS partition, the bootstrap loader is named NTLDR and is responsible for loading the XP operating system into memory. When the system is powered on, NTLDR reads the Boot.ini file. If Boot.ini contains more than one operating system entry, a boot menu is displayed, allowing the user to choose which operating system to load. Fig (2.4) shows a Boot.ini that contains two operating systems and lets the user choose.
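As an illustration of what NTLDR reads, the following Python sketch parses a Boot.ini laid out in the usual INI form and lists the operating system entries a boot menu would offer. The sample Boot.ini text and the ARC paths in it are assumptions made for demonstration only; NTLDR itself does not, of course, use Python.

import configparser

SAMPLE_BOOT_INI = """
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\\WINDOWS="Microsoft Windows XP Professional" /fastdetect
C:\\="Previous Operating System on C:"
"""

parser = configparser.ConfigParser(delimiters=("=",))
parser.optionxform = str  # keep the ARC path keys exactly as written
parser.read_string(SAMPLE_BOOT_INI)
print("Default entry:", parser["boot loader"]["default"])
for arc_path, label in parser["operating systems"].items():
    print("Menu entry:", label, "->", arc_path)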

After the user has selected the desired mode to boot to, NTLDR runs Ntoskrnl.exe and reads Bootvid.dll, Hal.dll and the startup device drivers. After the file system driver has loaded, control passes from NTLDR to the kernel. At this point, Windows XP displays the Windows logo.

Virtually all applications installed using the default installation decide that they should start when Windows starts. The "Startup" tab in the System Configuration utility lists the programs that run when the system boots. Fig (2.6) shows the programs listed when our system boots.

The boot task and start up task of Linux

Having looked at the startup process of Windows XP, we now turn to the startup process of Linux. In Linux, the flow of control during a boot also runs from the BIOS, to the boot loader, to the kernel. When the power is turned on, the BIOS performs hardware-platform-specific startup tasks. Once the hardware is recognized and started correctly, the BIOS loads and executes the partition boot code from the designated boot device, which contains the Linux boot loader.

Linux Loader (LILO) is the Linux utility that initiates the boot process, which usually runs from the disk’s MBR. LILO is a boot manager that allows you to start Linux or other operating systems, including Windows. If a system has two or more operating systems, LILO gives a prompt asking which operating system the user wishes to initialize.

When the user chooses a boot option, the chosen operating system is loaded into memory. The boot program, in turn, reads the kernel into memory, and once the kernel is loaded it transfers control of the boot process to the kernel. The kernel then performs the majority of system setup (memory management, device initialization) before separately spawning the idle process, the scheduler and the init process, which is executed in user space. The scheduler takes control of system management. The init process executes scripts as needed to set up all non-operating-system services and structures so that a user environment can be created, and then presents the user with a login screen.

We have described the way data is stored and the boot and startup tasks of Windows XP and Linux. After a thorough study of these areas, we can acquire and handle the evidence properly.

Task 3

a) Features comparison of “EnCase, Access Data’s Forensic Toolkit and ProDiscover”

Features of Guidance EnCase Forensic

* Forensically acquire data in a sound manner using software with an unparalleled record in courts worldwide

* Investigate and analyze multiple platforms using a single tool

* With prebuilt EnScript® modules, such as case initialization and event log analysis, it can automate complex and routine tasks, saving time during analysis

* Find information despite efforts to hide, cloak or delete

* Can easily handle large volumes of computer evidence and view all relevant files, including deleted files, file slack and unallocated space

* Directly transfer evidence files to law enforcement or legal representatives as necessary

* Include review options that allow non-investigators to review evidence easily

* Include report options that enable quick report preparation

Features of Access Data’s Forensic Toolkit

* Provides an integrated solution, so there is no need to purchase multiple tools to complete a case.

* Provides an integrated database that avoids application crashes, lost work and product instability.

* Automatically identifies encrypted files from more than 80 applications and cracks those files.

* Supports international languages, allowing us to easily search and view foreign-language data in its native format

* Includes email analysis that can recover and analyze a wide range of email and webmail formats

* Can generate different industry-standard report formats quickly and concisely

* Collects key information from the registry, including user information, application installation dates, hardware, time zone and recently used information

* While processing takes place, we can view and analyze data

Features of ProDiscover

* To keep the original evidence safe, it creates a bit-stream copy of the disk for analysis, including the hidden HPA section

* For complete disk forensic analysis, it searches files or the entire disk, including slack space, the HPA section and Windows NT/2000/XP alternate data streams

* Without altering data on the disk, it can preview all files, including metadata and hidden or deleted files

* Support for VMware to run a captured image.

* In order to ensure nothing is hidden, it examines data at the file or cluster level

* To prove data integrity, it can generate and record MD5, SHA1 and SHA256 hashes automatically.

* Examines FAT12, FAT16, FAT32 and all NTFS file systems, including Dynamic Disk and Software RAID, for maximum flexibility.

* Examines the Sun Solaris UFS file system and Linux ext2/ext3 file systems.

* Integrated thumbnail graphics, internet history, event log file and registry viewers facilitate the investigation process.

* Integrated viewer to examine .pst /.ost and .dbx e-mail files.

* Utilizes Perl scripts to automate investigation tasks.

* Extracts EXIF information from JPEG files to identify file creators.

* Automated report generation in XML format saves time, improves accuracy and compatibility.

* GUI interface and integrated help function assure quick start and ease of use.

* Designed to NIST Disk Imaging Tool Specification 3.1.6 to ensure high quality.

Comparison chart: AccessData FTK v2.0, Guidance EnCase Forensic 6.0 and ProDiscover Forensic.

Report for Choosing Access Data’s Forensic Toolkit

I think Access Data's Forensic Toolkit is the most beneficial for our lab, as it provides more forensic examination features than EnCase and ProDiscover. On the evidence side, Access Data can acquire more types of files and folders than the others, so it can be a powerful tool when we analyze files for evidence. Moreover, it uses a database to support large volumes of data, which helps our lab avoid application crashes, lost work and product instability.

Access Data is a GUI-based utility that can run on the Windows XP, 2000, Me or 9x operating systems; its demo version has most of the same features as the fully licensed version; it uses multi-threading to optimize CPU usage; it has a task scheduler to save time; and it allows data to be viewed and analyzed while processing takes place, so it meets the requirements of our lab. What is more, it supports international languages, so we can retrieve data regardless of the language in which it is written.

On top of that, it is powerful in searching, recovery, and email and graphic analysis. For these reasons, and from the forensic tools comparison chart above, I can conclude that Access Data's Forensic Toolkit is the most beneficial for our lab.

b) Forensic Analysis

Report for “Analyzing FAT32, NTFS and CDFS file system Using Access Data’s FTK”

Task 4

a) MD5 hash values of bmp, doc, xls files

None of the hash values generated by MD5 before modification match the hash values generated after modification.

b) Why hash values are same or different

A hash value is a fixed-length numeric value that uniquely identifies data, and data can be compared against a hash value to determine its integrity. The data is hashed and the hash value is stored. At a later time, or after the data has been received (for example by email), the data is hashed again and compared with the stored hash, or with the hash value it was sent with, to determine whether the data has been altered.

For the comparison to be meaningful, the original hash value must be kept secret from, or protected against, all untrusted parties. When the comparison is made, if the two hash values are the same, the data has not been altered; if the file has been modified or corrupted, MD5 produces a different hash value.
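A minimal Python sketch of this comparison is shown below. The file name is only an example taken from the task, and the helper function is hypothetical; it simply recomputes an MD5 digest with the standard hashlib module and compares it against a previously stored value.

import hashlib

def md5_of_file(path):
    # Read the file in chunks so large evidence files do not exhaust memory.
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

stored_hash = md5_of_file("info.doc")   # hash recorded before editing or transmission
# ... the file is later edited, copied or received by email ...
current_hash = md5_of_file("info.doc")  # hash recomputed afterwards
print("unchanged" if current_hash == stored_hash else "file has been altered")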

In Task 4(a), we first created a doc file containing data and generated its hash value with MD5. The hash value of the info.doc file is da5fd802f47c9b5bbdced35b9a1202e6. After that, we made a modification to info.doc and regenerated the hash value, obtaining 01f8badd9846f32a79a5055bfe98adeb. The hash value is completely different after the modification.

We then created a cv.xls file and generated its hash value. Before modification, the hash value is ef9bbfeec4d8e455b749447377a5e84f. After adding one record to cv.xls and regenerating the hash value, ccfee18e1e713cdd2fcf565298928673 is produced. The hash value of cv.xls changed after the data was altered.

Furthermore, we created a fruit.bmp file to compare the hash value before and after modification. The hash value before modification is 8d06bdfe03df83bb3942ce71daca3888 and after modification is 667d82f0545f0d187dfa0227ea2c7ff6. So the bmp file's hash value is also completely different after the data has been modified.

When we embedded the text file into each image file, the text was not visible in the image viewing utility and each image file looked like its original. However, the comparison of the hash values of each image file before and after inserting the short message shows they are completely different. As each image file has been altered by inserting the short message, the regenerated hash value is totally different from the original hash value.

On top of that, the original image file sizes changed after the short messages were inserted. The raster image file increased slightly, from 50.5 KB to 50.7 KB. Of the remaining three, the vector and metafile images decreased rather sharply: the vector file shrank from 266 KB to 200 KB, and the metafile from 313 KB to 156 KB. Only the bitmap remained stable, its file size neither increasing nor decreasing.

In a nutshell, we can conclude that the hash value changes whenever the file has been modified, while the file size may increase, decrease or remain stable depending on the file format.

d) Report for “differences of bitmap, raster, vector and metafile”

A bitmap image is a computer file made up of dots, or pixels, that form an image. The bitmap's pixels are stored as a grid of tiny squares. When we use a paint program, we can see that each bitmap pixel is like a block that is drawn or cleared block by block. A raster image is also a collection of pixels, but it stores the pixels in rows to make them easy to print, and it is resolution dependent: it cannot be scaled up to an arbitrary resolution without a loss of apparent quality. This limitation is overcome by the vector image.

A vector image is made up of many individual, scalable objects. These objects are defined by mathematical equations rather than pixels, so they always render at the highest quality. A vector object has many attributes, such as color, fill and outline, and these attributes can be changed without destroying the basic object.

A metafile is a combination of raster and vector graphics and can have the characteristics of both image types. However, if you create a metafile containing both raster and vector elements and enlarge it, the raster-format area will lose some resolution while the vector-format area remains sharp and clear.

If we have lost an image file, before doing anything we should become familiar with the data patterns of known image file types. Then the recovery process starts. The first step in recovery is to recover file fragments from slack space and free space; a fragment can contain header data that has been partially overwritten. So we use DriveSpy to identify possible unallocated data sets that contain full or partial image header values.

To locate and recover the image header, we need to know the absolute starting and ending clusters; otherwise we could collect the wrong data. Using DriveSpy, we can obtain the starting cluster number and the file size of the image we want to recover. To find the exact ending cluster, add the total number of clusters assigned to the file to the starting cluster position. Since we know the size of the image file, we can calculate the total number of clusters. Then we can locate the image file and retrieve the image header.
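The cluster arithmetic described above can be written out as a short calculation. The numbers below are made-up examples; the real starting cluster, file size and cluster size would come from the disk tool and the volume's boot sector.

import math

cluster_size = 4096            # assumed bytes per cluster for this volume
starting_cluster = 1250        # reported by the disk tool for the lost image
file_size = 50700              # size of the image file in bytes

clusters_needed = math.ceil(file_size / cluster_size)   # 13 clusters here
ending_cluster = starting_cluster + clusters_needed - 1
print("clusters needed:", clusters_needed)
print("cluster range  :", starting_cluster, "to", ending_cluster)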

After we get the header value, we open the file with Microsoft Photo Viewer. If the file opens successfully, recovery of the image file is complete; if not, we need to use Hex Workshop to examine the file's header.

Task 5

Report for “Investigation that prove Naomi’s innocence”

Before we begin tracing an email, we should know what makes an email illegal and what constitutes an email crime. Illegal email includes messages involving the sale of narcotics, extortion, sexual harassment, stalking, fraud, child abduction, and child pornography.

As Jezebel has received an offensive email, we need to access the victim's computer and copy and print the offensive email to recover the evidence it contains. Microsoft Outlook, Outlook Express and other GUI email programs support copying an email from the inbox to a location of our choice by dragging the message to the storage place. When copying the email, its header must be included, as it contains unique identifying information such as the IP address of the server that sent the message. This helps us when tracing the email.

After copying and printing the message, we should retrieve the email header to get the sender's IP address. Right-click on the message and choose Message Options to retrieve the email header. The following shows the header information retrieved from the mail on the victim's computer.

Line 1 (10.140.200.11) shows the IP address of the server sending the e-mail and provides the date and time at which the offending e-mails were sent. Although line 5 makes Jezebel appear to be the victim, line 1 identifies that the e-mail was sent from the IP address (10.140.200.11), which is the same as the victim's computer's IP address. So we can conclude that Naomi was not involved in sending the offensive e-mail; she is innocent, and the supposed victim, Jezebel, is the one who sent the offensive e-mails.
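A small Python sketch of how such a header can be examined programmatically is given below. The header text is invented for illustration (only the IP address matches the case) and the host names are placeholders; the standard email module unfolds the Received lines and a regular expression pulls out the originating IP address.

import re
from email import message_from_string
from email.policy import default

RAW_MESSAGE = """\
Received: from mail.sender.example ([10.140.200.11]) by mx.didsbury.example;
 Tue, 12 Jan 2010 09:42:00 +0800
From: sender@sender.example
To: jezebel@didsbury.example
Subject: offensive message

(body omitted)
"""

msg = message_from_string(RAW_MESSAGE, policy=default)
received_headers = msg.get_all("Received", [])
earliest_hop = received_headers[-1]   # the bottom-most Received line is closest to the sender
match = re.search(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", earliest_hop)
if match:
    print("Originating IP address:", match.group(1))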


Analysis of the Security Management Market in Hong Kong

The Security Management Industry

INTRODUCTION

Security management is the combination of hardware, software, and services that normalizes, aggregates, correlates, and visualizes data from disparate security products. Security management is a broad term that encompasses several currently distinct market segments.

With the presence of the Internet, spam is becoming increasingly costly and dangerous as spammers deliver more virulent payloads through email attachments. According to a recent IDC (2004) study, the volume of spam messages sent daily worldwide jumped from 7 billion in 2002 to 23 billion in 2004.

The Hong Kong population includes an increasing number of Internet users. This boom in electronic commerce makes communication and business transactions easier; however, it has also compromised internal data security through the presence of hackers. Industry analysts believe that increased spending on internet security products and the establishment of a corporate data security policy are equally important in avoiding information leakage. Estimated information security spending in Hong Kong was expected to reach USD 231 million in 2003 and to maintain stable growth to reach USD 252 million in 2004. U.S. security products enjoy an excellent reputation in Hong Kong and should continue to dominate the market.

According to Braunberg (2004), “a major early driver for security management products is the need to get a handle on event data emanating from intrusion detection systems. Many security management products are chiefly concerned with the consolidation, correlation and prioritization of this type of data. These event management and correlation products address the volume of data and its heterogeneous origin, both in terms of devices and vendors.”

SECURITY MANAGEMENT MARKET IN HONG KONG

Market Highlights

With the continuous increase in demand for international communication, the internet has been increasingly in demand. By using the Internet in business transactions, companies have expanded sales opportunities through e-commerce and reduced business costs. With the Internet, companies can also broadly expand their customer base.

However, in spite of all these benefits, the Internet has also brought costs to companies: it opens up networks and servers to external and internal attacks. In order to guard against these attacks, Hong Kong companies have increasingly felt the need to purchase Internet security products.

According to the HKCERT (2004) report, the number of PCs installed in Hong Kong companies skews to the low end. The survey shows that 63.5% of the surveyed companies had installed 1-9 PCs and only 1.3% had installed 100 PCs or more.

Consumer Analysis

In the HKCERT (2002) report, industry players estimated that the Hong Kong market for internet security products and services was USD 231 million in 2001 and would reach USD 252 million in 2004. U.S. internet security products are generally the major players, enjoying an excellent reputation in Hong Kong and continuing to dominate the market.

Industry Estimates

The 2004 HKCERT survey showed that Hong Kong companies adopted security technologies to secure their computers from attack. The survey covered 3,000 companies from different industry sectors in Hong Kong. According to the survey, “anti-virus software” was the most popular security measure, being used by 90.9% of the companies interviewed in 2004. “Physical security” (65.5%), “Firewall” (65.4%) and “Password” (60.6%) were the next three most common security measures adopted (HKCERT, 2004). The information security awareness of companies in Hong Kong has increased considerably, as the percentage of companies without any security measures in place dropped from 10.1% in 2003 to 3.6% in 2004 (HKCERT, 2004).

As the survey shows, the use of firewalls increased significantly in 2004. This is due to companies' growing awareness that basic security tools cannot completely stop viruses, and to the great effort software vendors put into promoting their products.

From the table above, the US ranks number one on the list, showing that the US was the major host of malware in 2006. Hong Kong, on the other hand, is only in 9th place, but it is still a major contributor of malware worldwide.

Sophos notes that up to 90% of all spam is now relayed from zombie computers hijacked by Trojan horses, worms and viruses under the control of hackers. This means that spammers do not need to be based in the same country as the computers being used to send the spam (IET, 2007).

Sophos found that the most prolific email threats during 2006 were the Mytob, Netsky, Sober and Zafi families of worms, which together accounted for more than 75% of all infected email (IET, 2007).

According to the report, email will continue to be an important vector for malware authors, though the increasing adoption of email gateway security is making hackers turn to other routes for infection (IET, 2007). Malware infection will continue to affect many websites. SophosLabs is uncovering an average of 5,000 new URLs hosting malicious code each day (IET, 2007).

In 2006, a decrease in the use of spyware was observed, owing to multiple Trojan downloaders. “Statistics reveal that in January 2006 spyware accounted for 50.43% of all infected email, while 40.32% were emails linking to websites containing Trojan downloaders. By December 2006 the figures had been reversed, with the latter now accounting for 51.24%, and spyware-infected emails reduced to 41.87%.” (IET, 2007)

Market Channels

“In Hong Kong, consumer-oriented products such as anti-virus, overseas companies usually market their products via local distributors who will then channel the products to resellers and in some cases directly to retailers. For enterprise-oriented products, which require value-added services such as system integration and after-sales support, overseas companies can go through local distributors and/or resellers.” (Chau, 2003)

Competitive Analysis

The internet security market has four segments: anti-virus, firewall, encryption software, and Security Authentication, Authorization & Administration.

Anti-virus Software

Anti-virus software identifies and/or eliminates harmful software and macros, and anti-virus products are mostly software based. The major players in Hong Kong's consumer market include Symantec/Norton, which holds 50% of the Hong Kong market share, Norman, NAI/McAfee and Trend Micro, which are mostly of US origin (Chau, 2003). According to Chau (2003), consumers of anti-virus software are generally price sensitive and usually seek products with an established brand name.

In the enterprise anti-virus market, the major players include Trend Micro, NAI/McAfee, Norman and Symantec (Chau, 2003). According to the analysis, enterprise users usually seek professional opinions from their I.T. service provider and are more likely to focus on brand reputation and the features offered; pricing is not the main concern, although with the downturn in the economy companies are becoming more price-sensitive (Chau, 2003).

Firewall

Firewall software/hardware identifies and blocks access to certain applications and data. There are two categories of firewall products: software and hardware. The players in Hong Kong's software firewall market are Check Point Software, which dominates the market with a 60% share, Computer Associates, Symantec and Secure Computing (Chau, 2003).

In the hardware firewall market, the major players are Netscreen with 50% market share, Cisco (PIX) with 20% market share, Sonic Wall, Watchguard and Nokia of Finland (Chau, 2003).

According to the report, “the price for software firewalls averages USD 20 per user. On the hardware firewalls side, the number of users and the kinds of features determine the price. A low-end firewall server costs USD 600 to USD 700, a mid-range server costs USD 2,000 to USD 4,000, and a high-end server costs USD 10,000 and above. Netscreen and Sonic Wall are quite common in small to medium-sized enterprises. Cisco targets large corporations. Brand reputation and price are the prime concerns for buyers. According to industry players, there is an increasing preference for hardware firewalls over software firewalls because the hardware firewall has a speed advantage and is easier to maintain.” (Chau, 2003)

Encryption

Encryption software is a security product that uses cryptographic algorithms to protect the confidentiality of data, applications, and user identities. According to the study, “the most commonly-used standards in Hong Kong are SSH, SSL, PGP, RSA, and DES. Different standards are used for different objectives. SSH is mostly used to secure TCP connections between remote sites. SSL is commonly used in web browsers to secure web traffic. PGP is used for email encryption. RSA is for PKI system authentication and authorization. DES or 3DES are commonly used in the banking sector.” (Chau, 2003)

According to the report of Chau (2003), the major players in encryption in Hong Kong are PGP, Utimaco, F-Secure, SSH (Secure Shell), and RSA.

Security 3A Software

Security 3A (administration, authorization, and authentication) software is used for administering security on computer systems and includes the processes of defining, creating, changing, deleting, and auditing users.

Authentication software is used for verifying users’ identities and avoiding repudiation. Authorization software determines data access according to corporate policy. Administrative software includes internet access control, email scanning, intrusion detection and vulnerability assessment, and security management. The major players in PKI system in Hong Kong are Baltimore of UK, Verisign, and Entrust (Chau, 2003).

Intrusion Detection Systems (IDS)

An intrusion detection system (IDS) examines system or network activity to find possible intrusions or attacks. Intrusion detection systems are either network-based or host-based. Network-based IDS are more common.

According to the report of Chau (2003), the major players in IDS in Hong Kong are ISS (RealSecure), which dominates the market with a 65% share, Enterasys (Dragon), Symantec (Intruder Alert), Tripwire (Tripwire), Computer Associates (Entrust Intrusion Protection) and Cisco (Secure IDS). The analysis notes that IDS end-users are mostly medium to large enterprises and that the most significant purchasing criteria for end users are reliability and compatibility; price is not a key factor (Chau, 2003).

Content Security Products

The major players in content security products include Clearswift, which has a 50% market share, Websense, which has 25%, Trend Micro and SurfControl (Chau, 2003).

Market Trends

According to the report, on the corporate side, demand for network-based anti-virus products is likely to increase more than demand for desktop-based anti-virus products, since most virus attacks arrive via the internet (Chau, 2003).

On the consumer side, by contrast, the market is likely to fade away, since consumers are downloading free anti-virus software from the Internet. It is expected that ISPs will increasingly provide AV protection as a value-added service to users (Chau, 2003).

In the firewall segment, demand for hardware-based appliance products is expected to increase among small and medium-sized companies (Chau, 2003).

For Intrusion detection and vulnerability assessment, it is predicted that “it will become very popular as enterprises will shift to a balance between internal and external threats. In addition, the distinction between host-based and network-based IDS is becoming blurry with the creation of IDS consoles that receive data from both the network sensors and host agents. Integrated solutions will become the trend.” (Chau, 2003)

Market Driver

There are several drivers of the security management market. Chau (2003) identified some of them; in his report he enumerated three: Internet growth, the telecommuting trend, and government-generated awareness of Internet security.

Internet Growth

In Hong Kong, with the increasing trend of globalization, the Internet has become the prevalent means of communication for business transactions and even between employees. According to a Hong Kong Government survey in 2001, 1.25 million households, or 61% of all households in Hong Kong, had PCs, of which 80% were connected to the Internet, compared to 50% of households with PCs in 2000, of which only 36% were connected (Chau, 2003). Generally, consumers use the internet to send emails, surf the web, carry out research, conduct online banking transactions, and make low-value purchases. The survey estimated that around 6% of all persons over 14 had used one or more types of online purchasing service for personal matters in the 12 months before the survey (Chau, 2003).

On the other hand, on the business side, more than one third of businesses in Hong Kong have internet connections. “In 2001, about 12% of businesses had delivered their goods, services or information through electronic means which is 4% higher than that in 2000. The estimated amount of business receipts received from selling goods, services or information through electronic means in 2000 was USD 1 billion. Increased connectivity to the internet creates higher chances of hacker attacks, especially if the users have a constant live connection, such as through a DSL line.” (Chau, 2003)

“According to the Hong Kong Commercial Crimes Bureau, reports of computer-related offenses increased from 235 incidents in 2001 to 210 in the first nine months in 2002. Computer attacks had affected 5,460 computers in the past 12 months. Financial loss caused by computer-related crimes rose from USD 195,000 in 2001 to USD 236,000 in 2002. The Computer Crime Section of the Hong Kong Commercial Crimes Bureau believes that only 0.3% of the victims reported hacking incidents, fearing that doing so would damage their reputation. Facing increasing internal and external hacking threats, companies are seeking security tools to protect their network and to maintain public confidence.” (Chau, 2003)

Telecommuting Trend

Another major driver of security products, according to Chau (2003), is the increasing decentralization of the work force, such as mobile sales teams in the insurance industry who need to access corporate networks via PDA’s. There is an increasing trend of businesses and organizations which benefit from employees’ ability to dial into corporate networks via the internet, however, this often creates information security risks within the organization, resulting in increased dependence on, and greater deployment of, security products (Chau, 2003).

Government-generated awareness of internet security

Another major driver of security products is government awareness of the importance of Internet security. With this awareness, government initiatives have been formed. For example, the SAR Government is committed to providing a safe and secure environment to foster the development of e-commerce in Hong Kong, and has built a public key infrastructure (PKI) through the establishment of a public certification authority and a voluntary CA recognition scheme in Hong Kong (Chau, 2003).

“Currently, there are four recognized certification authorities operating in Hong Kong which includes JETCO, Digi-Sign Certification Ltd., HiTRUST.Com and the Hong Kong Postmaster General. In addition to the establishment of the PKI systems, the Hong Kong Government has also engaged substantial resources to educate the public regarding the importance of information security. For instance, the Crime Prevention Unit of the Technology Crime Division of the Hong Kong Police is responsible for providing advice on all aspects of computer security. It also produces educational materials on raising computer security awareness and makes presentations on technology crime prevention topics.” (Chau, 2003)

In addition to the market drivers Chau enumerated, there are other drivers of the security management market. Braunberg (2004) identified two major groups: near-term and long-term market drivers. The near-term drivers are manage or prevent, perimeter management, vulnerability assessment, embracing standards, and the brains of the operation. The long-term drivers include complexity and cost, device and security integration, knowledge database resources, lack of trust, on-demand computing, and social engineering.

Near-Term Market Drivers

  1. Manage or Prevent. In the analysis of Braunberg (2004), the chief driver of event management solutions is the continuing and hugely annoying number of false positives pouring out of intrusion detection systems. According to him, a counter driver to growth in the managed security segment is the emergence of intrusion prevention systems, particularly in-line solutions that can perform real-time data blocking (Braunberg, 2004). The adoption of intrusion prevention system could inhibit spending on event management systems and security management vendors should consider these products competitive to their own (Braunberg, 2004)
  2. Perimeter Management. Security management products have evolved in response to the demand for securing the perimeter. According to Braunberg (2004), security management solutions are evolving to integrate data from a host of perimeter products, whereas event management systems often evolved along separate lines, with products for firewall, antivirus, and IDS.
  3. Vulnerability Assessments. According to Braunberg (2004), one of the near-term drivers of end-user concern is understanding what the security risks are. Generally, clients are looking to leverage vulnerability assessments to help prioritize emerging threats. Increasingly, vulnerability data is being leveraged in event management systems (Braunberg, 2004).
  4. Embracing Standards. According to Braunberg (2004), the industry is a long way from embracing standards for sharing event information but some progress has been made over the last year. The Internet Engineering Task Force’s Incident Object Description and Exchange Format (IODEF) draft specification is gaining some traction and its adoption would be a significant step forward for the market (Braunberg, 2004)
  5. The Brains of this Operation. According to Braunberg's analysis (2004), the infatuation with IPS will be short-lived unless significant improvements can be made in reducing false positives; however, security management products will increasingly play a major role in providing the analytic smarts behind IPS solutions.

Long-Term Market Drivers:

  1. Complexity and Cost. The more complex web-based business models become, the more tangled the security solutions end users must deal with. According to Braunberg (2004), businesses building online strategies from scratch can be overwhelmed by the initial investment in security solutions, while those trying to adapt existing solutions to evolving security concerns are besieged by maintenance costs.
  2. Device and Security Integration. According to Braunberg (2004), equipment makers are paying much closer attention to embedded security functionality in devices and are actively attempting to integrate security as a value-added service, in order to change end users' perception of security products as an "add-on" or an extraneous component of infrastructure. In addition, vendors are looking to unite service providers with standards programs that simplify client understanding and reduce the complexity of product buying (Braunberg, 2004).
  3. Knowledge Database Resources. Another market driver for security products is the knowledge database of attack patterns and other descriptions of the enemy, which vendors must keep current in order to deliver faster responses to known threats. According to Braunberg (2004), multi-product vendors in particular will look to evolve from real-time monitoring to broader real-time management.
  4. Lack of Trust: According to Braunberg (2004), end users, whether they are corporate users putting a business plan on a server or a consumer buying a CD, have ingrained habits that they are not necessarily willing to give up. For example, no matter how good an online bank’s security system is, a consumer will have to be convinced that its services are not only as good as a brick and mortar bank’s services, but better (Braunberg, 2004).
  5. On demand Computing: According to Braunberg (2004), the availability of ubiquitous computing resources on demand will further drive the need for sophisticated, highly flexible security management solutions that combine both identity management and event management. According to him, the demand for more esoteric offerings such as GRID computing is the major long-term driver for security management solutions (Braunberg, 2004).
  6. Social Engineering. According to Braunberg (2004), clients are still facing risks in security that employees represent just through the human desire to be helpful, and hackers exploit this through “social engineering.” According to him, a component of managed security will need elements of employee training to build awareness of outside threats (Braunberg, 2004).

According to the analysis of Braunberg (2004), the security segment will remain strong; the diversity of interest from an array of different types of companies indicates how much leverage there is in controlling the security function.

In addition, since end-user demand has also evolved toward more in-depth defensive strategies and best-of-breed approaches to purchasing decisions, security solutions have in turn become more complex.

Case Study: Trend Micro Enterprise

History

In 1988, Trend Micro Incorporated was founded by Steve Chang and his wife in California. Trend Micro Incorporated is a global leader in network antivirus and Internet content security software and services. The company led the migration of virus protection from the desktop to the network server and the Internet gateway, gaining a reputation for vision and technological innovation along the way. Trend Micro focuses on outbreak prevention and on providing customers with a comprehensive approach to managing the outbreak lifecycle and the impact of network worms and virus threats on productivity and information, through initiatives such as the Trend Micro Enterprise Protection Strategy. Trend Micro has grown into a transnational organization with more than 2,500 employees representing more than 30 countries around the globe.

Many of the leading high-tech and security industry analysts have tracked Trend Micro’s growth and performance for the last several years, hailing the company as “visionary”, citing its leadership and innovation in the security industry.

According to Brian Burke, IDC Research Manager, “Trend Micro has consistently demonstrated a strong position in the Secure Content Management market. To remain successful Trend Micro has adapted quickly to market challenges and the evolution of security threats such as spyware, phishing and spam, in which financial gain has become the number one driving force. Given Trend Micro’s track record and its strong upward momentum, we expect the company to continue delivering innovative solutions that provide customers with timely protection against unpredictable threats.”

Trend Micro has earned a reputation for turning great ideas into cutting-edge technology. In recognition of the antivirus company’s strategy and vision, the analyst firm Gartner has hailed Trend Micro as a visionary malicious code management supplier for four consecutive years. Citing its flexible and efficient transnational management model, BusinessWeek acknowledged Trend Micro as one of “a new breed of high-tech companies that are defying conventional wisdom.” According to IDC, Trend Micro has held the top global market share in internet gateway antivirus for six consecutive years.

A history of innovation

In 1995 Trend Micro became an industry pioneer in the migration of virus protection from the desktop to the server level, with the launch of Trend Micro™ ServerProtect. In 1997 it launched the industry's first virus protection for the Internet gateway with InterScan VirusWall. Since then, it has demonstrated a history of innovation in server-based antivirus products that has contributed to the leadership position it holds today in this market (according to the recent IDC report “Worldwide Antivirus 2004-2008 Forecast and 2003 Competitive Vendor Shares”).

Trend Micro continues to shift the paradigms of antivirus security with cutting-edge products, services and strategies like Trend Micro Network VirusWall, Outbreak Prevention Services, and its Enterprise Protection Strategy. Trend Micro is committed to following its path of innovation to help companies manage today's increasingly complex, fast-spreading malware threats.

SWOT Analysis

Strengths

  • Business and security knowledge
  • Trend Micro has been a pioneer and innovator in the antivirus software market since 1988, anticipating trends and developing products and services to protect information as new computing standards have been adopted around the world.
  • Service and support excellence, that is, Trend Micro products and services are backed by TrendLabs a global network of antivirus research and support centers. TrendLabs monitors potential security threats worldwide and develops the means to help customers prevent the spread of outbreaks, minimize the impact of new threats, and restore their networks.
  • Flexible workforce through contingent workers for seasonal/cyclical projects
  • Loyal, hardworking, and diverse workforce who, in addition to good compensation, have an opportunity to do well
  • Multinational corporation operating through regional subsidiaries to minimize cultural differences
  • Low employee turnover
  • Relatively rapid product development processes that allow for timely updating and release of new products
  • Revenues and profits rising at 30% a year with merger/acquisition or investment in 92 companies over past five years
  • Software products have high name recognition, broad-based corporate and consumer acceptance and numerous powerful features that are in use worldwide, thereby promoting standardization and competitive advantage through their ease of integration and cost-effectiveness
  • Top rating from Fortune for best company to work at and most admired company
  • World’s largest software company with global name recognition and strong reputation for innovative products

Weaknesses

  • Perceived by many as a cut-throat competitor that uses its dominant market position to marginalize competition by stealing/destroying the competition’s products, stifling product innovation, and decreasing the availability of competitor products
  • Products have a single application focus and do not work well with or on-top of other products
  • Reputation has suffered because of entanglement in antitrust and “permatemps” Vizcaino litigation
  • Misperceptions of security’s value or purpose

Opportunities

  • Cheaper global telecommunication costs open new markets as more people connect to the Internet, which in turn increases the need for security products
  • Mobile phone applications and exploitation of personal digital assistants represent a growth industry so that strategic alliances could provide the company with opportunity in a market where it currently has little or no significant presence
  • Business Continuity
  • Reduced Costs
  • Potential Revenue Opportunities
  • Trend Micro holds the top market share for both worldwide Internet gateway and email-server based antivirus sales.

Threats

  • Currency exchange rates affect demand for application/operation software and hardware, and fluctuating currencies can negatively impact revenues in the global marketplace
  • Recession or economic slowdown in the global market impacts personal computer equipment sales and the need for operating systems, which in turn would slow down the need for security systems
  • Software piracy of commercial and consumer applications software on a global scale threatens revenue streams
  • Technology life cycle is shorter and shorter
  • Inconsistency across the enterprise
  • Loss of sponsorship or visibility

Current Strategy

The continued success of Trend Micro is guided by its strategies. Innovation has always been the strategy of a technology company; at Trend Micro, however, innovation was not the only strategy implemented. There are many other essentials to be considered. The current strategies of Trend Micro are the following.

“Focus On the Essentials and Sacrifice the Rest”

It is known that focus is important and essential to the success of any business. According to Steve Chang, “strategy is about focusing on essential and sacrificing the rest” (Chang, 2002). In addition, according to Peter Firstbrook, program director, security & risk strategies, META Group, Trend Micro has done just that, maintaining an amazing laser-like focus on its business. And the authors of a Harvard Business School case study commented: “Although very entrepreneurial, Steve Chang held fast to a single strategic focus for over a decade. Rather than attempt to provide all security products to all customers, Trend Micro concentrated on developing ‘best-of-breed’ antivirus solutions.” (Pain and Bettcher, 2003)

Trend Micro's consistent and persistent focus has allowed the company to build on its strengths and consistently lead the market.

Innovation Isn’t Just About Your Software Products

Trend Micro has many product firsts under its belt: the first antivirus product for a server in 1993; the first Internet gateway protection antivirus software in 1996; the first e-mail anti-virus software product in 1998; the first Internet content security service in 1999.

However, for Trend Micro innovation applies to more than just its products; it is a pervasive notion that applies to other areas of the business as well. Innovation can be seen in its new type of global organization and in its new service offerings.

According to Steve Hamm in a 2003 BusinessWeek article, “Borders Are So 20th Century,” Trend Micro is an example of a new form of global organization, the transnational organization, which aims to transcend nationality altogether.

Hamm quotes C. K. Prahalad, a professor at the University of Michigan Business School, who says: “There’s a fundamental rethinking about what is a multinational company… Does it have a home country? What does headquarters mean? Can you fragment your corporate functions globally?” (Hamm, 2003)

According to Hamm (2003), Trend Micro was one of the first responders to viruses, able to deliver services 30 minutes before the market leader Symantec. He commented that “Trend Micro is able to respond so quickly because it’s not organized like most companies.” (Hamm, 2003)

The strategy of Trend Micro is to spread its top executives, engineers, and support staff around the world. “The main virus response center is in the Philippines,

Impact of the Technological Revolution

1 INTRODUCTION

The technological revolution has touched every aspect of people's lives, from shopping to banking. The changes have had a great impact on service quality and banking activities, and have enabled banks to compete in world markets (Siam 1999-2004, 2006).

The banking industry worldwide is witnessing growing technology-driven self-service in the form of electronic banking (e-banking), used to interact with customers as a way of increasing productivity.

The use of Information and Communication Technology (ICT) helps banks make strategic decisions by enabling better alignment of the business and building better relationships with customers. ICT has enabled banks to provide the following services:

  • Automated Teller Machines (ATM) that have been installed at convenient places for customers to access their accounts anytime.
  • Electronic Data Interchange (EDI) that allows different organisations to exchange transactional, financial and business information between their computer systems.
  • Plastic Cards designed to pay for goods and services without necessarily using cash and also to withdraw cash from ATM’s located worldwide.
  • Electronic Clearing Service (ECS) is a facility that allows funds to be transferred from one bank to another electronically. It can be used for bulk or repetitive transfers, either by institutions for dividend distribution, salaries, pensions, etc., or by individuals for regular payments such as utility bills and loan repayments.
  • Internet Banking as a channel of Electronic Banking (E-banking) allows the customer to do transactions through the bank’s web page in a flexible mode, i.e. at anytime and anywhere.

The flexibility of E-banking is a major benefit to customers because they are able to access banking services from the comfort of their homes or offices, with no more queuing at banks. For the banking sector, E-banking is a big investment of capital and resources: the initial acquisition of the relevant infrastructure, standardisation and security are expensive, especially for small banks in developing countries, though less of a problem for big banks in developed countries. Banks also have to follow the legislative and regulatory requirements set within a country to protect customers' rights, especially those concerning data protection.

1.1 Background of Study

The role of the internet has become unavoidable for business and society. Businesses and governments worldwide are always working on how to better utilise the internet in order to increase their penetration of the global market (Khan, Mahapatra & Sreekumar, 2009). The banking sector has seen the use of Information Technology (IT) as a better way of reducing traditional ways of working and keeping up with modern technological change in order to meet the global market. The growing changes in technology have economic and social consequences for our daily life, and these changes brought about the Internet. The Internet provides services such as the World Wide Web (WWW), Automated Teller Machines (ATM), Electronic Data Interchange (EDI) and Electronic Funds Transfer (EFT), which are the core business services of E-banking. The banking sector has adopted internet banking systems to enable customers to access their accounts globally and flexibly through its websites. This move to internet banking has seen banks reduce long queues, as some customers can serve themselves either through ATMs or through the website, depending on the type of service they want to perform. Although banking has embarked on internet banking systems, it has not totally abolished traditional banking activities. This allows customers who need face-to-face help to still come to branches for whatever activities or services they need, either because they do not trust the web or because they do not know the technology used and fear making mistakes.

The Internet is used worldwide for different things, some good and some malicious. This raises the issue of trust on the part of both website owners and users. Some users still prefer to go and queue in banks, either because they do not trust web services or because they are unfamiliar with the systems and therefore feel uncomfortable using e-banking. Trust should be built in order to encourage more customers to use the website for their banking needs. Trust can be categorised into tangible and intangible trust. Tangible trust is an implied trust that can be addressed through digital certificates, SSL protocols and service-level granularity. Intangible trust, on the other hand, is something that can be formed or reinforced; it is subjective and emotional and has a rational component. Trust can build or destroy an organisation's reputation.

1.2 Motivation of Study

Considerable work has been carried out in the field of e-banking/e-commerce trust (Smith & French 2005; Khalil 2007); however, there is a gap in knowledge regarding cultural differences, especially in developing countries like Botswana. The motivation for this research is as follows:

  • The need to show the importance of localising an e-banking site, as e-banking is a new phenomenon in Botswana.
  • To carry out further study of the cultures of two ethnic groups within the same country, as there has been very little research in this area. This is not the case in developed countries, where studies show that e-banking has been localised to suit its target markets (Singer, Baradwaj and Avery 2007).

1.3 Aim

The main aim of this research is to examine how the Tswana and Kalanga ethnic groups of Botswana culturally perceive trust on a B2C e-banking website and to design an e-banking website for each ethnic group.

1.4 Objectives

The following objectives will be achieved through this study:

  • Research will be carried out on how cultural background influences the trust in and use of e-banking services.
  • The findings of the research will be applied in the design of web sites that suit the cultures of the Tswana and Kalanga ethnic groups.

1.5 Research Questions

The research study aims to test the following key questions:

  1. What is the impact of culture on the contents of an e-banking site?
  2. How does culture affect online trust in e-banking?
  3. Is it necessary to consider culture, trust and usability when designing an e-banking website?
  4. How do the cultural differences between the Tswana and Kalanga affect e-banking?

1.6 Methodology

The research employs both primary and secondary data. Primary data will be collected through a structured survey administered online. A link to the online survey will be sent by email to respondents in Botswana and the UK. This method is chosen because it is flexible: respondents can answer the questionnaire in their own time and at their own computers. It is also cheaper to administer, responses are received more quickly, and if there are any errors in the questionnaire they are easier to correct.

2 E-SERVICES AND CULTURE – WHAT IS THE RELATIONSHIP

2.1 E-Commerce and E-Banking

The development of Information Technology and the advent of the internet have enabled traditional business activities to change into Electronic Commerce (E-commerce). E-commerce is a process that allows businesses and customers to exchange goods and services electronically anytime, anywhere, and it includes banking, stocks and bonds, retail shopping, movie rentals, etc. E-commerce has opened a global market where businesses can reach their respective customers quickly and cost effectively (Li et al 2009). For trading to be successful in this virtual world, both trust and culture must be considered vital. E-commerce includes inter-organisational marketing processes in which the following relationships are observed: B2B (business to business), B2C (business to consumer), and C2C (consumer to consumer).

E-banking, sometimes called electronic banking or internet banking, is a system that allows people to conduct transactions and manage their accounts without necessarily going to the ‘brick and mortar’ banks. To use internet banking, customers need personal accounts registered on their respective banks’ websites. For e-banking to be effective, banks should invest in IT infrastructure such as hardware, software and networking, which includes connection to the internet.

Automated Teller Machines (ATMs) and personal computers have reduced banks’ costs on paperwork and labour, since customers use the self-services offered by banks. However, it should be noted that there are still some people who want to be served by bank officials, either because they do not know how to operate the bank’s systems, do not trust them, or want face-to-face interaction with bank officials.

2.1.1 Benefits of E-Banking

Electronic banking or online banking is the most popular form of e-commerce for millions of people worldwide. Most banking products and services are now offered over the Internet. Banks have invested in robust information technology practices and secure-transaction technologies that have made electronic banking trustworthy. This has created several benefits of e-banking, as follows:

  • Convenience and flexibility as the customer is able to pay bills, shop and transfer money from anywhere at any time suitable to the customer as long as the customer has access to a personal computer and internet connectivity. There is no strictness of business hours as the services are available 24 hours every day unlike in the traditional brick and mortar where a customer has to observe working hours.
  • Customers are able to manage their finances, as they can access and cross-check their accounts at any time.
  • To the customer the only cost associated with e-banking is the cost of the time spent online which is usually charged by the internet provider.
  • There is also time and money saving as customers do not have to travel distances to their respective banks unless on crucial issues.

2.1.2 Limitations of E-Banking

As well as advantages, electronic banking has some limitations too. Below are some limitations of electronic banking.

  • Some bank websites contain so much information that they confuse customers; a customer may feel it is a waste of time if he/she cannot find the information wanted and may never bother to visit the website again.
  • The financial needs of the customer may not be quickly predicted and may therefore take some time to be resolved, which is an inconvenience to the customer.
  • Hacking and identity theft are on the rise, which calls for a certain amount of trust to be placed on the banks by electronic banking customers. The system should be able to withstand hacking.
  • There is no face-to-face interaction in electronic banking, and some customers still need the kind of service observed in a traditional bank, where staff can quickly solve or answer customers’ queries.
  • In case of internet failure the customer is unable to withdraw money from his/her account and may even be unable to use ATMs or credit/debit cards.
  • Some banks charge for ATM usage by non-customers; therefore, if a customer stays where there is no ATM for his/her bank, he/she will be charged for using the facility of another bank.

2.1.3 Security and Trust

Security issues are a major concern for everybody using the internet, whether for banking purposes or not. There is an increase in security risks in the banking sector as banks’ systems are exposed to risky environments. Confidentiality, integrity, privacy and availability are the core areas of security that banks and financial institutions must address (Jide Awe 2006).

This calls for banks and financial services authorities to plan ahead in monitoring and managing security threats. The security threats can be classified into three categories: serious breaches (e.g. fraud), breaches caused by casual hackers (e.g. web site defacement or denial of service that causes web sites to crash), and flaws in systems design (e.g. genuine users being able to see or use another user’s account). These threats have serious financial, legal and reputational implications for the banks affected. Banks and financial institutions need to put in place security measures to respond to these threats, and the measures need constant updating in order to cope with ever-increasing and more advanced threats. Banks should also have sufficient staff with security expertise to keep checking and updating the banks’ systems. These threats contribute to customers’ lack of trust in electronic banking, which is why some customers prefer to queue at banks for services they could otherwise have obtained over the internet.

Trust should be built in order to encourage more customers to use the web site for their banking service needs. McKnight, Cummings and Chervany (1998) define trust as “an individual’s beliefs about the extent to which a target is likely to behave in a way that is benevolent, competent, honest, or predictable in a situation”. Trust can be categorised into tangible and intangible trust. Tangible trust is an implied trust that can be addressed by the use of digital certificates, SSL protocols and service-level granularity. Intangible trust, on the other hand, is something that can be formed or reinforced and is subjective and emotional, though it also has a rational component. Trust can build or destroy an organisation’s reputation.
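As a concrete illustration of these tangible trust mechanisms, the short sketch below retrieves and inspects the TLS certificate that a web server presents to the browser. It is only a sketch in Python; the host name is a placeholder, not a real bank site.

# Minimal sketch: fetch and inspect the TLS certificate a server presents.
# The host name below is a placeholder, not a real banking site.
import socket
import ssl

def fetch_certificate(host: str, port: int = 443) -> dict:
    """Connect to the server and return its validated TLS certificate details."""
    context = ssl.create_default_context()  # verifies the chain against trusted CAs
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

if __name__ == "__main__":
    cert = fetch_certificate("example.com")
    print("Issued to:", dict(rdn[0] for rdn in cert["subject"]))
    print("Issued by:", dict(rdn[0] for rdn in cert["issuer"]))
    print("Valid until:", cert["notAfter"])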

Trust is very important and should be a critical area for each bank to consider, because if customers do not trust a bank then the bank will be out of business. Trust in e-banking is crucial because banks can lose money and reputation if hackers are able to access customers’ accounts. Trust can be fostered by putting stringent security measures in place on the banks’ systems and by including in the website the symbols, signs and text that make the customer aware of the security of the website. Some researchers believe that in electronic commerce consumer trust is even more important than in traditional transactions (Kim, Ferrin and Rao 2007). There are signs and symbols used in a website that indicate to the customer that the site is trustworthy; some of these trust signs are explicit and some implicit (French, Liu & Springett 2007).

2.1.4 Cultural Models

The world is made up of people with different cultural backgrounds, which explains the variation in their behaviour. This variation usually reflects the different cultures and values of these people. Culture is something that identifies and differentiates one person from another, and it is not inherited through genes but learned. The environment in which a person grows up usually determines the person’s culture, because he/she learns the language, norms and values of the people with whom he/she lives. Hofstede (1991) defines culture as “the collective programming of the mind which distinguishes the members of one group or category of people from another”. The manifestation of cultural differences is formed through a combination of four characteristics: symbols, heroes, rituals, and values.

Rituals are sacred activities that must be carried out within a cultural environment. Values are cultural notions that are mentally stored as one grows up within the cultural environment. Symbols are things like language, pictures/objects and gestures that carry meanings understood within the same cultural group. Heroes are people who are respected and considered to be role models within a cultural environment; however, these change as the child grows into adulthood.

Cultural differences across the world vary according to ethnic groups and also across geographic boundaries.

2.1.5 Hofstede’s Cultural Dimensions

Hofstede (1984, 1991) identified the following five cultural dimensions which could be used to compare and measure cultural differences.

2.1.5.1 Power Distance Index

Power is not distributed equally in society. This is indicated by some people having more power than others; for example, some people are born kings or chiefs, already having that status from a very early age, even as toddlers. These people will be respected from that very early age, even in the way they are addressed.

However, power distance can be measured differently depending on the society. There are societies termed large power distance cultures, where subordinates do as they are told and the superior gives instructions and is the only one who decides what is good for the society or organisation. On the other hand there are those termed low power distance cultures, where there is consultation between the superior and the subordinates. In this category the superior respects the subordinates and entrusts them with important assignments, believing that they will be completed successfully.

2.1.5.2 Individualism versus Collectivism

Individualism can be characterised by the nuclear family, where each individual acts independently, making his/her own choices and decisions. As a member of the nuclear family, the individual has to take care of himself/herself and his/her immediate family. On the other hand, collectivism can be characterised as patrilineal or matrilineal, where people, after being integrated into the society at birth, are looked after by extended families.

2.1.5.3 Masculinity versus Femininity

The division of roles depends on gender: men must provide for their families and women must take care of the children and the whole family. The assertiveness of men creates dominance over women in the economic life of the family, irrespective of whether it is an extended or a nuclear family. However, in developed countries there are some variations in the gender role pattern that enable women to enrol in courses that were initially designed for men and therefore to do jobs that were done by men. In some underdeveloped or developing countries where the gender role pattern still exists, women are barred from doing jobs that are considered to be designed for men and are also barred from enrolling on courses designed for men. This gender role pattern is still strictly followed in some underdeveloped countries, where men are said to be the heads of families, which gives them all authority over everything that goes on in the family. Women in such families do not have any say; they are told what to do, how and when, by their husbands, and they are not supposed to question the instructions from men.

2.1.5.4 Uncertainty Avoidance Index (UAI)

Most people fear uncertain situations because they cannot predict what might happen in the near future. To minimise this, organisations and societies adopt strict laws and rules, safety and security measures, and religious and cultural beliefs to protect themselves. However, avoidance of uncertainty varies across cultures. In a high power distance culture, the boss is the only one who makes decisions and the subordinates must strictly follow the boss’s instructions. Subordinates expect supervisors to tell them what to do because they regard each other as unequal. In organisations this is also indicated by wide salary gaps, whereas in societies it is indicated by the prestige given to superiors (e.g. chiefs) by their subordinates (e.g. tribes).

In a low power distance culture, supervisors respect their subordinates. The supervisors entrust subordinates with important assignments, trusting that the work will be done efficiently, and if something goes wrong the supervisor does not put the blame on the subordinates but rather takes it upon himself. The society believes people are equal irrespective of their education, religion or wealth. There is more democracy, as subordinates’ views are sought and taken into consideration when making decisions.

2.1.5.5 Long-Term Orientation versus Short-Term Orientation

Long-term versus short-term orientation is a dimension that was added after Hofstede was convinced by Michael Bond, who called this dimension Confucian dynamism. Values of long-term orientation are oriented towards the future, e.g. perseverance and thrift, while short-term values are oriented towards the past and present and are therefore more static, e.g. respect for tradition, reciprocation of greetings, favours and gifts, and personal steadiness and stability.
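Purely as an illustration of how such dimensions can be used to compare and measure cultures, the sketch below stores dimension scores as a simple mapping and computes a squared-difference distance between two profiles. The scores are hypothetical placeholders, not Hofstede's published figures for any country or ethnic group.

# Sketch: comparing two hypothetical cultural profiles using Hofstede-style scores.
DIMENSIONS = ["PDI", "IDV", "MAS", "UAI", "LTO"]

profile_a = {"PDI": 70, "IDV": 30, "MAS": 60, "UAI": 50, "LTO": 35}  # placeholder scores
profile_b = {"PDI": 55, "IDV": 40, "MAS": 55, "UAI": 45, "LTO": 30}  # placeholder scores

def cultural_distance(a: dict, b: dict) -> float:
    """Mean squared difference across the five dimensions (a Kogut-Singh-style
    index, without the variance normalisation used in the published measure)."""
    return sum((a[d] - b[d]) ** 2 for d in DIMENSIONS) / len(DIMENSIONS)

print(cultural_distance(profile_a, profile_b))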

2.1.6 Trompenaars, Hall and Other Cultural Models

There are several cultural models, most of which overlap with Hofstede’s dimensions (Kluckhohn; Trompenaars 2000). Trompenaars developed the following models:

2.1.6.1 Universalism versus Particularism

This can be viewed as authority versus consultation. Under authority, the one who has authority gives instructions and makes decisions without the involvement of others, whereas under consultation other people’s views are taken into consideration when making decisions.

2.1.6.2 Individualism versus Communitarianism

This dimension concerns the balance between individual and group needs.

2.1.6.3 Specific versus Diffuse Relationships

Here business is done either on an abstract relationship (a contract) or on a good personal relationship, in order to build liking and trust.

2.1.6.4 Neutral versus Affective Communication Styles

In this dimension people either hide and hold on to their emotions or show them, in which case they expect some emotional response.

2.1.6.5 Time Orientation

Monochronic cultures focus more on performing the task promptly, meeting the original plan, and prefer to do one task at a time. Polychronic cultures tend to multi-task, doing different things at the same time, and the emphasis is more on relationships than on tasks.

2.1.7 Hall’s Cultural Models

Hall (1976, 1983) developed three cultural dimensions that describe how people behave. The following are his cultural models:

2.1.7.1 Context

High Context – People rely on many contextual elements to understand the rules, which is a problem for those who do not know the unwritten rules.

Low Context – Rules are explained more explicitly, as little is taken for granted, so there is less chance of misunderstanding.

2.1.7.2 Time

Monochronic Time is where one thing is done at a time and the concern is achieving the task on schedule.

Polychronic Time is where several things are done at the same time (multi-tasking) and here the concern is on relationship and not schedule.

2.1.7.3 Space

High Territorial – Some people have greater concern for ownership and try to mark their territorial boundaries whether at home, parking space and even in shared offices.

Low Territorial – People here are not much concerned with the ownership of space; for them it is less important.

Hofstede, Trompenaars and Hall did extensive research that enabled them to rank countries’ cultural differences. Hofstede conducted his research on 50 countries, whilst Trompenaars conducted his on between 19 and 52 countries, though with fewer rankings. Although it is not clear whether Hall produced any rankings, he did compare cultural dimensions among the French, Americans and Germans.

2.1.8 Tswana Culture

Households in the Tswana polities usually take the form of three residential sites: one household in the village, one at agricultural holdings outside but not far from the village (where ploughing takes place), and the last a cattlepost (with kraals for keeping livestock owned by the family).

  • Power Distance: Tswana tribes greatly respect their elders, which is shown especially when younger people greet the elders. In Botswana greetings are used to judge somebody’s behaviour, and greetings are conducted in a certain manner. When greeting an elder, a younger person has to pause briefly to show respect, and if the younger person is a male wearing a hat, he has to take it off to show respect to the elder. A man also has to take off his hat when entering a house, as a custom, unless the man is a widower. Each Tswana tribe or ethnic group has a Chief (Kgosi), who is helped by paternal uncles and Headmen. The paternal uncles, by virtue of their close relationship to the Chief, are advisors, as they are considered to have royal blood. The Kgosi’s traditional court is called the Kgotla, and it is the main customary court within the village, where disputes or misunderstandings that could not be solved by the Headmen are settled. The Chief’s Kgotla also acts as the Traditional Court of Appeal within the village, where people who are not satisfied with the Headmen’s rulings can appeal. Chieftainship is inherited, so for a person to be a chief he/she has to be born into the royal family and not chosen. Most of the Tswana people are Christians, as Christianity was brought to Botswana as early as 1845 by a Scotsman named Dr. David Livingstone. The first Christian to be baptized by Dr Livingstone was Chief Sechele of the Bakwena, and this was a good sign towards improvement in people’s way of living; once a chief became a Christian, it was easier to convince other chiefs and the people to become Christians. Christianity also contributed a lot to Tswana culture, as it reduced the bureaucratic principle whereby only one person would make decisions for the whole family or tribe; nowadays consultation is the norm.
  • Individualism versus Collectivism: Collectivism is the norm in Tswana culture, where somebody has to take care of his/her family and also the extended family, such as uncles, grandparents, aunts, nephews and nieces. In the olden days class differentiation was very low and mostly invisible, because traditionally those who had more cattle would help those who had none by distributing cattle to those households for management. This helped the families because they would use the cattle to plough with and use their milk to feed their own family. This management of cattle also resulted in people being paid one cow every six months or every year, depending on the agreement between the owner of the cattle and the person taking care of them. However, some people do not want to take on the responsibilities of extended families, and that is why there are organisations like SOS and other orphanage organisations to take care of orphans, and also why the government gives out food rations on a monthly basis to orphans, elderly people and families considered to be very poor.
  • Masculinity versus Femininity: In the traditional Tswana setup masculinity is the norm and roles are distinguished according to gender. This is clearly visible in traditional ceremonies, where men are the only ones to sit on chairs and women sit on mats, and in meetings, where men speak first and women are to confirm what the men have said. Men were considered heads of the families and therefore their decisions were final and unquestionable. But since the Beijing Declaration and Platform for Action at the Fourth World Conference on Women in 1995 (United Nations World-Wide Web page 1995) and the government of Botswana’s emphasis on equality, some jobs/tasks which were considered to be for males only are now considered unisex. At present there are some women chiefs in some Tswana tribes, whereas traditionally chieftainship was considered to be for men: even if a chief died having only daughters, chieftainship would be given to one of the paternal uncles or his elder son, and the chief’s immediate family would thereby lose the chieftainship inheritance.
  • Uncertainty Avoidance: The Tswana ethnic group used to believe in ancestors, and most of them liked to consult traditional healers for different illnesses and for protection against evil spirits. Since the introduction of the Christian religion through Dr David Livingstone, most people no longer believe in traditional healing. The staple food of the Tswana is sorghum or corn meal porridge, which is made thinner for breakfast and thicker for lunch and supper, eaten with relish which may be chicken, meat from goats, sheep or cattle (sometimes pounded), the caterpillar known as ‘phane’, and various wild game and vegetables. These food customs have now shifted a little and are more commonly observed on ceremonial occasions like weddings and funerals, and westernised foods such as coleslaw, pumpkin, squash and rice are also prepared.
  • Long-term versus Short-term Orientation: Tswana culture used to allow children to go to school only to learn how to read and write. Most female teenagers were taken out of school to be married to elderly men in marriages arranged between the parents without the agreement of the female teenager; nowadays people find their own partners and marry when they feel they are ready, without being pushed.

2.1.9 Kalanga Culture

The Kalanga tribe is found in the north-eastern part of Botswana, with some in Zimbabwe, separated only by the border. The Kalanga in Botswana who still uphold their culture are mostly found in different villages on the north-east side of Botswana. The Kalanga language was taught in primary schools until 1972, six years after Botswana gained its independence from the British, and the Kalanga now believe that since the teaching of the Kalanga language in primary schools was discontinued, their culture has been jeopardised. The staple food of the Kalanga is sorghum or corn meal porridge, which is always made thick and eaten with relish. The relish comprises meat (sometimes pounded), the caterpillar known as ‘phane’, and various wild game and vegetables. These food customs have now shifted a little and are more commonly observed on ceremonial occasions like weddings and funerals, and westernised foods such as coleslaw, pumpkin, rice and squash are also prepared.

  • Power Distance: The Kalanga, like the Tswana, have chiefs who look after the tribe. Their ancestral beliefs remain very strong even though they also practise Christianity. This is shown in their annual Domboshaba ceremonies and in their prayers for rain. Their prayers are conducted at the hill called Domboshaba, where they believe their ancestral god ‘Ngwale’ resides. The word Domboshaba means Red Hill: ‘Dombo’ means hill and ‘shaba’ means red. The Bakalaka treat Domboshaba much as Muslims treat Mecca; in other words, Domboshaba is a holy place for the Bakalaka. They believe the ancestors are always watching over the living, and if the ancestors become upset they are able to send sickness to the living as a sign of displeasure. According to the Kalanga, the spirits’ displeasure is revealed through illnesses, droughts and other calamities and can be appeased only through worship of Ngwale.
  • Individualism versus Collectivism: The Kalanga still strictly practise collectivism, as they look after each other and their extended families. Individualism is avoided, as their belief is that ‘no man is an island’. They emphasise community care, which reflects the collectivism dimension.
    Incident Handling on Cloud Computing

    Introduction

    Cloud Computing

    Cloud computing provides a way to share distributed resources and services that belong to different organizations or sites. Because cloud computing shares these resources over systems in an open environment, it raises security issues that must be addressed before cloud computing applications can be expanded.

    Cloud computing is defined by NIST as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (such as networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or cloud provider interaction. Cloud computing is still considered a novel computing concept. It permits the use of computing infrastructure at more than one level of abstraction, with services provisioned on demand over the internet at lower cost, the implication being high elasticity and availability. Cloud computing has therefore been receiving a great deal of attention recently.

    Cloud computing services offer advantages from the economies of scale achieved, together with flexible utilisation of resources and improvements in staffing and work efficiency.

    However, cloud computing is an emerging form of distributed computing that is still in its infancy.

    The concept can be used at many levels of definition and analysis, and much has been written about cloud computing and how to define it. The main aim here is to identify the major paradigms of use and to give a common classification for the concepts and the significant details of the services.

    A public cloud is one in which the infrastructure and other computing resources are made available to the general public over the internet. It is owned and marketed by a cloud provider selling cloud services, typically an organization external to the subscriber. At the other end of the range is the private cloud, in which the computing environment is operated exclusively for a single organization. It may be managed by the organization or by a third party, and may be hosted in the organization’s own data centre or outside of it. A private cloud gives the organization greater control over the infrastructure and computing resources than a public cloud does.

    Other deployment models lie between private and public clouds: the community cloud and the hybrid cloud. A community cloud is similar to a private cloud, except that the infrastructure and computing resources are shared by several organizations that have common privacy, security and regulatory considerations, rather than serving a single organization exclusively.

    A hybrid cloud is a composition of two or more clouds (private, community or public) that remain unique entities but are bound together by standardized or proprietary technology that enables interoperability. As with the other deployment models, the choice affects the organizational scope and control of the computing environment, and therefore affects how the cloud is secured and supported.

    Three well-known and frequently-used service models are the following:

    Software-as-a-Service. Software-as-a-Service (SaaS) is an on-demand software service in which the user gets access to the required software through an intermediate client such as a browser, over the internet. The software platform and relevant files are stored centrally. It drastically reduces the total cost of software for the user, as the user does not incur any infrastructure cost, which includes hardware installation, maintenance and operating costs. Subscribers of these services are given only limited control over the desired software, such as preference selection and administrative settings. They do not have any control over the underlying cloud infrastructure.

    Platform-as-a-Service. Platform-as-a-Service (PaaS) is an on-demand platform delivery model in which the user is provided with a complete software platform that the subscriber uses to develop and deploy software. It also results in considerable savings for the subscriber, who does not have to incur the costs of buying and managing the complicated hardware and software components required to support a software development platform. The special-purpose development environment is tailored to the specific needs of the subscriber by the cloud service provider. Sufficient controls are given to the subscriber to aid smooth development of software.

    Infrastructure-as-a-Service. Infrastructure-as-a-Service (IaaS) is an on-demand infrastructure delivery service in which a host of computing servers, software, and network equipment is provided. This infrastructure is used to establish a platform on which to develop and execute software. The subscriber can cut costs to a bare minimum by avoiding any purchase of hardware and software components, and is given a great deal of flexibility to choose infrastructural components as required. Of the three models, the cloud subscriber controls the most security features.

    The figure illustrates the differences in scope and control between the cloud subscriber and the cloud provider.

    The central diagram shows the five conceptual layers of a cloud environment, which apply to public clouds and the other deployment models.

    The arrows at the left and right of the diagram denote the approximate range of the cloud provider’s and user’s scope and control over the cloud environment for each service model.

    The cloud subscriber’s extent of control over the system is determined by the level of support provided by the cloud provider: the higher the level of support from the cloud provider, the lower the scope and control of the subscriber. The physical elements of the cloud environment are shown by the two lower layers of the diagram. These physical elements are completely controlled by the cloud provider irrespective of the service model.

    The facility layer, which is the lowest layer, comprises heating, ventilation and air conditioning (HVAC), power, communications, and other aspects of the physical plant, whereas the hardware layer comprises network, storage and other physical computing infrastructure elements.

    The logical elements of a cloud environment are represented by the other layers.

    The virtualized infrastructure layer provides software components, such as hypervisors, virtual machines, virtual data storage, and supporting middleware elements, required to set up a capable infrastructure on which an efficient computing platform can be established.

    While virtual machine technology is commonly used at this layer, other means of providing the necessary software abstractions are not precluded. Similarly, the platform architecture layer entails compilers, libraries, utilities, and other software tools and development environments needed to implement applications. The application layer represents deployed software applications targeted towards end-user software clients or other programs, and made available via the cloud.

    IaaS and PaaS are very close as services and the difference between them is somewhat blurred. Basically they are distinguished by the kind of support environment, the level of support, and the allocation of control between the cloud subscriber and the cloud provider.
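    To make the scope-and-control discussion above concrete, the sketch below maps the five conceptual layers to the party that typically controls them under each service model. It is an illustrative assumption in Python, not taken from any particular provider's documentation, and the boundaries (especially within the virtualized infrastructure layer under IaaS) are approximate.

    # Hypothetical sketch of control allocation across the five conceptual layers.
    # The assignments below are typical, not authoritative for any specific provider.
    LAYERS = ["facility", "hardware", "virtualized infrastructure",
              "platform architecture", "application"]

    # Layers the *subscriber* controls under each service model;
    # everything else is controlled by the cloud provider.
    SUBSCRIBER_CONTROL = {
        "IaaS": {"virtualized infrastructure", "platform architecture", "application"},
        "PaaS": {"application"},
        "SaaS": set(),  # the subscriber only gets limited application preferences
    }

    def who_controls(model: str, layer: str) -> str:
        """Return which party controls a given layer under a given service model."""
        return "subscriber" if layer in SUBSCRIBER_CONTROL[model] else "provider"

    if __name__ == "__main__":
        for model in SUBSCRIBER_CONTROL:
            print(model, {layer: who_controls(model, layer) for layer in LAYERS})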

    The main thrust of cloud computing is not limited to supporting a single organization’s internal environment; it has also been to provide a vehicle for outsourcing parts of that environment to an outside party as a public cloud.

    As with any outsourcing of information technology services, concerns exist about the implications for system security and privacy.

    The main issue centres on the risks associated with moving important applications or data from within the confines of the organization’s computing centre to that of another company (i.e. a public cloud), which is readily accessible to the general public.

    Reducing cost and increasing efficiency are the chief motivations for moving towards a public cloud, but they must not come with reduced accountability for security. Ultimately the organization remains responsible for the security of the outsourced services; monitoring and addressing security problems that arise, as well as major issues such as performance and availability, remain the organization’s responsibility. Because cloud computing brings with it new security challenges, it is essential for an organization to oversee and administer how the cloud provider secures and maintains the computing environment and to obtain assurance that security is being provided.

    Incidents

    An event is any observable occurrence in a system or network. Events include a user connecting to a file share, a server receiving a request for a web page, a user sending electronic mail, and a firewall blocking a connection attempt. Adverse events are those with negative consequences, for instance crashes, network packet floods, unauthorized use of system privileges, unauthorized access to sensitive data, and execution of malicious code that destroys data. A computer security incident is a violation, or imminent threat of violation, of computer security policies, acceptable use policies, or standard security practices. The terminology for these incidents is helpful to the small business owner for understanding service and product offerings.

    Denial of Service- An attacker directs hundreds of external compromised workstations to send as many ping requests as possible to a business network, swamping the system.

    Malicious Code- A worm is able to quickly infect several hundred workstations within an organization by taking advantage of a vulnerability that is present in many of the company’s unpatched computers.

    Unauthorized Access- An attacker runs a piece of “evil” software to gain access to a server’s password file. The attacker then obtains unauthorized administrator-level access to a system and the sensitive data it contains, either stealing the data for future use or blackmailing the firm for its return.

    Inappropriate Usage- An employee provides illegal copies of software to others through peer-to-peer file sharing services, accesses pornographic or hate-based websites or threatens another person through email.
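    As a simple illustration, incidents reported to a handling team can be tagged with one of the four categories above so that they can later be counted and trended. The sketch below is hypothetical; the record fields are placeholders rather than any standard reporting format.

    # Hypothetical sketch of recording incidents under the four categories above.
    from dataclasses import dataclass
    from datetime import datetime
    from enum import Enum

    class IncidentCategory(Enum):
        DENIAL_OF_SERVICE = "denial of service"
        MALICIOUS_CODE = "malicious code"
        UNAUTHORIZED_ACCESS = "unauthorized access"
        INAPPROPRIATE_USAGE = "inappropriate usage"

    @dataclass
    class IncidentReport:
        reported_at: datetime
        category: IncidentCategory
        summary: str

    report = IncidentReport(
        reported_at=datetime(2010, 6, 1, 9, 30),
        category=IncidentCategory.MALICIOUS_CODE,
        summary="Worm spreading via unpatched workstations",
    )
    print(report.category.value, "-", report.summary)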

    Incident Handling:

    Incident handling can be divided into six phases: preparation, identification, containment, eradication, recovery, and follow-up.
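    The six phases can be treated as an explicit workflow. The following sketch is a simplified assumption, not an implementation of any particular team's process; it only walks an incident through the phases in order and records the progression.

    # Simplified sketch of the six-phase incident handling workflow named above.
    from enum import Enum

    class Phase(Enum):
        PREPARATION = 1
        IDENTIFICATION = 2
        CONTAINMENT = 3
        ERADICATION = 4
        RECOVERY = 5
        FOLLOW_UP = 6

    def handle_incident(summary: str) -> None:
        """Step through the phases in order, logging each transition."""
        for phase in Phase:
            # A real process would attach its own checklist and documentation
            # to each phase; here we only record the progression.
            print(f"[{phase.name}] {summary}")

    handle_incident("Worm outbreak on office workstations")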

    Step 1: Preparation: In the heat of the moment, when an incident has been discovered, decision-making may be haphazard, so resources, procedures and responsibilities should be planned and agreed in advance.

    Step 4: Eradication: Remove the cause of the incident and locate the latest clean backup (to prepare for system recovery).

    Step 5: Recovery: This phase ensures that the system is returned to a fully operational status. The following steps should be taken in the recovery phase: Restore the system.

    Validate the system.

    Once the system has been restored, verify that the operation was successful and that the system is back to its normal behaviour. The organisation can then decide whether to leave the system offline while patches are installed.

    Monitor the system.

    Once the system is back online, continue to monitor it for backdoors that escaped detection.

    Step 6: Follow-Up: This stage is important for identifying the lessons learned, which will reduce the likelihood of future incidents.

    Develop a detailed incident report and provide copies to management, the operating unit’s IT Security Officer and the Department of Commerce’s IT Security Program Manager. Provide any recommended changes to management.

    Implement the approved actions.

    Post-Incident

    If the organization has a post-incident lessons learned process, they may want the cloud vendor to be involved in this process. What agreements will the organization need with the cloud provider for the lessons learned process? If the cloud provider has a lessons learned process, does management have concerns regarding information reported or shared relating to the organization? The cloud vendor will not be able to see much of the company’s processes, capabilities or maturity. The company may have concerns regarding how much of its internal foibles to share. If there are concerns, get agreement internally first, then negotiate them, if possible, and have them written into the contract. If the vendor will not or cannot meet the customer’s process requirements, what steps will the organization need to take?

    An IH team collects and analyzes incident process metrics for trend and process improvement purposes. Like any other organization, the cloud provider will be collecting objective and subjective information regarding IH processes. As NIST points out, the use of this data serves a variety of purposes, including justifying additional funding of the incident response team. Will the organization need this IH process metric data from the provider to enable a complete understanding of the integration area in case the organization ever has a need to bring the cloud function back in-house? Will the organization need this data for reporting and process improvement in general? The data is also used for understanding trends related to attacks targeting the organization. Would the lack of this attack trend data leave the organization unacceptably exposed to risk? Determine what IH process metric data is required by the team and write it into the contract.

    The organization will need to decide if they require provisions with the cloud provider regarding their evidence retention policies. Will the vendor keep the evidence long enough to meet the organization’s requirements? If not, will the organization need to bring the cloud vendor’s evidence in-house? Will the vendor allow the customer to take custody of the evidence? If the vendor retains the evidence longer than the customer policies dictate does this work create risk for the customer? If so, what recourse does the customer have? Legal counsel will need to provide direction in this area in order to ensure compliance with laws for all jurisdictions.

    Background:

    Cloud computing has built on industry developments dating from the 1980s by leveraging outsourced infrastructure services, hosted applications and software as a service (Owens, 2010). In all its parts, the techniques used are not original.

    Yet, in aggregate, it is something very different. The differences provide both benefits and problems for the organization integrating with the cloud. The addition of elasticity and pay-as-you-go to this collection of technologies makes cloud computing compelling to CIOs in companies of all sizes.

    Cloud integration presents unique challenges to incident handlers as well as to those responsible for preparing and negotiating the contract for cloud services. The challenges are further complicated when there is a prevailing perception that the cloud integration sits “inside the security perimeter”, or that because the organisation has stated in a written agreement that the supplier must be secure, this must be sufficient.

    This sort of thinking may be naïve but, unfortunately, it is not rare. The cloud provider may have a great deal of built in security or they may not. Whether they do or not, incident handling (IH) teams will eventually face incidents related to the integration, necessitating planning for handling incidents in this new environment.

    The impacts of cloud integration warrant a careful analysis by an organization before implementation. An introduction of a disruptive technology such as cloud computing can make both definition and documentation of services, policies, and procedures unclear in a given environment. The IH team may find that it is helpful to go through the same process that the team initially followed when establishing their IH capability.

    Security Incident

    The term ‘security incident’ used in this guideline refers to any incident related to information security. It refers to information leakage that would be undesirable to the interests of the Government, or to an adverse event in an information system and/or network that poses a threat to computer or network security in respect of availability, integrity and confidentiality. On the other hand, incidents such as natural disasters, power cuts and data line failures are not within the scope of this guideline and should be addressed by the system maintenance and disaster recovery plan.

    Examples of security incidents include: unauthorized access, unauthorized utilization of services, denial of resources, disruption of services, compromise of protected data / program / network system privileges, leaks of classified data in electronic form, malicious destruction or modification of data / information, penetration and intrusion, misuse of system resources, computer viruses and hoaxes, and malicious codes or scripts affecting networked systems.

    Security Incident Handling

    Security incident handling is a set of continuous processes governing the activities before, during and after a security incident occurs. Security incident handling begins with planning and preparing the resources and developing proper procedures to be followed, such as the escalation and security incident response procedures.

    When a security incident is detected, a security incident response is made by the responsible parties following the predefined procedures. The security incident response represents the actions carried out to handle the security incident; these are mainly aimed at restoring normal operations.

    Specific incident response teams are usually established to perform the tasks of making security incident response.

    When the incident is over, follow up actions will be taken to evaluate the incident and to strengthen security protection to prevent recurrence. The planning and preparation tasks will be reviewed and revised accordingly to ensure that there are sufficient resources (including manpower, equipment and technical knowledge) and properly defined procedures to deal with similar incidents in future.

    Cloud Service

    The outlook on cloud computing services can vary significantly among organizations because of inherent differences in their missions, the assets they hold, the internal risks they face and their tolerance for risk.

    For example, a government organization that mainly handles data about individual citizens of the country has different security objectives than a government organization that does not. Similarly, the security objectives of a government organization that prepares and disseminates information for public consumption are different from one that deals mainly with classified information for its own internal use. From a risk perspective, determining the suitability of cloud services for an organization is not possible without understanding the context in which the organization operates and the consequences from the plausible threats it faces.

    The set of security objectives of an organization, therefore, is a key factor in decisions about outsourcing information technology services and, in particular, in making sound decisions about moving organizational resources to a public cloud, about the particular cloud provider to use, and about the service arrangements for the organization.

    What works for one organization may not work for another. Besides this, there are pragmatic considerations: many organizations cannot afford economically to protect all computing resources and assets to the highest degree possible, and must prioritize available options based on cost as well as criticality and sensitivity.

    While recognizing the strong advantages of public cloud computing, it is essential to keep security in focus; in particular, the organization’s security objectives are of major concern, so that future decisions can be made accordingly. Ultimately, the decision on cloud computing rests on a risk analysis of the trade-offs involved.

    Service Agreements

    Specifications for public cloud services and service arrangements are generally called Service Level Agreements (SLAs). The SLA represents the understanding between the cloud subscriber and the cloud provider about the expected level of service to be delivered and, in the event that the provider fails to deliver the service at the level specified, the compensation available to the cloud subscriber. The SLA typically forms part of the broader, overall services contract, or service agreement.

    The terms of service cover other important details, such as licensing of services, criteria for acceptable use, provisional suspension of service, limitations of liability, security policies, and changes to the terms of service.

    For the purposes of this report, the term SLA is used to refer to the service agreement in its entirety. Two types of SLA exist: a predefined, non-negotiable contract, and a negotiated agreement.

    Non-negotiable contracts are in many ways the basis for the economies of scale enjoyed by public cloud computing. Their terms are set entirely by the cloud provider and, with some offerings, the provider also retains the ability to change them. Negotiated SLAs are more like traditional information technology outsourcing contracts.

    These SLAs can be employed to address a corporation’s concerns about technical controls, procedures, security procedures and privacy policy, such as the vetting of employees, data ownership and exit rights, isolation of tenant applications, data encryption and segregation, tracking and reporting of service effectiveness, compliance with laws and regulations (e.g., the Federal Information Security Management Act), and the deployment of appropriate products following international or national standards (e.g., Federal Information Processing Standard 140-2 for cryptographic modules).
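    As an illustration only, such provisions can be captured in a simple review checklist that the organisation checks against the negotiated contract before signing. The structure and field names below are hypothetical, not drawn from any standard contract template.

    # Hypothetical sketch of an SLA review checklist covering provisions like those above.
    from dataclasses import dataclass, field

    @dataclass
    class SlaProvision:
        topic: str            # e.g. "data encryption and segregation"
        required: bool        # does the organisation require this provision?
        in_contract: bool     # is it actually written into the negotiated SLA?
        notes: str = ""

    @dataclass
    class SlaChecklist:
        provider: str
        provisions: list[SlaProvision] = field(default_factory=list)

        def gaps(self) -> list[str]:
            """Topics the organisation requires but the contract does not cover."""
            return [p.topic for p in self.provisions if p.required and not p.in_contract]

    if __name__ == "__main__":
        checklist = SlaChecklist("ExampleCloudProvider", [
            SlaProvision("vetting of employees", True, True),
            SlaProvision("data ownership and exit rights", True, False, "raise during negotiation"),
            SlaProvision("isolation of tenant applications", True, True),
            SlaProvision("FIPS 140-2 validated cryptographic modules", True, False),
        ])
        print("Unmet requirements:", checklist.gaps())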

    A negotiated SLA for critical data and application might require an agency

    A negotiated SLA is less cost effective because of the inherent cost of negotiation, which can significantly disturb and have a negative impact on the economies of scale that a non-negotiable SLA brings to public cloud computing. The result of a negotiation depends on the size of the corporation and the magnitude of influence it can exert.

    Irrespective of the type of SLA, it is very necessary to obtain pertinent legal and technical advice to make sure the terms of service meet the needs of the organization.

    The Security Upside

    While the biggest obstacle facing public cloud computing is security, the cloud computing paradigm provides opportunities for out-of-the-box thinking to improve the overall security of the corporation. Small corporations stand to gain the biggest advantage from cloud computing services, as small companies have limited staff and infrastructure support with which to compete with bigger organizations on technology and economies of scale.

    Potential areas of improvement where organizations may derive security benefits from transitioning to a public cloud computing environment include the following:

    Staff Specialization.

    Just like corporations with large-scale computing facilities, cloud providers provide an opportunity for staff to specialize in security, privacy, and other areas of high interest and concern to the organization. Increases in the scale of computing induce specialization, which in turn allows security staff to shed other duties and concentrate exclusively on security issues. Through increased specialization, there is an opportunity for staff members to gain in-depth experience, take remedial actions, and make security improvements more readily than would otherwise be possible with a diverse set of duties.

    Platform Strength. The structure of cloud computing platforms is typically more uniform than that of most traditional computing centers. Greater uniformity and homogeneity facilitate platform hardening and enable better automation of security management activities like configuration control, vulnerability testing, security audits, and security patching of platform components. Information assurance and security response activities also profit from a uniform, homogeneous cloud infrastructure, as do system management activities, such as fault management, load balancing, and system maintenance. Many cloud providers meet standards for operational compliance and certification in areas like healthcare (e.g., Health Insurance Portability and Accountability Act (HIPAA)), finance (e.g., Payment Card Industry Data Security Standard (PCI DSS)) and audit (e.g., Statement on Auditing Standards No. 70).

    Resource Availability. The scalability of cloud computing facilities permits the greatest consideration. Redundancy and disaster recovery capabilities are built into cloud computing environments, and on-demand resource capacity can be used for better resilience when facing increased service demands or distributed denial of service attacks, and for quicker recovery from serious incidents.

    When an incident occurs, an opportunity also exists to capture data more readily, in greater detail and with less impact on production. On the other hand, this resilience can have a downside: for instance, even an unsuccessful distributed denial of service attack can quickly consume large amounts of resources.

    Backup and Recovery.

    The backup and recovery policies and procedures of a cloud service may be better than those of the organization and, if copies are maintained in diverse geographic locations, may be more robust. Information stored within the cloud can be readily available, easier to store and highly reliable compared with information maintained in a traditional data centre; in such situations, cloud services can also serve as a means for offsite backup data archival. However, network performance over the internet and the amount of data involved are limiting factors that can affect restoration. The structure of a cloud solution extends to the consumer at the service endpoints used to access the hosted applications. Cloud clients can be browser-based or application-based, but because the main computational resources are held by the cloud provider, clients are generally lightweight computationally and easily handled on laptops, notebooks and netbooks, as well as embedded devices such as smart mobile phones, tablets and personal digital assistants.

    Information Awareness.

    Information maintained and processed in the cloud can present less risk to an organization than information dispersed on portable systems or removable media out in the field, where loss and theft of devices occur frequently. Many organizations have already made the transition to support access to organizational information from mobile devices.

    In addition to providing a computing platform or a substitute for in-house applications, public cloud services can also be focused on providing security to other computing environments.

    Data Centre Oriented.

    Cloud services can be used to improve the security of data centres. For instance, e-mail can be t