
A Lean Six Sigma Case Study

"If you want to prosper for a year, grow rice. If you want to prosper for a decade, plant trees. If you want to prosper for a century, grow people." – a wise old farmer reflecting back on a life of toil in the soil

PROJECT DESCRIPTION

This Lean Six Sigma case study presents a real-life healthcare problem and applies Continuous Improvement and Lean Six Sigma tools to show how some of those tools are put into place in the real world. The object of this project is your appropriate use of Lean Six Sigma tools and the data provided. Project completion is required to pass the course. Project assignments are assessed on a Complete/Incomplete basis. Each phase of the DMAIC process in the project has an assignment. Assignments must be submitted to the instructor by the end of the week corresponding to the DMAIC phase. The exception is the week 8 (Control phase) assignment, which must be submitted early in the last week of the course to allow time for grading. The instructor will determine whether a submitted project assignment is Complete. If an assignment is Incomplete, the instructor and student will iterate until it is Complete. All project assignments must be assessed as Complete for the student to pass the course; an Incomplete project will result in a failing grade for the course.

Student Case Study

Case Study: Process Improvement – Reduction in Wait Time for Patients in a Doctor's Office

Executive Summary
Dr. Deasley is a popular doctor in Tampa, Florida specializing in primary care. He spends a great deal of time with each of his patients, typically 45 minutes to one (1) hour. As a result, many other patients in the waiting room grow impatient at the long wait time. The doctor has office hours every day except Wednesdays, when he holds Hospital Clinic and does not see office patients. Dr. Deasley's office hours are 7:30 AM to 5:30 PM (patients can be scheduled up until 5:30 PM) on Tuesdays and Thursdays and 9:30 AM to 7:30 PM (patients can be scheduled up until 7:30 PM) on Mondays and Fridays. He does hospital rounds from 6:00 AM to 8:00 AM. He conducts patient call-backs between patients, during his lunch hour, and after office hours. The staff triage the calls so that he gets back to the most seriously ill patients first; even so, he sometimes does not return non-emergency calls until the next morning. Dr. Deasley becomes overbooked because he likes to have 10 patients scheduled per day, yet time constraints frequently force him to rebook patients he is unable to see.

Dr. Deasley's patients and staff love him for his patience and attention, but several long-term patients have left his practice because of this issue, which has reduced the office's revenue. In addition, his office is experiencing a rather high rate of staff turnover. Staff are responsible for booking patients and managing the workflow in the office. When backlogs occur and patients become annoyed about wait times, the staff usually bear the brunt of the patient dissatisfaction, which affects staff morale. Each time the office hires replacement staff, it takes a significant amount of time to train the new employees, and advertising for and recruiting competent staff is costly. Dr. Deasley is very concerned about both his patients and his staff.

His Office Manager, Ms. Smith, who was recently employed at Memorial Hospital of Tampa, participated in several Continuous Improvement projects at the hospital and is a certified Lean Six Sigma Green Belt. Ms. Smith has therefore suggested to the doctor that they conduct a Lean Six Sigma project with the objective of reducing patient wait time and improving office workflow. Ms. Smith explained the project improvements and objectives, and Dr. Deasley has approved the project. As an initial step, the Office Manager has established her team; each employee has a role in the project. Based on patient complaints and the doctor's requirements, they have some initial Voice of the Customer (VOC): patients would like to see the doctor within 10 minutes of arriving and spend no more than 30 minutes total in the office for routine visits, and the doctor would like to see 15 patients per day. These changes need to be made within 3 months in order to minimize patient dissatisfaction, stop patients from leaving the practice due to long wait times and rescheduling, and improve employee morale and retention.

Define
1. Complete a Project Charter with all of the required information.
   a. Please write the Problem Statement.
   b. Please write the Goal Statement utilizing S.M.A.R.T. objectives (Specific, Measurable, Attainable, Relevant and Time Bound).
   c. What is in scope? What is out of scope?
   d. Who are the key stakeholders?
   e. What are the key milestones?
2. Please complete a high-level "As Is" Process Map.
3. Please create a SIPOC of the process based on the information that you know. Feel free to use your imagination for this.
   a. Describe methods for collecting Voice of the Customer. (SEE APPENDIX A for VOC)
4. Please create an Affinity Diagram or list based on the VOC so you can identify customer "NEEDS" for the CTQ Tree.
5. Please create a Critical to Quality (CTQ) Tree utilizing the Voice of the Customer. Identify the Needs, the Drivers, and the Requirements or Metrics needed to meet these needs.

Conclusion of Define: The output of the DEFINE stage is a PROJECT CHARTER (PC) and STAKEHOLDER ANALYSIS (SA). The PC shall include a Problem Statement with Goals utilizing the S.M.A.R.T. methodology to address the problems identified. The Goal shall be aligned with the customer CTQ requirements. A clearly defined SCOPE is included in the PC: what is IN SCOPE and what is OUT OF SCOPE? Your team is identified and roles and responsibilities are defined. A SIPOC map is completed. An "As Is" Process Map is completed in order to better visualize the workflow in the current process. The DEFINE phase provides for identification of the VOC and the CTQs, their Needs, Drivers and Requirements. The student will have evaluated and affinitized the VOC. CTQ trees were created to identify key requirements for meeting the customer's needs. The project team should have a list of external key stakeholders, if applicable (e.g., Hospital Radiology), who may be impacted by process changes within the doctor's medical practice. If the doctor's staff schedule testing appointments for patients and are required to make frequent changes, this has an impact on the department or entity conducting the testing. The project team will have met with Dr. Deasley for his approval to proceed and now has a baseline to begin the Measure phase.

Measure
1. Based on customer requirements, the project team collected initial data. Use Pareto analysis of the # of occurrences data to determine the 5 factors which are causing over 95% of the problem with wait time. You need to determine the biggest contributors to the problem; one tool to accomplish this is the Pareto Chart. You also need to know whether it is reasonable to assume that these five parameters are normally distributed. (SEE APPENDIX B; a worked sketch follows this list.)
   a. Based on the Pareto analysis, what are the focus areas?
   b. Set up appropriate methods for tracking the focus areas. You will need to track the # of occurrences of each category and the actual values for measuring the ability to meet the requirements.
2. Define your Data Collection Plan. Include the types of data you will be collecting (discrete or continuous) and why. (In many instances you will have a mix of both types of data depending on the data source.)
3. Based on the data collected, construct FIVE (5) histograms for the data sets in APPENDIX C. (See the sketch after this list.)
   a. Interpret each of the histograms to determine whether the assumption of normality is reasonable.
   b. If the data are not approximately normally distributed, why not?
4. The team also believed there was a Motorola shift during the process. Please describe the Motorola shift and the potential causes of the shift they could have experienced.
   a. Calculate the PPM/DPMO for this process and determine the baseline sigma with the Motorola shift. (See the sketch after this list.)
5. Calculate the process performance, Pp and Ppk, based on the current process. The student will be able to compare the current process performance to the capability study performed for the process improvements. Hint: drawing a picture of the data based on a normal curve may help you visualize whether the data are skewed when evaluating the population distribution. Use an upper specification limit of 60 minutes and a lower specification limit of 0 minutes; in healthcare the lower limit will frequently be 0. (See the sketch after this list.)
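A minimal sketch of the Pareto analysis for item 1, in Python (pandas and matplotlib assumed available). The counts are the APPENDIX B occurrence data; the category labels are shortened here for plotting.

import pandas as pd
import matplotlib.pyplot as plt

# Occurrence counts from APPENDIX B
counts = pd.Series({
    "Time Dr. spends with patients": 79,
    "Arrival time of patients": 52,
    "Staffing of Dr.'s office": 41,
    "Medical devices not available": 30,
    "Rooms available": 22,
    "Patient left in hallway": 17,
    "Scheduling changes for testing": 15,
    "Patient rescheduled": 10,
    "Dr. arrives late": 4,
    "Staff arrive late": 3,
}).sort_values(ascending=False)

cum_pct = counts.cumsum() / counts.sum() * 100   # cumulative-percentage line

fig, ax1 = plt.subplots()
ax1.bar(counts.index, counts.values)
ax1.set_ylabel("# occurrences")
ax1.tick_params(axis="x", rotation=90)
ax2 = ax1.twinx()                                # second axis for the cumulative %
ax2.plot(counts.index, cum_pct.values, marker="o", color="red")
ax2.set_ylabel("cumulative %")
plt.tight_layout()
plt.show()

print(cum_pct.round(1))   # read off where the top categories cross your threshold

The five largest categories fall out of the sorted bar heights; the cumulative line shows how much of the total problem they account for.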
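For item 3, a sketch of the five histograms with a normality check. Here `appendix_c` is a placeholder name for a DataFrame holding the five APPENDIX C columns, and the Anderson-Darling test is one reasonable stand-in for the normality judgment the item asks for.

import matplotlib.pyplot as plt
from scipy import stats

# appendix_c: DataFrame with the five APPENDIX C columns, loaded beforehand
columns = ["Proper Medical Devices", "Rooms Available", "Staffing",
           "Arrival Time of Patients", "Time Dr. Spends with Patients"]

fig, axes = plt.subplots(1, 5, figsize=(20, 4))
for ax, col in zip(axes, columns):
    data = appendix_c[col].dropna()
    ax.hist(data, bins=10)
    ax.set_title(col, fontsize=8)
    # A-D statistic above the 5% critical value suggests non-normal data
    result = stats.anderson(data, dist="norm")
    ax.set_xlabel(f"A-D = {result.statistic:.3f} "
                  f"(5% crit = {result.critical_values[2]:.3f})")
plt.tight_layout()
plt.show()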
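For item 4a, a sketch of the DPMO and baseline-sigma arithmetic. The defect and unit counts below are placeholders to be replaced with your tracked data (a "defect" here meaning a visit whose wait exceeded the requirement); the 1.5 sigma adjustment is the conventional Motorola shift between long-term and short-term performance.

from scipy.stats import norm

defects = 14          # placeholder: visits that missed the wait-time requirement
units = 70            # placeholder: visits observed
opportunities = 1     # one wait-time opportunity per visit

dpmo = defects / (units * opportunities) * 1_000_000
print(f"DPMO = {dpmo:,.0f}")

sigma_lt = norm.ppf(1 - dpmo / 1_000_000)   # long-term sigma from the normal quantile
sigma_st = sigma_lt + 1.5                   # report short-term sigma via the 1.5 shift
print(f"sigma (long-term) = {sigma_lt:.2f}, sigma (with 1.5 shift) = {sigma_st:.2f}")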
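For item 5, a sketch of the Pp/Ppk calculation against the 0 to 60 minute specification, using the overall (long-term) standard deviation. The array shown holds only the first ten APPENDIX D wait times for brevity; use all 70 values in practice.

import numpy as np

wait_times = np.array([16, 15, 19, 48, 14, 47, 21, 16, 17, 16])  # subset of APPENDIX D

USL, LSL = 60.0, 0.0
mu = wait_times.mean()
sigma = wait_times.std(ddof=1)   # overall standard deviation, hence Pp/Ppk (not Cp/Cpk)

Pp = (USL - LSL) / (6 * sigma)
Ppk = min((USL - mu) / (3 * sigma), (mu - LSL) / (3 * sigma))
print(f"Pp = {Pp:.2f}, Ppk = {Ppk:.2f}")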

Conclusion of Measure: A Data Collection Plan was created. Data was taken on as many parameters as possible before changing any variables. Key data has been provided for your use as directed in the instructions above. Pareto charts have been created based on the VOC, and the 5 largest contributing factors will have been identified; these should have aligned with the data provided. A method for tracking the data to be captured for analysis should have been identified even if the actual data is already provided. Then, from the categories and data "collected," 5 histograms should have been created along with a narrative analysis, specifically addressing whether the data were normally distributed. An explanation of the Motorola shift is provided. PPM/DPMO is calculated, Pp/Ppk are calculated, and the current process sigma level is defined. It was found that Dr. Deasley was spending more time with his patients than necessary. The process now needs to be analyzed based on the data.

Analyze
1. Create a Stem and Leaf Plot of the downtimes captured from the patient wait times in the waiting rooms. (SEE APPENDIX D for the data set; a sketch follows this list.)
2. Calculate measures of central tendency for the downtime data. What can you interpret from these measures? Please document a conclusion. (SEE APPENDIX D for the data set)
3. Calculate measures of dispersion for the downtime data. What can you interpret from these measures? Please document a conclusion. (SEE APPENDIX D for the data set; a sketch follows this list.)
4. Two individual staff members were observed performing identical activities in the doctor's office, and 25 random samples were taken for each staff member. Medical Assistant #1 has been with Dr. Deasley for several years. Medical Assistant #2 is a new employee and has been with this medical practice for 9 months. Because she is a new employee, we want to determine how Medical Assistant #2 performs compared to Medical Assistant #1. (SEE APPENDIX E for the data sets)
5. Please provide the following information based on your analysis of the two Medical Assistants (a sketch follows this list):
   a. Medical Assistant #2 average
   b. Medical Assistant #2 standard deviation
   c. Null hypothesis
   d. Alternative hypothesis
   e. t-test statistic
   f. Critical value
   g. Statistical conclusion for the null and alternative hypotheses
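A sketch for item 1: a plain-text stem-and-leaf plot of the APPENDIX D wait times (stem = tens digit, leaf = units digit). The values are the 70 from the appendix, read row by row.

from collections import defaultdict

waits = [16, 15, 19, 48, 14, 47, 21,  16, 17, 16, 45, 80, 20, 46,
         17, 13, 26, 50, 6, 71, 48,   37, 47, 17, 49, 49, 47, 20,
         47, 11, 65, 63, 48, 50, 64,  32, 47, 15, 17, 47, 95, 16,
         48, 38, 17, 22, 48, 47, 44,  21, 17, 48, 10, 52, 20, 82,
         18, 20, 16, 18, 46, 50, 51,  75, 49, 44, 51, 48, 35, 58]

stems = defaultdict(list)
for w in sorted(waits):
    stems[w // 10].append(w % 10)   # e.g. 48 -> stem 4, leaf 8

for stem in range(min(stems), max(stems) + 1):
    leaves = "".join(str(leaf) for leaf in stems[stem])
    print(f"{stem:2d} | {leaves}")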
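For items 2 and 3, a sketch of the measures of central tendency and dispersion on the same data (`waits` is the list from the previous sketch).

import statistics as st

mean, median, mode = st.mean(waits), st.median(waits), st.mode(waits)
rng = max(waits) - min(waits)
var, stdev = st.variance(waits), st.stdev(waits)   # sample variance / std deviation
q = st.quantiles(waits, n=4)
iqr = q[2] - q[0]                                  # interquartile range

print(f"mean={mean:.1f}  median={median}  mode={mode}")
print(f"range={rng}  variance={var:.1f}  std dev={stdev:.1f}  IQR={iqr:.1f}")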
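For items 4 and 5, a sketch of the hypothesis test. APPENDIX E provides 25 raw measurements for Medical Assistant #2 but only the historical mean (0.0126) for Medical Assistant #1, so the sketch below runs a one-sample t-test of MA #2's mean against that historical value; if MA #1's 25 raw samples are also available, a two-sample t-test (scipy's ttest_ind) is the natural alternative.

import numpy as np
from scipy import stats

ma2 = np.array([0.009, 0.015, 0.010, 0.011, 0.011, 0.011, 0.011, 0.012, 0.010,
                0.008, 0.011, 0.011, 0.013, 0.008, 0.012, 0.010, 0.013, 0.014,
                0.012, 0.009, 0.014, 0.011, 0.015, 0.011, 0.012])   # APPENDIX E
mu_ma1 = 0.0126   # historical mean for Medical Assistant #1

print(f"MA#2 mean = {ma2.mean():.4f}, std dev = {ma2.std(ddof=1):.4f}")

# H0: mean(MA#2) = 0.0126    Ha: mean(MA#2) != 0.0126  (two-sided)
t_stat, p_value = stats.ttest_1samp(ma2, mu_ma1)
t_crit = stats.t.ppf(0.975, df=len(ma2) - 1)   # two-sided critical value, alpha = 0.05
print(f"t = {t_stat:.3f}, critical value = +/-{t_crit:.3f}, p = {p_value:.4f}")
print("Reject H0" if abs(t_stat) > t_crit else "Fail to reject H0")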

Conclusion of Analyze: Stem and Leaf Plots were created from the downtime data provided, measures of central tendency were also determined using the downtime data, and an interpretation of the results was made. Data was analyzed to review whether different staff members were performing similarly or not. Students should have established a null hypothesis and an alternative hypothesis from the data for the 2 staff members. An appropriate test was performed and conclusions were made based on the outcome.

IMPROVE
1. A staff member has been stating for months that there is a correlation between room availability and patient arrival time. Should the Office Manager have listened to this staff member's observation? After completing items 2 through 5, provide your thoughts on staff observations and how they might have achieved Office Manager buy-in sooner.
2. Construct a scatter diagram and calculate the correlation coefficient to see if she is correct. (SEE APPENDIX F for the data set; a sketch follows this list.)
   a. Is there strong correlation between room availability and patient arrival time?
   b. If there is strong correlation, is it positive or negative? (Answer with positive, negative or N/A.)
   c. What is the correlation coefficient between the two variables (use 6 decimal places)? What does this mean?
3. Discuss the 8 Deadly Wastes (MUDA) of the process.
4. Create a Fishbone Diagram. List the potential root causes, narrow them to the key root causes, and explain some of the key root causes.
5. Discuss improvements that you would suggest based on the findings from the Fishbone analysis.
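A sketch for item 2, using the 25 APPENDIX F pairs (numpy and matplotlib assumed available).

import numpy as np
import matplotlib.pyplot as plt

# APPENDIX F: room availability vs. patient arrival time (25 pairs)
rooms = np.array([154, 153, 152, 152, 151, 151, 151, 151, 151, 151, 151, 151,
                  151, 151, 151, 151, 150, 150, 150, 150, 150, 150, 150, 150, 149])
arrival = np.array([0.554, 0.553, 0.552, 0.551, 0.549, 0.549, 0.548, 0.548, 0.548,
                    0.547, 0.547, 0.547, 0.547, 0.547, 0.547, 0.546, 0.546, 0.546,
                    0.546, 0.546, 0.546, 0.545, 0.545, 0.545, 0.545])

r = np.corrcoef(rooms, arrival)[0, 1]
print(f"correlation coefficient r = {r:.6f}")   # item 2c asks for 6 decimal places

plt.scatter(rooms, arrival)
plt.xlabel("Room availability")
plt.ylabel("Patient arrival time")
plt.title(f"r = {r:.6f}")
plt.show()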

Conclusion of Improve: A scatter plot was constructed and a correlation analysis completed. A determination of whether the 2 factors correlate, based on the correlation coefficient, is stated, and comments on whether the correlation is positive or negative are included. The 8 Wastes were evaluated and identified where applicable. A Fishbone Diagram was created and many ideas were brainstormed for potential root causes; these were then narrowed to the critical few root causes. Many improvement suggestions were made.

CONTROL
An I-MR chart was plotted for the doctor's office to verify that the process was performing as planned and that the patients and doctors were satisfied.

1. Please indicate whether the control chart is stable and whether any Shewhart rule violations have occurred. (A sketch follows this list.)
2. A normality test was conducted. Please advise whether the data is normal.
3. A capability study was completed. Please advise whether the process is stable, along with any analysis you find relevant.
4. Please complete a Control and Monitoring Plan for the project.
5. Create a dashboard which the office can utilize to monitor the performance of the improvements as well as to support the sustainability of the improvements.
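A sketch for items 1 through 3: an individuals and moving-range (I-MR) chart with the standard 3-sigma limits, plus a normality check on the post-improvement data. `post_waits` is a placeholder array standing in for the data collected after the improvements; a fuller Shewhart-rules check (runs, trends, zone tests) can be layered on top of the rule-1 test shown.

import numpy as np
from scipy import stats

post_waits = np.array([12, 9, 11, 8, 14, 10, 13, 9, 11, 12])   # placeholder data

mr = np.abs(np.diff(post_waits))   # moving ranges between consecutive points
mr_bar = mr.mean()
center = post_waits.mean()
ucl_i = center + 2.66 * mr_bar     # 2.66 = 3 / d2, with d2 = 1.128 for n = 2
lcl_i = center - 2.66 * mr_bar
ucl_mr = 3.267 * mr_bar            # D4 = 3.267 for the MR chart

print(f"I chart:  CL = {center:.1f}  UCL = {ucl_i:.1f}  LCL = {lcl_i:.1f}")
print(f"MR chart: CL = {mr_bar:.1f}  UCL = {ucl_mr:.1f}")
print("Rule 1 violations at indices:",
      np.where((post_waits > ucl_i) | (post_waits < lcl_i))[0])

ad = stats.anderson(post_waits, dist="norm")   # normality check, as in Measure
print(f"A-D = {ad.statistic:.3f} (5% critical value = {ad.critical_values[2]:.3f})")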

Conclusion of Control: A conclusion regarding the stability of the control chart was made and any violations of the Shewhart rules were noted. Students then observed the WET LAB TESTING and discussed the normality of the data. A capability study was done, presumably using data from the improvements made, and an analysis of the Minitab output was discussed. A Control and Monitoring Plan was created to ensure monitoring of the improvements for sustainability. Finally, a dashboard was developed for staff to use to visually track their performance and for discussion with Dr. Deasley. We have collected data after making many improvements to see if the process is now stable. We will continue to monitor our progress and follow the control plan.

Please make final conclusions of the project.

APPENDIX A: VOICE OF THE CUSTOMER

Feedback from Patients:

I wait too long. I only have an hour for Lunch. I make my appointments specifically at Lunch
time because I can’t come after work.

I like to come very early and be one of Dr. D’s first patients. If I am not his 1st, I end up waiting
and am late for work. My company is very strict about being on time.

I wouldn’t mind if the doctor spent less time with me. I only usually come for an Annual
Checkup and a Flu shot. If I feel really sick, I call the office. When I broke my arm last year, the
doctor sent me right to the hospital. You guys made the arrangements for my X-Ray so I didn’t
need to wait.

I can’t be late when I come in the afternoon. I need to pick my daughter up from school. If I
come in the afternoon, can you make it a short visit?

The doctor spends so much time asking me questions, can’t he look at my chart before I get
into the exam room?

The last time I was here, you put me in a room with someone else’s clothes. The woman had
gone to the Ladies’ room and came back to get dressed. I had to wait in the hallway.

Feedback from Staff

We need to organize the exam rooms. Dr. Deasley is always looking for something and I need to
go find it.

We can't have multiple people at the front desk assigning patients to rooms. They don't always assign patients to the right room, and then the equipment is not available.

Dr. D keeps taking equipment with him from room to room.

The patients are not getting here early enough for us to get them ready for the doctor. He likes to have their blood pressure, weight and temperature done before he comes in.

Patients keep arriving at the last minute, and then they get angry because they miss their appointment and need to wait.

I hope I never have to reschedule Mrs. Smyth for a new appointment because the doctor
couldn’t see her. She was practically screaming at me.

We had 2 patients, Mrs. Jones and Mr. Thomas ask for their records to be sent to a new
doctor’s office. That is the 4th time that has happened this year and we are only ½ way through
the year.

The new Medical Assistant was complaining because she said there is too much chaos here. I think she might be sorry she came here. I hope she doesn't go back to the hospital. It takes so much time to find good people and train them.

Feedback from Doctor

I don’t always have the instruments I need in the Exam Room. I need to have my Assistant go
find what I need. I’ve started taking Instruments with me to my next patient only to find 3 of
the same instrument I am carrying in the next Exam Room.

I have seen several patients waiting in the hall outside the Exam Room. I don’t like that
situation. We need to stop this practice.

I see some staff running around like crazy and others sitting around appearing to have nothing
to do.

I am not one of these "hands off" doctors; I like to spend time with my patients. But sometimes a patient will sit there with nothing to say and another patient will have a long list of issues.

If this improvement project is successful, I would like to see 15 Patients a day. We need to keep
operating costs in mind. We need to keep our equipment up to date and I need to ensure we
plan for salaries and bonuses at year end.

I notice we have had 3 people leave within the past 18 months. I would like to understand why.
It is very expensive to recruit staff and it takes time before they are proficient in their jobs. The
team we have now is very good. I would like to keep all of them. We do monitor salaries and
compare with market standards so I know our salaries and benefits are competitive.

Feedback from Other Sources

The Radiology Department is complaining because, they say, we make too many changes to the patient appointments.

The Laboratory Department is complaining because our patients are coming for testing outside their assigned appointment time and too late in the day.

APPENDIX B: VOC-based data to be used to construct CTQs. The project team will identify key focus areas in the doctor's office using a Pareto diagram. These focus areas will then be monitored as defined in the Data Collection Plan.

Time the doctor was spending with patients – 79
Number of times Dr. arrives late – 4
Proper medical devices not available – 30
Number of times patient is left in the hallway – 17
Rooms available at doctor's office – 22
Number of times staff arrive late – 3
Staffing of doctor's office – 41
Number of times scheduling changes were made for patient testing – 15
Number of times patient had to be rescheduled for Dr. visit – 10
Arrival time of patients – 52

APPENDIX C: Data set to be used to construct the 5 histograms. Columns, left to right:

Proper Medical Devices | Rooms Available at Dr. Office | Staffing at Dr. Office | Arrival Time of Patients | Time Dr. Spends with Patients

10.82 7.45 0.5502 172 48

10.82 7.55 0.5522 169 34

10.86 7.67 0.546 177 23

10.87 7.65 0.5462 170 32

10.84 7.62 0.5491 174 19

10.85 7.59 0.5486 175 37

10.86 7.6 0.5428 167 20

10.87 7.52 0.5532 171 47

10.89 7.49 0.5472 168 27

10.8 7.54 0.5522 172 31

10.81 7.52 0.5494 168 44

10.89 7.61 0.5519 163 27

10.81 7.52 0.5509 174 61

10.9 7.61 0.5412 169 17

10.87 7.53 0.5518 171 26

10.86 7.57 0.5523 172 50

10.85 7.59 0.5415 172 11

10.85 7.55 0.5477 168 53

10.86 7.61 0.553 169 18

10.86 7.54 0.55 166 75

10.83 7.57 0.5437 172 27

10.89 7.51 0.5463 168 36

10.76 7.63 0.5566 174 40

10.78 7.5 0.541 175 30

10.86 7.58 0.5542 164 23

10.9 7.55 0.5569 173 15

10.83 7.51 0.5432 168 15

10.82 7.5 0.5487 170 35

10.87 7.59 0.5537 173 45

10.88 7.58 0.541 170 25

10.67 7.64 0.5554 173 42

10.72 7.48 0.5521 167 64

10.65 7.57 0.5532 169 23

10.7 7.46 0.5563 172 53

10.67 7.53 0.5508 165 50

10.65 7.6 0.5527 170 16

10.6 7.49 0.5546 169 41

10.66 7.65 0.5478 170 7

10.61 7.55 0.5468 165 31

10.69 7.55 0.5566 172 18

10.71 7.51 0.5531 168 53

10.66 7.49 0.5482 173 34

10.64 7.49 0.5473 172 37

10.62 7.49 0.5442 170 80

10.63 7.56 0.5491 176 19

10.67 7.59 0.5596 175 26

10.62 7.47 0.5491 170 13

10.62 7.58 0.5507 169 18

10.63 7.55 0.556 177 36

10.65 7.47 0.5428 178 7

10.68 7.63 0.5488 172 34

10.68 7.47 0.5531 171 28

10.63 7.68 0.5483 171 44

10.68 7.55 0.5431 171 18

10.58 7.47 0.545 177 23

10.59 7.59 0.5392 172 17

10.64 7.57 0.5512 170 25

10.64 7.53 0.5465 169 15

10.68 7.58 0.5479 164 23

10.6 7.6 0.5452 174 21

Upper Spec 11 7.66 0.56 180 60
Lower Spec 10.5 7.45 0.54 165 0

Target 10.75 7.55 0.55 170 20

APPENDIX D: Data represents wait time in minutes beyond the scheduled appointment time for the last 70 patients. Use it to create the Stem and Leaf Plot.

PATIENT WAITING TIME (seven columns of ten values):

16 15 19 48 14 47 21
16 17 16 45 80 20 46
17 13 26 50 6 71 48
37 47 17 49 49 47 20
47 11 65 63 48 50 64
32 47 15 17 47 95 16
48 38 17 22 48 47 44
21 17 48 10 52 20 82
18 20 16 18 46 50 51
75 49 44 51 48 35 58

APPENDIX E: Data set for determining performance of Medical Assistant #2. The historical mean for Medical Assistant #1 was 0.0126.

MEDICAL ASSISTANT #2 data (25 samples):
0.009, 0.015, 0.010, 0.011, 0.011, 0.011, 0.011, 0.012, 0.010, 0.008, 0.011, 0.011, 0.013, 0.008, 0.012, 0.010, 0.013, 0.014, 0.012, 0.009, 0.014, 0.011, 0.015, 0.011, 0.012

APPENDIX F: This is the data set for evaluating correlation between Room Availability and Patient Arrival Time.

Room Availability   Patient Arrival Time
154 0.554
153 0.553
152 0.552
152 0.551
151 0.549
151 0.549
151 0.548
151 0.548
151 0.548
151 0.547
151 0.547
151 0.547
151 0.547
151 0.547
151 0.547
151 0.546
150 0.546
150 0.546
150 0.546
150 0.546
150 0.546
150 0.545
150 0.545
150 0.545
149 0.545

Six Sigma Process Map Template (Smartsheet): https://goo.gl/wZizs0

Fields: Process, Analysis Completed By, Department(s), Date Completed. Symbols (key): Step, Start/End, Input/Output, Document, Flowchart Link, Connectors.

Pareto Chart Template (Smartsheet): https://goo.gl/v5dcnZ

The Pareto principle states that, for many events, roughly 80% of the effects come from 20% of the causes. Sort the data in descending (high-to-low) order of count. Example data:

Category  | Count | Cumulative %
Issue 1   | 74    | 23%
Issue 2   | 58    | 42%
Issue 3   | 49    | 57%
Issue 4   | 33    | 68%
Issue 5   | 28    | 76%
Issue 6   | 26    | 85%
Issue 7   | 22    | 91%
Issue 8   | 16    | 97%
Issue 9   | 8     | 99%
Issue 10  | 3     | 100%

Control Plan Template (Smartsheet): https://goo.gl/6pfVZY

Columns: SOP # | Process Step | What's Controlled | Input or Output | Specification Characteristic | Specifications | Method of Measurement | Method of Control | Sample Size | Frequency | Who/What Measures | Recording Location | Decision/Corrective Action

Voice of Customer (VOC) Six Sigma Template (Smartsheet): https://goo.gl/p27jL8

Columns: ID | Customer Identity (Who is the customer?) | Voice of the Customer (What did the customer say?) | Key Customer Issue(s) (What does the customer need?) | Critical Customer Requirement (What resulting action is required?)

Tree Diagram Template (Smartsheet): https://goo.gl/PpiO3g

Levels: Objective/Vision, Primary Means/Long-Term, Secondary Means/Short-Term, Tertiary Means, Fourth Level/Targets, with a measure/data cell at each node.

Flow Chart Template (ASQ)

This template allows the user to develop a process flow chart, also called a process flow diagram; a detailed discussion can be found at www.ASQ.org. Begin the flow chart with a Start/End symbol; all symbols snap to the grid for easy alignment. Connectors link process steps and automatically snap to symbols. End with a Start/End symbol. The delete key will remove a selected symbol; re-set the print area for larger charts. To learn more about other quality tools, visit the ASQ web site.

Symbols: Step, Connector, Decision, Flowchart Link, Input/Output, Document, Start/End, Text.

Example steps: Receive Order; Enter Order in System; Credit Check (OK? Yes/No, No leads to Refuse Order); Check Inventory (OK? Yes/No); Check Materials Needed (OK? Yes/No); Order Material.

About this template: written for the American Society for Quality by Stat Aids. Feedback is welcome and encouraged; please e-mail Stat_Aids@yahoo.com.

DMAIC Roadmap (Lean Six Sigma)

DEFINE
Purpose: Establish a quantified problem statement, objective and business case that will become the foundation of your Six Sigma project. Conduct stakeholder analysis, select team members and kick off your project.
Key tools: Primary Metric; Process Map; Project Charter; Project Plan.
Key outputs: Process Map; gather VOC; translate VOC to CTQs; QFD/HOQ; COPQ; Primary & Secondary Metrics; establish Project Charter; Stakeholder Analysis; Team Selection; Project Plan.

MEASURE
Purpose: Refine your understanding of the process. Assess process capability relative to customer specifications. Validate measurement systems. Brainstorm potential x's.
Key tools: C&E; SIPOC; FMEA; Cpk.
Key outputs: Early Y=f(x) hypothesis; Detailed Process Map; SIPOC; Cause & Effect Diagram; Cause & Effect Matrix; FMEA; Basic Statistics; Normality Test; Capability Analysis; Gage R&R.

ANALYZE
Purpose: Conduct data collection and planned studies in order to eliminate non-critical x's and validate critical x's. Establish a stronger and quantified Y=f(x) equation.
Key tools: Normality Test; ANOVA; 2-Sample t-test; Equal Variances.
Key outputs: Narrowed Y=f(x); 1 & 2 sample t-tests; 1 & 2 proportions tests; equal variance tests; normality tests; ANOVA; Mood's Median; Mann-Whitney; paired t-test; Chi-Squared test.

IMPROVE
Purpose: Design, test and implement your new process or product under live operating conditions. Pilot solutions if feasible before broadly deploying expensive improvements or products.
Key tools: Pugh Matrix; Linear Regression; Binary Logistic Regression; DOE.
Key outputs: Refined Y=f(x); Pugh Matrix; correlation; simple linear regression; multiple linear regression; binary logistic regression; full factorial DOE; fractional factorial DOE.

CONTROL
Purpose: Plan, communicate, train and implement your product or process solutions. Ensure control mechanisms are established. Use Poka-Yoke, visual controls, SOPs and SPC wherever possible.
Key tools: Control Plan; SOPs; Communication Plan; SPC.
Key outputs: Control Plan; Training Plan; refined FMEA; Communication Plan; Standard Operating Procedures; Five-S Audit; Poka-Yoke; Visual Controls; Statistical Process Control.

DMAIC Project Checklist

DEFINE: Project Charter; Business Case (why is this project important); Problem Statement & Objective; Baseline Data (Primary Metric "Y"); Target; COPQ Estimate; Project Team; Project Scope; Project Timeline; Project Constraints/Dependencies; High Level Process Map; Customer Requirements Identified; Define Phase Report.

MEASURE: Detailed Process Map; SIPOC; Data Collection Plan (Potential X's); Measurement Systems Analysis (Primary Y); Process Capability Analysis; List of Possible X's; Prioritized List of X's to be Analyzed; Primary Metric Updated; COPQ Revision; Measure Phase Report.

ANALYZE: Sources of Variation Identified; Potential X's Eliminated; Root Causes Confirmed (Critical X's Identified); Primary Metric Updated; COPQ Revision; Analyze Phase Report.

IMPROVE: Potential Solutions Developed; Potential Solutions Prioritized; Solution Selected; Improvement Pilot/Test Plan; Improvement Pilot/Test Execution; Improvement Verified; New Process Capability; Updated Process Map; Solution Implementation Plan; Primary Metric Updated; COPQ Revision; Improve Phase Report.

CONTROL: Full Solution Implementation; Standard Operating Procedures Developed; Communication Plan; Training Plan; Audit Plan; Control Charts; Control Plan; Primary Metric Updated; COPQ Revision; Full Project Report.

Project Prioritization Matrix

Example worksheet (Facilitator: John Doe). Business priorities (ROI, Duration, Cost, Resource Difficulty, Complexity) are each given a weighting on a 1-10 scale. Each candidate project is rated against every priority, the ratings are multiplied by the weights, and the weighted ratings are summed into a score used to rank the projects (example scores from the worksheet: 277, 257, 191, 125).

Project Charter Template

Fields: Project Title; Black Belt; Project Champion; Executive Sponsor; MBB/Mentor; Primary Metric; Secondary Metric; Problem Statement; Business Case; High Level Project Timeline; Constraints & Dependencies; Project Risks; Other Diagnostics; Phase Start/Finish dates for Define, Measure, Analyze, Improve and Control; Approval/Steering Committee; Stakeholders & Advisors; Project Team & SMEs (Name, Organization).

Scorecard (InnovaNet Basic Scorecard example)

Tracks Goal / Forecast / Actual by quarter (Q1-Q4 and Full Year) for key business metrics (Operating Expense Reduction, Customer Satisfaction, Net Income, OWT) and operating metrics (recall and legacy open cases and case dollars, including those with Purchasing; OWT cumulative parts reviewed; OWT cumulative recovery groups).

Status Rules: current status is based on forecast vs. goal for future periods and on actual vs. goal for past periods; FYF status is based on full-year forecast vs. goal until the year completes. Status Conditions: Green >= 100% of Goal; Yellow 95%-99% of Goal; Red < 95% of Goal. Dollars are represented in millions.

S.I.P.O.C. Template

Columns: Suppliers | Inputs | Process | Outputs | Customers, with the process column broken into Start, Steps 1 through 4, and End.

Control Plan Template

Header fields: Process, Preparer, Page _ of _, Customer, Email, Reference No., Stakeholder, Phone, Revision Date, Business, Owner, Approval.
Columns: Process Step | CTQ/Metric | CTQ/Metric Equation | Specification/Requirement (LSL, USL) | Measurement Method | Sample Size | Measure Frequency | Responsible for Metric | Link or Report Name | Corrective Action | Responsible for Action.

Communication Plan Template

Header fields: Process/Function Name, Project/Program Name, Project Lead, Project Sponsor/Champion, Communication Purpose.
Columns: Target Audience | Key Message | Message Dependencies | Delivery Date | Location | Medium | Follow-up Medium | Messenger | Escalation Path | Contact Information.

Training Plan Template

Header fields: Project, Process, Project Lead, Sponsor, Status, Business Division.
Columns: Who | Where | When | How Many | Key Change/Process | Training Medium | Supporting Docs | Technology Requirements | Other Requirements | Trainer.

Cause & Effect Matrix (XY Matrix)

Header fields: Project, XY Matrix Owner, Date. Output measures Y1-Y10 run across the top; rate each "Y" on a scale of 1 to 10, with 1 being the least important output measure. Input variables X1-X30 run down the side; for each X, score its impact on each Y using a 0, 3, 5, 7 scale (0 = no impact, 3 = weak impact, 5 = moderate impact, 7 = strong impact). Each X's ratings, weighted by the Y importances, are summed into a score used to rank the inputs. XY Matrix premise: the XY Matrix or Cause & Effect Matrix functions on the premise of the Y=f(x) equation.
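A minimal sketch of the scoring arithmetic described above, with made-up weights and impact ratings purely for illustration.

import numpy as np

y_weights = np.array([9, 6, 3])   # hypothetical importance weights for Y1..Y3
impacts = np.array([              # rows = X1..X4, each rated 0/3/5/7 against each Y
    [7, 5, 0],
    [3, 7, 5],
    [0, 3, 7],
    [5, 0, 3],
])

scores = impacts @ y_weights      # weighted sum per input variable
for i, s in enumerate(scores, start=1):
    print(f"X{i}: {s}")           # the highest-scoring X's carry into Analyze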

DPMO : Sigma Level Table (SixSigmaDigest.com)

The table converts sigma level (1.0 to 6.0 in steps of 0.1) to DPMO, yield, and defect rate, both without and with the 1.5 sigma shift. For example, without the shift a 1.0 sigma process corresponds to 317,310 DPMO (68.2690% yield) and a 2.0 sigma process to 45,500 DPMO; with the 1.5 sigma shift, a 6.0 sigma process corresponds to the familiar 3.4 DPMO. Intermediate values can be computed directly from the normal distribution (DPMO = 10^6 x probability of a defect).

Sample Size Calculator

Data type: Discrete. Enter the proportion defective (0.50) and the acceptable margin of error (0.05).
Required sample size @ 99% CI: 666
Required sample size @ 95% CI: 385
Required sample size @ 90% CI: 271
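The calculator follows the standard sample-size formula for a proportion, n = z^2 * p * (1 - p) / E^2, rounded up. A sketch that reproduces the three values above (using the conventionally rounded z of 2.58 at 99%, as the calculator evidently does; the exact z = 2.5758 would give 664):

import math

p, E = 0.50, 0.05   # proportion defective, acceptable margin of error
z = {"99%": 2.58, "95%": 1.96, "90%": 1.645}   # two-sided z critical values

for ci, zval in z.items():
    n = math.ceil(zval**2 * p * (1 - p) / E**2)   # round up to whole samples
    print(f"{ci} CI: n = {n}")   # prints 666, 385, 271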

Table of Probabilities for the Standard Normal (Z) Distribution (right-tailed)

The table gives P(Z >= z), where z = row value + column value.

Z    0.00     0.01     0.02     0.03     0.04     0.05     0.06     0.07     0.08     0.09
0.0  0.500000 0.496011 0.492022 0.488034 0.484047 0.480061 0.476078 0.472097 0.468119 0.464144
0.1  0.460172 0.456205 0.452242 0.448283 0.444330 0.440382 0.436441 0.432505 0.428576 0.424655
0.2  0.420740 0.416834 0.412936 0.409046 0.405165 0.401294 0.397432 0.393580 0.389739 0.385908
0.3  0.382089 0.378280 0.374484 0.370700 0.366928 0.363169 0.359424 0.355691 0.351973 0.348268
0.4  0.344578 0.340903 0.337243 0.333598 0.329969 0.326355 0.322758 0.319178 0.315614 0.312067
0.5  0.308538 0.305026 0.301532 0.298056 0.294599 0.291160 0.287740 0.284339 0.280957 0.277595
0.6  0.274253 0.270931 0.267629 0.264347 0.261086 0.257846 0.254627 0.251429 0.248252 0.245097
0.7  0.241964 0.238852 0.235762 0.232695 0.229650 0.226627 0.223627 0.220650 0.217695 0.214764
0.8  0.211855 0.208970 0.206108 0.203269 0.200454 0.197663 0.194895 0.192150 0.189430 0.186733
0.9  0.184060 0.181411 0.178786 0.176186 0.173609 0.171056 0.168528 0.166023 0.163543 0.161087
1.0  0.158655 0.156248 0.153864 0.151505 0.149170 0.146859 0.144572 0.142310 0.140071 0.137857
1.1  0.135666 0.133500 0.131357 0.129238 0.127143 0.125072 0.123024 0.121000 0.119000 0.117023
1.2  0.115070 0.113139 0.111232 0.109349 0.107488 0.105650 0.103835 0.102042 0.100273 0.098525
1.3  0.096800 0.095098 0.093418 0.091759 0.090123 0.088508 0.086915 0.085343 0.083793 0.082264
1.4  0.080757 0.079270 0.077804 0.076359 0.074934 0.073529 0.072145 0.070781 0.069437 0.068112
1.5  0.066807 0.065522 0.064255 0.063008 0.061780 0.060571 0.059380 0.058208 0.057053 0.055917
1.6  0.054799 0.053699 0.052616 0.051551 0.050503 0.049471 0.048457 0.047460 0.046479 0.045514
1.7  0.044565 0.043633 0.042716 0.041815 0.040930 0.040059 0.039204 0.038364 0.037538 0.036727
1.8  0.035930 0.035148 0.034380 0.033625 0.032884 0.032157 0.031443 0.030742 0.030054 0.029379
1.9  0.028717 0.028067 0.027429 0.026803 0.026190 0.025588 0.024998 0.024419 0.023852 0.023295
2.0  0.022750 0.022216 0.021692 0.021178 0.020675 0.020182 0.019699 0.019226 0.018763 0.018309
2.1  0.017864 0.017429 0.017003 0.016586 0.016177 0.015778 0.015386 0.015003 0.014629 0.014262
2.2  0.013903 0.013553 0.013209 0.012874 0.012545 0.012224 0.011911 0.011604 0.011304 0.011011
2.3  0.010724 0.010444 0.010170 0.009903 0.009642 0.009387 0.009137 0.008894 0.008656 0.008424
2.4  0.008198 0.007976 0.007760 0.007549 0.007344 0.007143 0.006947 0.006756 0.006569 0.006387
2.5  0.006210 0.006037 0.005868 0.005703 0.005543 0.005386 0.005234 0.005085 0.004940 0.004799
2.6  0.004661 0.004527 0.004396 0.004269 0.004145 0.004025 0.003907 0.003793 0.003681 0.003573
2.7  0.003467 0.003364 0.003264 0.003167 0.003072 0.002980 0.002890 0.002803 0.002718 0.002635
2.8  0.002555 0.002477 0.002401 0.002327 0.002256 0.002186 0.002118 0.002052 0.001988 0.001926
2.9  0.001866 0.001807 0.001750 0.001695 0.001641 0.001589 0.001538 0.001489 0.001441 0.001395
3.0  0.001350 0.001306 0.001264 0.001223 0.001183 0.001144 0.001107 0.001070 0.001035 0.001001
3.1  0.000968 0.000935 0.000904 0.000874 0.000845 0.000816 0.000789 0.000762 0.000736 0.000711
3.2  0.000687 0.000664 0.000641 0.000619 0.000598 0.000577 0.000557 0.000538 0.000519 0.000501
3.3  0.000483 0.000466 0.000450 0.000434 0.000419 0.000404 0.000390 0.000376 0.000362 0.000349
3.4  0.000337 0.000325 0.000313 0.000302 0.000291 0.000280 0.000270 0.000260 0.000251 0.000242
3.5  0.000233 0.000224 0.000216 0.000208 0.000200 0.000193 0.000185 0.000178 0.000172 0.000165
3.6  0.000159 0.000153 0.000147 0.000142 0.000136 0.000131 0.000126 0.000121 0.000117 0.000112
3.7  0.000108 0.000104 0.000100 0.000096 0.000092 0.000088 0.000085 0.000082 0.000078 0.000075
3.8  0.000072 0.000069 0.000067 0.000064 0.000062 0.000059 0.000057 0.000054 0.000052 0.000050
3.9  0.000048 0.000046 0.000044 0.000042 0.000041 0.000039 0.000037 0.000036 0.000034 0.000033
4.0  0.000032 0.000030 0.000029 0.000028 0.000027 0.000026 0.000025 0.000024 0.000023 0.000022
4.1  0.000021 0.000020 0.000019 0.000018 0.000017 0.000017 0.000016 0.000015 0.000015 0.000014
4.2  0.000013 0.000013 0.000012 0.000012 0.000011 0.000011 0.000010 0.000010 0.000009 0.000009
4.3  0.000009 0.000008 0.000008 0.000007 0.000007 0.000007 0.000007 0.000006 0.000006 0.000006
4.4  0.000005 0.000005 0.000005 0.000005 0.000004 0.000004 0.000004 0.000004 0.000004 0.000004
4.5  0.000003 0.000003 0.000003 0.000003 0.000003 0.000003 0.000003 0.000002 0.000002 0.000002
4.6  0.000002 0.000002 0.000002 0.000002 0.000002 0.000002 0.000002 0.000002 0.000001 0.000001
4.7  0.000001 0.000001 0.000001 0.000001 0.000001 0.000001 0.000001 0.000001 0.000001 0.000001
4.8  0.000001 0.000001 0.000001 0.000001 0.000001 0.000001 0.000001 0.000001 0.000001 0.000001
4.9  0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000

Table of Probabilities for Student's t-Distribution

df    0.600  0.700  0.800  0.900  0.950  0.975   0.990   0.995
1     0.325  0.727  1.376  3.078  6.314  12.706  31.821  63.657
2     0.289  0.617  1.061  1.886  2.920  4.303   6.965   9.925
3     0.277  0.584  0.978  1.638  2.353  3.182   4.541   5.841
4     0.271  0.569  0.941  1.533  2.132  2.776   3.747   4.604
5     0.267  0.559  0.920  1.476  2.015  2.571   3.365   4.032
6     0.265  0.553  0.906  1.440  1.943  2.447   3.143   3.707
7     0.263  0.549  0.896  1.415  1.895  2.365   2.998   3.499
8     0.262  0.546  0.889  1.397  1.860  2.306   2.896   3.355
9     0.261  0.543  0.883  1.383  1.833  2.262   2.821   3.250
10    0.260  0.542  0.879  1.372  1.812  2.228   2.764   3.169
11    0.260  0.540  0.876  1.363  1.796  2.201   2.718   3.106
12    0.259  0.539  0.873  1.356  1.782  2.179   2.681   3.055
13    0.259  0.538  0.870  1.350  1.771  2.160   2.650   3.012
14    0.258  0.537  0.868  1.345  1.761  2.145   2.624   2.977
15    0.258  0.536  0.866  1.341  1.753  2.131   2.602   2.947
16    0.258  0.535  0.865  1.337  1.746  2.120   2.583   2.921
17    0.257  0.534  0.863  1.333  1.740  2.110   2.567   2.898
18    0.257  0.534  0.862  1.330  1.734  2.101   2.552   2.878
19    0.257  0.533  0.861  1.328  1.729  2.093   2.539   2.861
20    0.257  0.533  0.860  1.325  1.725  2.086   2.528   2.845
21    0.257  0.532  0.859  1.323  1.721  2.080   2.518   2.831
22    0.256  0.532  0.858  1.321  1.717  2.074   2.508   2.819
23    0.256  0.532  0.858  1.319  1.714  2.069   2.500   2.807
24    0.256  0.531  0.857  1.318  1.711  2.064   2.492   2.797
25    0.256  0.531  0.856  1.316  1.708  2.060   2.485   2.787
26    0.256  0.531  0.856  1.315  1.706  2.056   2.479   2.779
27    0.256  0.531  0.855  1.314  1.703  2.052   2.473   2.771
28    0.256  0.530  0.855  1.313  1.701  2.048   2.467   2.763
29    0.256  0.530  0.854  1.311  1.699  2.045   2.462   2.756
30    0.256  0.530  0.854  1.310  1.697  2.042   2.457   2.750
40    0.255  0.529  0.851  1.303  1.684  2.021   2.423   2.704
60    0.254  0.527  0.848  1.296  1.671  2.000   2.390   2.660
120   0.254  0.526  0.845  1.289  1.658  1.980   2.358   2.617

df (degrees of freedom) = number of samples – 1. Column headings are 1 – alpha (for one tail) or 1 – alpha/2 (for two tails).
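Both of the preceding tables can be reproduced in software rather than read off the page; a minimal sketch with scipy:

from scipy.stats import norm, t

# Right-tailed standard normal probability, e.g. P(Z >= 1.96):
print(f"{norm.sf(1.96):.6f}")         # 0.024998 (Z table: row 1.9, column 0.06)

# t critical value, e.g. df = 24, two-tailed alpha = 0.05 (column 0.975):
print(f"{t.ppf(0.975, df=24):.3f}")   # 2.064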

Definitions

1-Sample sign test: Tests the probability of a sample median being equal to a hypothesized value.

Accuracy: Accuracy refers to the variation between a measurement and what actually exists. It is the difference between an individual's average measurements and that of a known standard, or accepted "truth."

Alpha risk: Alpha risk is defined as the risk of accepting the alternate hypothesis when, in fact, the null hypothesis is true; in other words, stating a difference exists where actually there is none. Alpha risk is stated in terms of probability (such as 0.05 or 5%). The acceptable level of alpha risk is determined by an organization or individual and is based on the nature of the decision being made. For decisions with high consequences (such as those involving risk to human life), an alpha risk of less than 1% would be expected. If the decision involves minimal time or money, an alpha risk of 10% may be appropriate. In general, an alpha risk of 5% is considered the norm in decision making. Sometimes alpha risk is expressed as its inverse, which is confidence level; in other words, an alpha risk of 5% also could be expressed as a 95% confidence level.

Alternative hypothesis (Ha): The alternate hypothesis (Ha) is a statement that the observed difference or relationship between two populations is real and not due to chance or sampling error. The alternate hypothesis is the opposite of the null hypothesis (P < 0.05): a dependency exists between two or more factors.

Analysis of variance (ANOVA): Analysis of variance is a statistical technique for analyzing data that tests for a difference between two or more means. See the tool 1-Way ANOVA.

Anderson-Darling Normality Test: P-value < 0.05 = not normal.

Attribute Data: See discrete data.

Bar chart: A bar chart is a graphical comparison of several quantities in which the lengths of the horizontal or vertical bars represent the relative magnitude of the values.

Benchmarking: Benchmarking is an improvement tool whereby a company measures its performance or process against other companies' best practices, determines how those companies achieved their performance levels, and uses the information to improve its own performance. See the tool Benchmarking.

Beta risk: Beta risk is defined as the risk of accepting the null hypothesis when, in fact, the alternate hypothesis is true; in other words, stating no difference exists when there is an actual difference. A statistical test should be capable of detecting differences that are important to you, and beta risk is the probability (such as 0.10 or 10%) that it will not. Beta risk is determined by an organization or individual and is based on the nature of the decision being made. Beta risk depends on the magnitude of the difference between sample means and is managed by increasing test sample size. In general, a beta risk of 10% is considered acceptable in decision making.

Bias: Bias in a sample is the presence or influence of any factor that causes the population or process being sampled to appear different from what it actually is. Bias is introduced into a sample when data is collected without regard to key factors that may influence the population or process.

Blocking: Blocking neutralizes background variables that can not be eliminated by randomizing. It does so by spreading them across the experiment.

Boxplot: A box plot, also known as a box and whisker diagram, is a basic graphing tool that displays centering, spread, and distribution of a continuous data set.

CAP Includes/Excludes: CAP Includes/Excludes is a tool that can help your team define the boundaries of your project, facilitate discussion about issues related to your project scope, and challenge you to agree on what is included and excluded within the scope of your work. See the tool CAP Includes/Excludes.

CAP Stakeholder Analysis: CAP Stakeholder Analysis is a tool to identify and enlist support from stakeholders. It provides a visual means of identifying stakeholder support so that you can develop an action plan for your project. See the tool CAP Stakeholder Analysis.

Capability Analysis: Capability analysis is a Minitab tool that visually compares actual process performance to the performance standards. See the tool Capability Analysis.

Cause: A factor (X) that has an impact on a response variable (Y); a source of variation in a process or product.

Cause and Effect Diagram: A cause and effect diagram is a visual tool used to logically organize possible causes for a specific problem or effect by graphically displaying them in increasing detail. It helps to identify root causes and ensures common understanding of the causes that lead to the problem. Because of its fishbone shape, it is sometimes called a "fishbone diagram." See the tool Cause and Effect Diagram.

Center: The center of a process is the average value of its data. It is equivalent to the mean and is one measure of the central tendency.

Center points: A center point is a run performed with all factors set halfway between their low and high levels. Each factor must be continuous to have a logical halfway point. For example, there are no logical center points for the factors vendor, machine, or location (such as city); however, there are logical center points for the factors temperature, speed, and length.

Central Limit Theorem: The central limit theorem states that, given a distribution with a mean m and variance s2, the sampling distribution of the mean approaches a normal distribution with a mean m and variance s2/N as N, the sample size, increases.

Characteristic: A characteristic is a definable or measurable feature of a process, product, or variable.

Chi Square test: A chi square test, also called "test of association," is a statistical test of association between discrete variables. It is based on a mathematical comparison of the number of observed counts with the number of expected counts to determine if there is a difference in output counts based on the input category. See the tool Chi Square-Test of Independence. Used with defects data (counts) and defectives data (how many good or bad). Critical Chi-Square is the Chi-squared value where p = .05.

Common cause variability: Common cause variability is a source of variation caused by unknown factors that result in a steady but random distribution of output around the average of the data. Common cause variation is a measure of the process's potential, or how well the process can perform when special cause variation is removed; therefore, it is a measure of the process technology. Common cause variation is also called random variation, noise, noncontrollable variation, within-group variation, or inherent variation. Example: many X's with a small impact.

Confidence band (or interval): Measurement of the certainty of the shape of the fitted regression line. A 95% confidence band implies a 95% chance that the true regression line fits within the confidence bands. A measurement of certainty.

Confounding: Factors or interactions are said to be confounded when the effect of one factor is combined with that of another; in other words, their effects can not be analyzed independently.

Consumers Risk: Concluding something is bad when it is actually good (Type II error).

Continuous Data: Continuous data is information that can be measured on a continuum or scale. Continuous data can have almost any numeric value and can be meaningfully subdivided into finer and finer increments, depending upon the precision of the measurement system. Examples of continuous data include measurements of time, temperature, weight, and size. For example, time can be measured in days, hours, minutes, seconds, and in even smaller units. Continuous data is also called quantitative data.

Control limits: Control limits define the area three standard deviations on either side of the centerline, or mean, of data plotted on a control chart. Do not confuse control limits with specification limits. Control limits reflect the expected variation in the data and are based on the distribution of the data points; Minitab calculates control limits using collected data. Specification limits are established based on customer or regulatory requirements, and change only if the customer or regulatory body so requests.

Correlation: Correlation is the degree or extent of the relationship between two variables. If the value of one variable increases when the value of the other increases, they are said to be positively correlated. If the value of one variable decreases when the value of the other increases, they are said to be negatively correlated. The degree of linear association between two variables is quantified by the correlation coefficient.

Correlation coefficient (r): The correlation coefficient quantifies the degree of linear association between two variables. It is typically denoted by r and will have a value ranging between negative 1 and positive 1.

Critical element: A critical element is an X that does not necessarily have different levels of a specific scale but can be configured according to a variety of independent alternatives. For example, a critical element may be the routing path for an incoming call or an item request form in an order-taking process. In these cases the critical element must be specified correctly before you can create a viable solution; however, numerous alternatives may be considered as possible solutions.

CTQ: CTQs (Critical to Quality) are the key measurable characteristics of a product or process whose performance standards, or specification limits, must be met in order to satisfy the customer. They align improvement or design efforts with critical issues that affect customer satisfaction. CTQs are defined early in any Six Sigma project, based on Voice of the Customer (VOC) data.

Cycle time: Cycle time is the total time from the beginning to the end of your process, as defined by you and your customer. Cycle time includes process time, during which a unit is acted upon to bring it closer to an output, and delay time, during which a unit of work waits to be processed.

Dashboard: A dashboard is a tool used for collecting and reporting information about vital customer requirements and your business's performance for key customers. Dashboards provide a quick summary of process performance.

Data: Data is factual information used as a basis for reasoning, discussion, or calculation; often this term refers to quantitative information.

Defect: A defect is any nonconformity in a product or process; it is any event that does not meet the performance standards of a Y.

Defective: The word defective describes an entire unit that fails to meet acceptance criteria, regardless of the number of defects within the unit. A unit may be defective because of one or more defects.

Descriptive statistics: Descriptive statistics is a method of statistical analysis of numeric data, discrete or continuous, that provides information about centering, spread, and normality. Results of the analysis can be in tabular or graphic format.

Design Risk Assessment: A design risk assessment is the act of determining potential risk in a design process, either in a concept design or a detailed design. It provides a broader evaluation of your design beyond just CTQs, and will enable you to eliminate possible failures and reduce the impact of potential failures. This ensures a rigorous, systematic examination of the reliability of the design and allows you to capture system-level risk.

Detectable Effect Size: When you are deciding what factors and interactions you want to get information about, you also need to determine the smallest effect you will consider significant enough to improve your process. This minimum size is known as the detectable effect size, or DES. Large effects are easier to detect than small effects. A design of experiment compares the total variability in the experiment to the variation caused by a factor. The smaller the effect you are interested in, the more runs you will need to overcome the variability in your experimentation.

DF (degrees of freedom): Equal to (#rows – 1)(#cols – 1).

Discrete Data: Discrete data is information that can be categorized into a classification. Discrete data is based on counts: only a finite number of values is possible, and the values cannot be subdivided meaningfully. For example, the number of parts damaged in shipment produces discrete data because parts are either damaged or not damaged.

Distribution: Distribution refers to the behavior of a process described by plotting the number of times a variable displays a specific value or range of values rather than by plotting the value itself.

DMADV: DMADV is GE Company's data-driven quality strategy for designing products and processes, and it is an integral part of GE's Six Sigma Quality Initiative. DMADV consists of five interconnected phases: Define, Measure, Analyze, Design, and Verify.

DMAIC: DMAIC refers to General Electric's data-driven quality strategy for improving processes, and is an integral part of the company's Six Sigma Quality Initiative. DMAIC is an acronym for five interconnected phases: Define, Measure, Analyze, Improve, and Control.

DOE: A design of experiment is a structured, organized method for determining the relationship between factors (Xs) affecting a process and the output of that process.

DPMO: Defects per million opportunities (DPMO) is the number of defects observed during a standard production run divided by the number of opportunities to make a defect during that run, multiplied by one million.

DPO: Defects per opportunity (DPO) represents total defects divided by total opportunities. DPO is a preliminary calculation to help you calculate DPMO (defects per million opportunities); multiply DPO by one million to calculate DPMO.

DPU: Defects per unit (DPU) represents the number of defects divided by the number of products.

Dunnett's (1-way ANOVA): Check to obtain a two-sided confidence interval for the difference between each treatment mean and a control mean. Specify a family error rate between 0.5 and 0.001; values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05.

Effect: An effect is that which is produced by a cause; the impact a factor (X) has on a response variable (Y).

Entitlement: As good as a process can get without capital investment.

Error: Error, also called residual error, refers to variation in observations made under identical test conditions, or the amount of variation that can not be attributed to the variables included in the experiment.

Error (type I): Error that concludes that someone is guilty when, in fact, they really are not (H0 true, but rejected in favor of Ha). ALPHA.

Error (type II): Error that concludes that someone is not guilty when, in fact, they really are (Ha true, but H0 was concluded). BETA.

Factor: A factor is an independent variable; an X.

Failure Mode and Effect Analysis: Failure mode and effects analysis (FMEA) is a disciplined approach used to identify possible failures of a product or service and then determine the frequency and impact of the failure. See the tool Failure Mode and Effects Analysis.

Fisher's (1-way ANOVA): Check to obtain confidence intervals for all pairwise differences between level means using Fisher's LSD procedure. Specify an individual rate between 0.5 and 0.001; values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05.

Fits: Predicted values of "Y" calculated using the regression equation for each value of "X".

Fitted value: A fitted value is the Y output value that is predicted by a regression equation.

Fractional factorial DOE: A fractional factorial design of experiment (DOE) includes selected combinations of factors and levels. It is a carefully prescribed and representative subset of a full factorial design. A fractional factorial DOE is useful when the number of potential factors is relatively large, because it reduces the total number of runs required. By reducing the number of runs, a fractional factorial DOE will not be able to evaluate the impact of some of the factors independently. In general, higher-order interactions are confounded with main effects or lower-order interactions. Because higher-order interactions are rare, usually you can assume that their effect is minimal and that the observed effect is caused by the main effect or lower-level interaction.

Frequency plot: A frequency plot is a graphical display of how often data values occur.

Full factorial DOE: A full factorial design of experiment (DOE) measures the response of every possible combination of factors and factor levels. These responses are analyzed to provide information about every main effect and every interaction effect. A full factorial DOE is practical when fewer than five factors are being investigated; testing all combinations of factor levels becomes too expensive and time-consuming with five or more factors.

F-value (ANOVA): Measurement of distance between individual distributions. As F goes up, P goes down (i.e., more confidence in there being a difference between two means). To calculate: (

1-Way ANOVA

. Anderson-Darling Normality Test P-value < 0.05 = not normal

. Attribute

Data see discrete data Bar chart A bar chart is a graphical comparison of several quantities in which the lengths of the horizontal or vertical bars represent the relative magnitude of the values. Benchmarking Benchmarking is an improvement tool whereby a company measures its performance or process against other companies’ best practices, determines how those companies achieved their performance levels, and uses the information to improve its own performance. See the tool Benchmarking. Beta risk Beta risk is defined as the risk of accepting the null hypothesis when, in fact, the alternate hypothesis is true. In other words, stating no difference exists when there is an actual difference. A statistical test should be capable of detecting differences that are important to you, and beta risk is the probability (such as 0.10 or 10%) that it will not. Beta risk is determined by an organization or individual and is based on the nature of the decision being made. Beta risk depends on the magnitude of the difference between sample means and is managed by increasing test sample size. In general, a beta risk of 10% is considered acceptable in decision making. Bias Bias in a sample is the presence or influence of any factor that causes the population or process being sampled to appear different from what it actually is. Bias is introduced into a sample when data is collected without regard to key factors that may influence the population or process. Blocking Blocking neutralizes background variables that can not be eliminated by randomizing. It does so by spreading them across the experiment Boxplot A box plot, also known as a box and whisker diagram, is a basic graphing tool that displays centering, spread, and distribution of a continuous data set CAP Includes/Excludes CAP Includes/Excludes is a tool that can help your team define the boundaries of your project, facilitate discussion about issues related to your project scope, and challenge you to agree on what is included and excluded within the scope of your work. See the tool CAP Includes/Excludes. CAP Stakeholder Analysis CAP Stakeholder Analysis is a tool to identify and enlist support from stakeholders. It provides a visual means of identifying stakeholder support so that you can develop an action plan for your project. See the tool CAP Stakeholder Analysis.

Capability Analysis Capability analysis is a

Minitab

TM tool that visually compares actual process performance to the performance standards. See the tool Capability Analysis.

Cause

A factor (X) that has an impact on a response variable (Y); a source of variation in a process or product. Cause and Effect Diagram A cause and effect diagram is a visual tool used to logically organize possible causes for a specific problem or effect by graphically displaying them in increasing detail. It helps to identify root causes and ensures common understanding of the causes that lead to the problem. Because of its fishbone shape, it is sometimes called a “fishbone diagram.” See the tool Cause and Effect Diagram. Center The center of a process is the average value of its data. It is equivalent to the mean and is one measure of the central tendency. Center points A center point is a run performed with all factors set halfway between their low and high levels. Each factor must be continuous to have a logical halfway point. For example, there are no logical center points for the factors vendor, machine, or location (such as city); however, there are logical center points for the factors temperature, speed, and length. Central Limit Theorem The central limit theorem states that given a distribution with a mean m and variance s2, the sampling distribution of the mean appraches a normal distribution with a mean and variance/N as N, the sample size, increases Characteristic A characteristic is a definable or measurable feature of a process, product, or variable. Chi Square test A chi square test, also called “test of association,” is a statistical test of association between discrete variables. It is based on a mathematical comparison of the number of observed counts with the number of expected counts to determine if there is a difference in output counts based on the input category. See the tool Chi Square-Test of Independence. Used with Defects data (counts) & defectives data (how many good or bad). Critical Chi-Square is Chi-squared value where p=.05. 3.096 Common cause variability Common cause variability is a source of variation caused by unknown factors that result in a steady but random distribution of output around the average of the data. Common cause variation is a measure of the process’s potential, or how well the process can perform when special cause variation is removed. Therefore, it is a measure of the process technology. Common cause variation is also called random variation, noise, noncontrollable variation, within-group variation, or inherent variation.

Example

: many X’s with a small impact. Step 12 p.103 Confidence band (or interval) Measurement of the certainty of the shape of the fitted regression line. A 95% confidence band implies a 95% chance that the true regression line fits within the confidence bands. Measurement of certainty. Confounding Factors or interactions are said to be confounded when the effect of one factor is combined with that of another. In other words, their effects can not be analyzed independently. Consumers Risk Concluding something is bad when it is actually good (TYPE II Error) Continuous Data Continuous data

is information that can be measured on a continuum or scale. Continuous data can have almost any numeric value and can be meaningfully subdivided into finer and finer increments, depending upon the precision of the measurement system. Examples of continuous data include measurements of time, temperature, weight, and size. For example, time can be measured in days, hours, minutes, seconds, and in even smaller units. Continuous data is also called quantitative data. Control limits Control limits define the area three standard deviations on either side of the centerline, or mean, of data plotted on a control chart. Do not confuse control limits with specification limits. Control limits reflect the expected variation in the data and are based on the distribution of the data points. Minitab™ calculates control limits using collected data. Specification limits are established based on customer or regulatory requirements. Specification limits change only if the customer or regulatory body so requests. Correlation

Correlation is the degree or extent of the relationship between two variables. If the value of one variable increases when the value of the other increases, they are said to be positively correlated. If the value of one variable decreases when the value of the other decreases, they are said to be negatively correlated. The degree of linear association between two variables is quantified by the correlation coefficient Correlation coefficient (r) The correlation coefficient quantifies the degree of linear association between two variables. It is typically denoted by r and will have a value ranging between negative 1 and positive 1. Critical element A critical element is an X that does not necessarily have different levels of a specific scale but can be configured according to a variety of independent alternatives. For example, a critical element may be the routing path for an incoming call or an item request form in an order-taking process. In these cases the critical element must be specified correctly before you can create a viable solution; however, numerous alternatives may be considered as possible solutions. CTQ

CTQs (stands for Critical to Quality) are the key measurable characteristics of a product or process whose performance standards, or specification limits, must be met in order to satisfy the customer. They align improvement or design efforts with critical issues that affect customer satisfaction. CTQs are defined early in any Six Sigma project, based on

Voice of the Customer

(VOC) data. Cycle time Cycle time is the total time from the beginning to the end of your process, as defined by you and your customer. Cycle time includes process time, during which a unit is acted upon to bring it closer to an output, and delay time, during which a unit of work waits to be processed. Dashboard A dashboard is a tool used for collecting and reporting information about vital customer requirements and your business’s performance for key customers. Dashboards provide a quick summary of process performance. Data

Data is factual information used as a basis for reasoning, discussion, or calculation; often this term refers to quantitative information Defect

A defect is any nonconformity in a product or process; it is any event that does not meet the performance standards of a Y. Defective

The word defective describes an entire unit that fails to meet acceptance criteria, regardless of the number of defects within the unit. A unit may be defective because of one or more defects. Descriptive statistics Descriptive statistics is a method of statistical analysis of numeric data, discrete or continuous, that provides information about centering, spread, and normality. Results of the analysis can be in tabular or graphic format. Design

Risk Assessment A design risk assessment is the act of determining potential risk in a design process, either in a concept design or a detailed design. It provides a broader evaluation of your design beyond just CTQs, and will enable you to eliminate possible failures and reduce the impact of potential failures. This ensures a rigorous, systematic examination in the reliability of the design and allows you to capture system-level risk Detectable Effect Size When you are deciding what factors and interactions you want to get information about, you also need to determine the smallest effect you will consider significant enough to improve your process. This minimum size is known as the detectable effect size, or DES. Large effects are easier to detect than small effects. A design of experiment compares the total variability in the experiment to the variation caused by a factor. The smaller the effect you are interested in, the more runs you will need to overcome the variability in your experimentation. DF (degrees of freedom) Equal to: (#rows – 1)(#cols – 1) Discrete Data Discrete data

is information that can be categorized into a classification. Discrete data is based on counts. Only a finite number of values is possible, and the values cannot be subdivided meaningfully. For example, the number of parts damaged in shipment produces discrete data because parts are either damaged or not damaged. Distribution

Distribution: Distribution refers to the behavior of a process described by plotting the number of times a variable displays a specific value or range of values rather than by plotting the value itself.
DMADV: DMADV is GE's data-driven quality strategy for designing products and processes, and it is an integral part of GE's Six Sigma Quality Initiative. DMADV consists of five interconnected phases: Define, Measure, Analyze, Design, and Verify.
DMAIC: DMAIC refers to GE's data-driven quality strategy for improving processes, and is an integral part of the company's Six Sigma Quality Initiative. DMAIC is an acronym for five interconnected phases: Define, Measure, Analyze, Improve, and Control.
DOE (design of experiment): A design of experiment is a structured, organized method for determining the relationship between factors (Xs) affecting a process and the output of that process.

DPMO: Defects per million opportunities (DPMO) is the number of defects observed during a standard production run divided by the number of opportunities to make a defect during that run, multiplied by one million.
DPO: Defects per opportunity (DPO) represents total defects divided by total opportunities. DPO is a preliminary calculation toward DPMO; multiply DPO by one million to calculate DPMO.
DPU: Defects per unit (DPU) represents the number of defects divided by the number of products.
Dunnett's (1-way ANOVA): Check to obtain a two-sided confidence interval for the difference between each treatment mean and a control mean. Specify a family error rate between 0.5 and 0.001; values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05.
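To make the DPU/DPO/DPMO arithmetic concrete, a minimal sketch with made-up counts:

    # Hypothetical counts from a production run
    defects = 12
    units = 400
    opportunities_per_unit = 5
    dpu = defects / units                               # defects per unit
    dpo = defects / (units * opportunities_per_unit)    # defects per opportunity
    dpmo = dpo * 1_000_000                              # defects per million opportunities
    print(dpu, dpo, dpmo)  # 0.03 0.006 6000.0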

Effect: An effect is that which is produced by a cause; the impact a factor (X) has on a response variable (Y).
Entitlement: As good as a process can get without capital investment.
Error: Error, also called residual error, refers to variation in observations made under identical test conditions, or the amount of variation that cannot be attributed to the variables included in the experiment.
Error (Type I): Concluding that someone is guilty when, in fact, they are not (H0 is true, but it was rejected in favor of Ha). Also known as alpha risk.
Error (Type II): Concluding that someone is not guilty when, in fact, they are (Ha is true, but H0 was concluded). Also known as beta risk.

Factor: A factor is an independent variable; an X.
Failure mode and effects analysis (FMEA): FMEA is a disciplined approach used to identify possible failures of a product or service and then determine the frequency and impact of the failure. See the tool Failure Mode and Effects Analysis.
Fisher's (1-way ANOVA): Check to obtain confidence intervals for all pairwise differences between level means using Fisher's LSD procedure. Specify an individual error rate between 0.5 and 0.001; values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05.
Fits: Predicted values of Y calculated using the regression equation for each value of X.
Fitted value: A fitted value is the Y output value that is predicted by a regression equation.
Fractional factorial DOE: A fractional factorial design of experiment (DOE) includes selected combinations of factors and levels. It is a carefully prescribed and representative subset of a full factorial design. A fractional factorial DOE is useful when the number of potential factors is relatively large, because it reduces the total number of runs required. By reducing the number of runs, a fractional factorial DOE will not be able to evaluate the impact of some of the factors independently. In general, higher-order interactions are confounded with main effects or lower-order interactions. Because higher-order interactions are rare, usually you can assume that their effect is minimal and that the observed effect is caused by the main effect or lower-order interaction.
Frequency plot: A frequency plot is a graphical display of how often data values occur.
Full factorial DOE: A full factorial design of experiment (DOE) measures the response of every possible combination of factors and factor levels. These responses are analyzed to provide information about every main effect and every interaction effect. A full factorial DOE is practical when fewer than five factors are being investigated; testing all combinations of factor levels becomes too expensive and time-consuming with five or more factors.
F-value (ANOVA): Measurement of distance between individual distributions. As F goes up, P goes down (i.e., more confidence in there being a difference between two means). Calculated as Mean Square of X / Mean Square of Error.
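To make the run-count trade-off between full and fractional factorials concrete, here is a sketch (assumed example, not from the course) that enumerates a two-level full factorial design; a fractional factorial would run only a prescribed subset of these rows:

    # Two-level full factorial design: every combination of factor levels
    from itertools import product
    factors = {"temperature": [-1, 1], "pressure": [-1, 1], "time": [-1, 1]}
    runs = list(product(*factors.values()))
    print(len(runs))   # 2**3 = 8 runs; k two-level factors need 2**k runs
    for run in runs:
        print(dict(zip(factors.keys(), run)))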

Gage R&R: Gage R&R, which stands for gage repeatability and reproducibility, is a statistical tool that measures the amount of variation in the measurement system arising from the measurement device and the people taking the measurement. See Gage R&R tools.
Gantt chart: A Gantt chart is a visual project planning device used for production scheduling. A Gantt chart graphically displays the time needed to complete tasks.
Goodman-Kruskal Gamma: Term used to describe the percentage of variation explained by X.
GRPI: GRPI stands for four critical and interrelated aspects of teamwork: goals, roles, processes, and interpersonal relationships, and it is a tool used to assess them. See the tool GRPI.
Histogram: A histogram is a basic graphing tool that displays the relative frequency or occurrence of continuous data values, showing which values occur most and least frequently. A histogram illustrates the shape, centering, and spread of data distribution and indicates whether there are any outliers. See the tool Histogram.
Homogeneity of variance: Homogeneity of variance is a test used to determine if the variances of two or more samples are different. See the tool Homogeneity of Variance.

Hypothesis testing: Hypothesis testing refers to the process of using statistical analysis to determine if the observed differences between two or more samples are due to random chance (as stated in the null hypothesis) or to true differences in the samples (as stated in the alternate hypothesis). A null hypothesis (H0) is a stated assumption that there is no difference in parameters (mean, variance, DPMO) for two or more populations. The alternate hypothesis (Ha) is a statement that the observed difference or relationship between two populations is real and not the result of chance or an error in sampling. Hypothesis testing is the process of using a variety of statistical tools to analyze data and, ultimately, to accept or reject the null hypothesis. From a practical point of view, finding statistical evidence that the null hypothesis is false allows you to reject the null hypothesis and accept the alternate hypothesis.
I-MR chart: An I-MR chart, or individual and moving range chart, is a graphical tool that displays process variation over time. It signals when a process may be going out of control and shows where to look for sources of special cause variation. See the tool I-MR Control.
In control: In control refers to a process unaffected by special causes; a process that is in control is affected only by common causes. A process that is out of control is affected by special causes in addition to the common causes affecting the mean and/or variance of the process.
Independent variable: An independent variable is an input or process variable (X) that can be set directly to achieve a desired output.
Intangible benefits: Intangible benefits, also called soft benefits, are the gains attributable to your improvement project that are not reportable for formal accounting purposes. These benefits are not included in the financial calculations because they are nonmonetary or are difficult to attribute directly to quality. Examples of intangible benefits include cost avoidance, customer satisfaction and retention, and increased employee morale.
Interaction: An interaction occurs when the response achieved by one factor depends on the level of the other factor. On an interaction plot, when the lines are not parallel, there is an interaction.
Interrelationship digraph: An interrelationship digraph is a visual display that maps out the cause and effect links among complex, multivariable problems or desired outcomes.
IQR: Interquartile range (from a box plot), representing the range between the 25th and 75th percentiles.
Kano analysis: Kano analysis is a quality measurement used to prioritize customer requirements.
Kruskal-Wallis: Kruskal-Wallis performs a hypothesis test of the equality of population medians for a one-way design (two or more populations). This test is a generalization of the procedure used by the Mann-Whitney test and, like Mood's median test, offers a nonparametric alternative to the one-way analysis of variance. The Kruskal-Wallis test looks for differences among the population medians. It is more powerful than Mood's median test (the confidence interval is narrower, on average) for analyzing data from many distributions, including the normal distribution, but is less robust against outliers.
Kurtosis: Kurtosis is a measure of how peaked or flat a curve's distribution is.
L1 spreadsheet: An L1 spreadsheet calculates defects per million opportunities (DPMO) and a process Z value for discrete data.
L2 spreadsheet: An L2 spreadsheet calculates the short-term and long-term Z values for continuous data sets.
Leptokurtic distribution: A leptokurtic distribution is symmetrical in shape, similar to a normal distribution, but the center peak is much higher; that is, there is a higher frequency of values near the mean. In addition, a leptokurtic distribution has a higher frequency of data in the tail area.
Levels: Levels are the different settings a factor can have. For example, if you are trying to determine how the response (speed of data transmittal) is affected by the factor (connection type), you would need to set the factor at different levels (modem and LAN) and then measure the change in response.
Linearity: Linearity is the variation between a known standard, or "truth," across the low and high end of the gage. It is the difference between an individual's measurements and that of a known standard or truth over the full range of expected values.
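The nonparametric tests above can be illustrated with a short sketch (hypothetical samples; assumes SciPy is available):

    # Nonparametric comparisons of medians with SciPy
    from scipy import stats
    a = [32, 45, 38, 50, 41]
    b = [55, 60, 48, 62, 58]
    c = [40, 44, 39, 47, 43]
    u_stat, p_mw = stats.mannwhitneyu(a, b, alternative="two-sided")  # two medians
    h_stat, p_kw = stats.kruskal(a, b, c)                             # three or more medians
    print(p_mw, p_kw)  # p < 0.05 suggests at least one median differs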

LSL: A lower specification limit, also known as a lower spec limit or LSL, is a value above which performance of a product or process is acceptable.
Lurking variable: A lurking variable is an unknown, uncontrolled variable that influences the output of an experiment.
Main effect: A main effect is a measurement of the average change in the output when a factor is changed from its low level to its high level. It is calculated as the average output when the factor is at its high level minus the average output when the factor is at its low level.
Mallows statistic (Cp): A statistic within Regression > Best Subsets which is used as a measure of bias (i.e., when the predicted value differs from truth). Should equal (#vars + 1).

Mann-Whitney: Mann-Whitney performs a hypothesis test of the equality of two population medians and calculates the corresponding point estimate and confidence interval. Use this test as a nonparametric alternative to the two-sample t-test.
Mean: The mean is the average data point value within a data set. To calculate the mean, add all of the individual data points and then divide that figure by the total number of data points.
Measurement system analysis: Measurement system analysis is a mathematical method of determining how much the variation within the measurement process contributes to overall process variability.
Median: The median is the middle point of a data set; 50% of the values are below this point, and 50% are above this point.

Mode: The mode is the most often occurring value in the data set.
Mood's median test: Mood's median test can be used to test the equality of medians from two or more populations and, like the Kruskal-Wallis test, provides a nonparametric alternative to the one-way analysis of variance. Mood's median test is sometimes called a median test or sign scores test. It tests H0: the population medians are all equal, versus H1: the medians are not all equal. An assumption of Mood's median test is that the data from each population are independent random samples and the population distributions have the same shape. Mood's median test is robust against outliers and errors in data and is particularly appropriate in the preliminary stages of analysis. It is more robust than the Kruskal-Wallis test against outliers, but is less powerful for data from many distributions, including the normal.
Multicollinearity: Multicollinearity is the degree of correlation between Xs. It is an important consideration when using multiple regression on data that has been collected without the aid of a design of experiment (DOE). A high degree of multicollinearity may lead to regression coefficients that are too large or are headed in the wrong direction from what you had expected based on your knowledge of the process. High correlations between Xs also may result in a large p-value for an X that changes when the intercorrelated X is dropped from the equation. The variance inflation factor provides a measure of the degree of multicollinearity.
Multiple regression: Multiple regression is a method of determining the relationship between a continuous process output (Y) and several factors (Xs).
Multi-vari chart: A multi-vari chart is a tool that graphically displays patterns of variation. It is used to identify possible Xs or families of variation, such as variation within a subgroup, between subgroups, or over time. See the tool Multi-Vari Chart.
Noise: Process input that consistently causes variation in the output measurement that is random and expected and, therefore, not controlled is called noise. Noise also is referred to as white noise, random variation, common cause variation, noncontrollable variation, and within-group variation.
Nominal: Nominal refers to the value that you estimate in a design process that approximates your real CTQ (Y) target value based on the design element capacity. Nominals are usually referred to as point estimates and are related to the y-hat model.
Nonparametric: A set of tools that avoids assuming a particular distribution.
Normal distribution: Normal distribution is the spread of information (such as product performance or demographics) where the most frequently occurring value is in the middle of the range and other probabilities tail off symmetrically in both directions. Normal distribution is graphically characterized by a bell-shaped curve, also known as a Gaussian distribution. For normally distributed data, the mean and median are very close and may be identical.
Normal probability plot: Used to check whether observations follow a normal distribution; P > 0.05 indicates the data is normal.

Normality test: A normality test is a statistical process used to determine if a sample or any group of data fits a standard normal distribution. A normality test can be performed mathematically or graphically. See the tool Normality Test.
Null hypothesis (H0): A null hypothesis (H0) is a stated assumption that there is no difference in parameters (mean, variance, DPMO) for two or more populations. According to the null hypothesis, any observed difference in samples is due to chance or sampling error. It is written mathematically as H0: μ1 = μ2 or H0: σ1 = σ2. It defines what you expect to observe (e.g., all means are the same, or variables are independent). (P > 0.05)
Opportunity: An opportunity is anything that you inspect, measure, or test on a unit that provides a chance of allowing a defect.
Outlier: An outlier is a data point that is located far from the rest of the data. Given a mean and standard deviation, a statistical distribution expects data points to fall within a specific range. Those that do not are called outliers and should be investigated to ensure that the data is correct. If the data is correct, you have witnessed a rare event or your process has changed. In either case, you need to understand what caused the outliers to occur.
Percent of tolerance: Percent of tolerance is calculated by taking the measurement error of interest, such as repeatability and/or reproducibility, dividing by the total tolerance range, and then multiplying the result by 100 to express it as a percentage.
Platykurtic distribution: A platykurtic distribution is one in which most of the values share about the same frequency of occurrence. As a result, the curve is very flat, or plateau-like. Uniform distributions are platykurtic.
Pooled standard deviation: Pooled standard deviation is the standard deviation remaining after removing the effect of special cause variation, such as geographic location or time of year. It is the average variation of your subgroups.
Prediction band (or interval): A measurement of the certainty of the scatter about a certain regression line. A 95% prediction band indicates that, in general, 95% of the points will be contained within the bands.
Probability: Probability refers to the chance of something happening, or the fraction of occurrences over a large number of trials. Probability can range from 0 (no chance) to 1 (full certainty).
Probability of defect: Probability of defect is the statistical chance that a product or process will not meet performance specifications or lie within the defined upper and lower specification limits. It is the ratio of expected defects to the total output and is expressed as p(d). Process capability can be determined from the probability of defect.
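A minimal sketch of a mathematical normality test (hypothetical sample; assumes SciPy; the Shapiro-Wilk test is one of several options):

    # Normality check on a hypothetical sample
    from scipy import stats
    data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9]
    w_stat, p = stats.shapiro(data)
    print(p)  # p > 0.05: no evidence against normality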

Process capability: Process capability refers to the ability of a process to produce a defect-free product or service. Various indicators are used; some address overall performance, some address potential performance.
Producer's risk: Concluding something is bad when it is actually good (Type I error).
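The probability-of-defect and Z ideas above can be sketched for a normally distributed process (hypothetical mean, standard deviation, and spec limits; assumes SciPy):

    # Probability of defect p(d) and Z bench for a normal process
    from scipy.stats import norm
    mean, sd = 25.0, 2.0      # process mean and standard deviation
    usl, lsl = 30.0, 18.0     # upper and lower specification limits
    p_upper = 1 - norm.cdf(usl, mean, sd)   # p(d) above the USL
    p_lower = norm.cdf(lsl, mean, sd)       # p(d) below the LSL
    p_defect = p_upper + p_lower            # total probability of defect
    z_bench = norm.ppf(1 - p_defect)        # Z corresponding to total p(d)
    print(p_defect, z_bench)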

p-value: The p-value represents the probability of concluding (incorrectly) that there is a difference in your samples when no true difference exists. It is a statistic calculated by comparing the distribution of given sample data and an expected distribution (normal, F, t, etc.) and is dependent upon the statistical test being performed. For example, if two samples are being compared in a t-test, a p-value of 0.05 means that there is only a 5% chance of arriving at the calculated t value if the samples were not different (i.e., from the same population). In other words, a p-value of 0.05 means there is only a 5% chance that you would be wrong in concluding the populations are different. P < 0.05 means it is safe to conclude there is a difference; the p-value is the risk of wasting time investigating further.
Q1: 25th percentile (from a box plot).
Q3: 75th percentile (from a box plot).
Qualitative data: Discrete data.
Quality function deployment (QFD): Quality function deployment is a structured methodology used to identify customers' requirements and translate them into key process deliverables. In Six Sigma, QFD helps you focus on ways to improve your process or product to meet customers' expectations. See the tool Quality Function Deployment.
Quantitative data: Continuous data.
Radar chart: A radar chart is a graphical display of the differences between actual and ideal performance. It is useful for defining performance and identifying strengths and weaknesses.
Randomization: Running experiments in a random order, not the standard order in the test layout. Randomization helps to eliminate the effect of "lurking variables," uncontrolled factors which might vary over the length of the experiment.
Rational subgroup: A rational subgroup is a subset of data defined by a specific factor such as a stratifying factor or a time period. Rational subgrouping identifies and separates special cause variation (variation between subgroups caused by specific, identifiable factors) from common cause variation (unexplained, random variation caused by factors that cannot be pinpointed or controlled). A rational subgroup should exhibit only common cause variation.
Regression analysis: Regression analysis is a method of analysis that enables you to quantify the relationship between two or more variables (Xs and Y) by fitting a line or plane through all the points such that they are evenly distributed about the line or plane. Visually, the best-fit line is represented on a scatter plot by a line or plane. Mathematically, the line or plane is represented by a formula that is referred to as the regression equation. The regression equation is used to model process performance (Y) based on a given value or values of the process variable (X).
Repeatability: Repeatability is the variation in measurements obtained when one person takes multiple measurements using the same techniques on the same parts or items.
Replicates: The number of times you ran each corner of the design. For example, 2 replicates means you ran one corner twice.
Replication: Replication occurs when an experimental treatment is set up and conducted more than once. If you collect two data points at each treatment, you have two replications. In general, plan on making between two and five replications for each treatment. Replicating an experiment allows you to estimate the residual or experimental error; this is the variation from sources other than the changes in factor levels. A replication is not two measurements of the same data point but a measurement of two data points under the same treatment conditions. For example, to make a replication, you would not have two persons time the response of a call from the northeast region during the night shift; instead, you would time two calls into the northeast region's help desk during the night shift.
Reproducibility: Reproducibility is the variation in average measurements obtained when two or more people measure the same parts or items using the same measuring technique.
Residual: A residual is the difference between the actual Y output value and the Y output value predicted by the regression equation. The residuals in a regression model can be analyzed to reveal inadequacies in the model. Residuals are also called errors.
Resolution: Resolution is a measure of the degree of confounding among effects. Roman numerals are used to denote resolution. The resolution of your design defines the amount of information that can be provided by the design of experiment. As with a computer screen, the higher the resolution of your design, the more detailed the information you will see. The lowest resolution you can have is resolution III.
Robust process: A robust process is one that is operating at 6 sigma and is therefore resistant to defects. Robust processes exhibit very good short-term process capability (high short-term Z values) and a small Z shift value. In a robust process, the critical elements usually have been designed to prevent or eliminate opportunities for defects; this effort ensures sustainability of the process. Continual monitoring of robust processes is not usually needed, although you may wish to set up periodic audits as a safeguard.
Rolled throughput yield: Rolled throughput yield is the probability that a single unit can pass through a series of process steps free of defects.
R-squared: A term describing how much of the total variation is explained by X. Formula: R-sq = SS(regression) / SS(total). Caution: R-sq increases as the number of Xs in the model increases.
R-squared (adjusted): Unlike R-squared, R-squared adjusted takes into account the number of Xs and the number of data points; it also answers how much of the total variation is explained by X. Formula: R-sq(adj) = 1 - [(SS(error)/DF(error)) / (SS(total)/DF(total))].

Sample: A portion or subset of units taken from the population whose characteristics are actually measured.
Sample size calculator: The sample size calculator is a spreadsheet tool used to determine the number of data points, or sample size, needed to estimate the properties of a population. See the tool Sample Size Calculator.
Sampling: Sampling is the practice of gathering a subset of the total data available from a process or a population.
Scatter plot: A scatter plot, also called a scatter diagram or a scattergram, is a basic graphic tool that illustrates the relationship between two variables. The dots on the scatter plot represent data points. See the tool Scatter Plot.
Scorecard: A scorecard is an evaluation device, usually in the form of a questionnaire, that specifies the criteria your customers will use to rate your business's performance in satisfying their requirements.
Screening DOE: A screening design of experiment (DOE) is a specific type of fractional factorial DOE. A screening design is a resolution III design, which minimizes the number of runs required in an experiment. A screening DOE is practical when you can assume that all interactions are negligible compared to main effects. Use a screening DOE when your experiment contains five or more factors. Once you have screened out the unimportant factors, you may want to perform a fractional or full factorial DOE.
Segmentation: Segmentation is a process used to divide a large group into smaller, logical categories for analysis. Some commonly segmented entities are customers, data sets, or markets.
S-hat model: An S-hat model describes the relationship between output variance and input nominals.
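A common approximation behind sample size calculators, sketched with hypothetical values (assumes SciPy; estimates the n needed to pin down a mean within a desired margin):

    # Sample size to estimate a mean within +/- delta at 95% confidence
    from scipy.stats import norm
    s = 8.0        # estimated standard deviation (hypothetical)
    delta = 2.0    # desired precision (margin of error)
    z = norm.ppf(1 - 0.05 / 2)     # 95% confidence -> z about 1.96
    n = (z * s / delta) ** 2
    print(round(n))  # round up in practice: about 62 observations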

Sigma: The Greek letter σ (sigma) refers to the standard deviation of a population. Sigma, or standard deviation, is used as a scaling factor to convert upper and lower specification limits to Z. Therefore, a process with three standard deviations between its mean and a spec limit would have a Z value of 3 and commonly would be referred to as a 3 sigma process.
Simple linear regression: Simple linear regression is a method that enables you to determine the relationship between a continuous process output (Y) and one factor (X). The relationship is typically expressed in terms of a mathematical equation such as Y = b + mX.
SIPOC: SIPOC stands for suppliers, inputs, process, output, and customers. You obtain inputs from suppliers, add value through your process, and provide an output that meets or exceeds your customer's requirements.
Skewness: Most often, the median is used as a measure of central tendency when data sets are skewed. The metric that indicates the degree of asymmetry is called, simply, skewness. Skewness often results in situations where a natural boundary is present. Normal distributions have a skewness value of approximately zero. Right-skewed distributions have a positive skewness value; left-skewed distributions have a negative skewness value. Typically, the skewness value ranges from negative 3 to positive 3. Two examples of skewed data sets are salaries within an organization and monthly prices of homes for sale in a particular area.
Span: A measure of variation for "S-shaped" fulfillment Y's.
Special cause variability: Unlike common cause variability, special cause variation is caused by known factors that result in a non-random distribution of output. Also referred to as "exceptional" or "assignable" variation. Example: a few Xs with big impact.
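The simple linear regression entry above, together with the earlier fits, residuals, and R-squared entries, can be illustrated in one short sketch (hypothetical data; assumes NumPy):

    # Fit Y = b + mX by least squares; compute fits, residuals, R-squared
    import numpy as np
    x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
    y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])
    m, b = np.polyfit(x, y, 1)          # slope and intercept
    fits = b + m * x                    # fitted (predicted) values
    residuals = y - fits                # actual minus predicted
    ss_total = ((y - y.mean()) ** 2).sum()
    ss_error = (residuals ** 2).sum()
    r_sq = 1 - ss_error / ss_total      # equivalently SS(regression)/SS(total)
    print(m, b, r_sq)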

Spread: The spread of a process represents how far data points are distributed away from the mean, or center. Standard deviation is a measure of spread.
SS process report: The Six Sigma process report is a Minitab™ tool that calculates process capability and provides visuals of process performance. See the tool Six Sigma Process Report.
SS product report: The Six Sigma product report is a Minitab™ tool that calculates the DPMO and short-term capability of your process. See the tool Six Sigma Product Report.
Stability: Stability represents variation due to elapsed time. It is the difference between an individual's measurements taken of the same parts after an extended period of time using the same techniques.
Standard deviation (s): Standard deviation is a measure of the spread of data in relation to the mean. It is the most common measure of the variability of a set of data. If the standard deviation is based on a sampling, it is referred to as "s"; if the entire data population is used, standard deviation is represented by the Greek letter σ (sigma). The standard deviation (together with the mean) is used to measure the degree to which the product or process falls within specifications; the lower the standard deviation, the more likely the product or service falls within spec. When the standard deviation is calculated in relation to the mean of all the data points, the result is an overall standard deviation. When the standard deviation is calculated in relation to the means of subgroups, the result is a pooled standard deviation. Together with the mean, both overall and pooled standard deviations can help you determine your degree of control over the product or process.
Standard order: Design of experiment (DOE) treatments often are presented in a standard order. In a standard order, the first factor alternates between the low and high setting for each treatment; the second factor alternates between low and high settings every two treatments; the third factor alternates between low and high settings every four treatments. Note that each time a factor is added, the design doubles in size to provide all combinations for each level of the new factor.
Statistic: Any number calculated from sample data; it describes a sample characteristic.
Statistical process control (SPC): Statistical process control is the application of statistical methods to analyze and control the variation of a process.
Stratification: A stratifying factor, also referred to as stratification or a stratifier, is a factor that can be used to separate data into subgroups. This is done to investigate whether that factor is a significant special cause factor.
Subgrouping: Organizing data into rational subgroups; within-subgroup variation indicates how good the process can get (its short-term capability).
Tolerance range: Tolerance range is the difference between the upper specification limit and the lower specification limit.
Total observed variation: Total observed variation is the combined variation from all sources, including the process and the measurement system.
Total probability of defect: The total probability of defect is equal to the sum of the probability of defect above the upper spec limit, p(d) upper, and the probability of defect below the lower spec limit, p(d) lower.
Transfer function: A transfer function describes the relationship between lower level requirements and higher level requirements. If it describes the relationship between the nominal values, it is called a y-hat model. If it describes the relationship between the variations, it is called an s-hat model.
Transformations: Used to make non-normal data look more normal.
Trivial many: The trivial many refers to the variables that are least likely responsible for variation in a process, product, or service.
t-test: A t-test is a statistical tool used to determine whether a significant difference exists between the means of two distributions or the mean of one distribution and a target value. See the t-test tools.
Tukey's (1-way ANOVA): Check to obtain confidence intervals for all pairwise differences between level means using Tukey's method (also called Tukey's HSD or Tukey-Kramer method). Specify a family error rate between 0.5 and 0.001; values greater than or equal to 1.0 are interpreted as percentages. The default error rate is 0.05.
Unexplained variation (S): Regression statistical output that shows the unexplained variation in the data; the standard error of the estimate, S = sqrt(SS(error) / (n - p)), where SS(error) is the sum of squared residuals (yi minus the fitted value) and p is the number of estimated coefficients.
Unit: A unit is any item that is produced or processed.
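The overall versus pooled standard deviation distinction above can be sketched with hypothetical subgroups (assumes NumPy; the simple average-of-variances form applies to equal-sized subgroups):

    # Overall vs. pooled standard deviation
    import numpy as np
    subgroups = [np.array([10.1, 10.3, 9.9]),
                 np.array([11.0, 11.2, 10.8]),
                 np.array([9.5, 9.8, 9.6])]
    overall = np.std(np.concatenate(subgroups), ddof=1)   # spread around the grand mean
    pooled = np.sqrt(np.mean([np.var(g, ddof=1) for g in subgroups]))  # average within-subgroup variation
    print(overall, pooled)  # pooled < overall when subgroup means differ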

USL: An upper specification limit, also known as an upper spec limit or USL, is a value below which performance of a product or process is acceptable.
Variation: Variation is the fluctuation in process output. It is quantified by standard deviation, a measure of the average spread of the data around the mean. Variation is sometimes called noise. Variance is the square of the standard deviation.
Variation (common cause): Common cause variation is fluctuation caused by unknown factors resulting in a steady but random distribution of output around the average of the data. It is a measure of the process potential, or how well the process can perform when special cause variation is removed; therefore, it is a measure of the process's technology. Also called inherent variation.
Variation (special cause): Special cause variation is a shift in output caused by a specific factor such as environmental conditions or process input parameters. It can be accounted for directly and potentially removed, and it is a measure of process control, or how well the process is performing compared to its potential. Also called non-random variation.
Whisker: From a box plot, whiskers display the minimum and maximum observations within 1.5 IQR (the 75th-25th percentile span) from either the 25th or 75th percentile. Outliers are points that fall outside of this 1.5 IQR range.
Yield: Yield is the percentage of a process that is free of defects.
Z: A Z value is a data point's position between the mean and another location as measured by the number of standard deviations. Z is a universal measurement because it can be applied to any unit of measure. Z is a measure of process capability and corresponds to the process sigma value that is reported by the businesses. For example, a 3 sigma process means that three standard deviations lie between the mean and the nearest specification limit; three is the Z value.
Z bench: Z bench is the Z value that corresponds to the total probability of a defect.
Z LT: Z long term (ZLT) is the Z bench calculated from the overall standard deviation and the average output of the current process. Used with continuous data, ZLT represents the overall process capability and can be used to determine the probability of making out-of-spec parts within the current process.
Z shift: Z shift is the difference between ZST and ZLT. The larger the Z shift, the more you are able to improve the control of the special factors identified in the subgroups.
Z ST: ZST represents the process capability when special factors are removed and the process is properly centered. ZST is the metric by which processes are compared.
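A short sketch tying the Z entries together (hypothetical DPMO; assumes SciPy; the 1.5 sigma shift used below is the common Six Sigma convention, not a course-specific value):

    # Converting a defect rate (DPMO) to Z values
    from scipy.stats import norm
    dpmo = 6000
    z_lt = norm.ppf(1 - dpmo / 1_000_000)   # long-term Z bench from the defect rate
    z_st = z_lt + 1.5                        # conventional 1.5 sigma shift assumption
    print(z_lt, z_st)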


Tool to Use (Tool: Data Type)

1-Way ANOVA: Continuous Y, Discrete Xs. P < .05 indicates at least one group of data is different than at least one other group.
Benchmarking: all
Binary Logistic Regression: Defectives Y / Continuous & Discrete X
CAP Includes/Excludes: all (Define)
CAP Stakeholder Analysis: all
Capability Analysis: Continuous X & Y
Cause and Effect Diagram: all
Control Charts: all
Data Collection Plan: all
Discrete Event Simulation (ProcessModel™): Continuous Y, Discrete Xs
Quick graphical comparison of two or more processes' variation or spread: Continuous Y, Discrete Xs
Failure Mode and Effects Analysis: all
GRPI: all
Histogram: Continuous Y & all X's
Homogeneity of Variance: Continuous Y, Discrete Xs
I-MR Chart: Continuous X & Y. The presence of special cause variation indicates that factors are influencing the output of your process; eliminating the influence of these factors will improve performance and bring your process into control.
Kano Analysis: all
Kruskal-Wallis Test: Continuous Y & all X's
Multi-Vari Chart: Continuous Y & all X's. A multi-vari chart graphically displays patterns of variation and is used to identify possible Xs or families of variation, such as variation within a subgroup, between subgroups, or over time.
Normality Test: cont (measurement). Allows you to determine the normality of your data; P < .05 indicates the data is not normal.
p Chart: Defectives Y / Continuous & Discrete X
Pugh Matrix: all
Quality Function Deployment: all
Risk Assessment: all
Sample Size Calculator: all
Scatter Plot: all
Simple Linear Regression: Continuous X & Y. P < .05 indicates a correlation is detected.
Simulation: all
Six Sigma Process Report: Continuous Y & all X's
Six Sigma Product Report: Continuous Y, Discrete Xs. Helps you compare the performance of your process or product to the performance standard and determine if technology or control is the problem.
Voice of the Customer: all

Tool: What does it do? Why use? When use? Data Type / P < .05 indicates
1-Sample t-Test Compares mean to target. The 1-sample t-test is useful in identifying a significant difference between a sample mean and a specified value when the difference is not readily apparent from graphical tools. Using the 1-sample t-test to compare data gathered before process improvements and after is a way to prove that the mean has actually shifted. The 1-sample t-test is used with continuous data any time you need to compare a sample mean to a specified value. This is useful when you need to make judgments about a process based on a sample output from that process. Data type: Continuous X & Y. P < .05 indicates: the mean is not equal to the target.
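A minimal sketch of the 1-sample t-test (hypothetical wait times; assumes SciPy):

    # Compare a sample mean to a specified target value
    from scipy import stats
    waits = [34, 41, 38, 45, 36, 40, 39, 42]
    target = 35
    t_stat, p = stats.ttest_1samp(waits, target)
    print(t_stat, p)  # p < 0.05: the mean differs from the 35-minute target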
1-Way ANOVA ANOVA tests to see if the difference between the means of each level is significantly more than the variation within each level. 1-way ANOVA is used when two or more means (a single factor with three or more levels) must be compared with each other. One-way ANOVA is useful for identifying a statistically significant difference between means of three or more levels of a factor. Use 1-way ANOVA when you need to compare three or more means (a single factor with three or more levels) and determine how much of the total observed variation can be explained by the factor. Data type: Continuous Y, Discrete Xs. P < .05 indicates: at least one group of data is different than at least one other group.
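A minimal 1-way ANOVA sketch (hypothetical groups; assumes SciPy):

    # Do three hypothetical staffing levels give the same mean wait time?
    from scipy import stats
    low = [44, 48, 46, 50, 47]
    med = [40, 42, 41, 43, 39]
    high = [35, 33, 36, 34, 37]
    f_stat, p = stats.f_oneway(low, med, high)
    print(f_stat, p)  # p < 0.05: at least one group mean differs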
2-Sample t-Test A statistical test used to detect differences between means of two populations. The 2-sample t-test is useful for identifying a significant difference between means of two levels (subgroups) of a factor. It is also extremely useful for identifying important Xs for a project Y. Use it when you have two samples of continuous data and you need to know if they both come from the same population or if they represent two different populations. P < .05 indicates: there is a difference in the means.
ANOVA GLM ANOVA General Linear Model (GLM) is a statistical tool used to test for differences in means. ANOVA tests to see if the difference between the means of each level is significantly more than the variation within each level. ANOVA GLM is used to test the effect of two or more factors with multiple levels, alone and in combination, on a dependent variable. The General Linear Model allows you to learn one form of ANOVA that can be used for all tests of mean differences involving two or more factors or levels. Because ANOVA GLM is useful for identifying the effect of two or more factors (independent variables) on a dependent variable, it is also extremely useful for identifying important Xs for a project Y. ANOVA GLM also yields a percent contribution that quantifies the variation in the response (dependent variable) due to the individual factors and combinations of factors. You can use ANOVA GLM any time you need to identify a statistically significant difference in the mean of the dependent variable due to two or more factors with multiple levels, alone and in combination. ANOVA GLM also can be used to quantify the amount of variation in the response that can be attributed to a specific factor in a designed experiment. Continuous Y & all X’s
Benchmarking is an improvement tool whereby a company: Measures its performance or process against other companies’ best in class practices, Determines how those companies achieved their performance levels, Uses the information to improve its own performance. Benchmarking is an important tool in the improvement of your process for several reasons. First, it allows you to compare your relative position for this product or service against industry leaders or other companies outside your industry who perform similar functions. Second, it helps you identify potential Xs by comparing your process to the benchmarked process. Third, it may encourage innovative or direct applications of solutions from other businesses to your product or process. And finally, benchmarking can help to build acceptance for your project’s results when they are compared to benchmark data obtained from industry leaders. Benchmarking can be done at any point in the Six Sigma process when you need to develop a new process or improve an existing one N/A
Best Subsets Tells you the best X to use when you’re comparing multiple X’s in regression assessment. Best Subsets is an efficient way to select a group of “best subsets” for further analysis by selecting the smallest subset that fulfills certain statistical criteria. The subset model may actually estimate the regression coefficients and predict future responses with smaller variance than the full model using all predictors Typically used before or after a multiple-regression analysis. Particularly useful in determining which X combination yields the best R-sq value.
Binary Logistic Regression Binary logistic regression is useful in two important applications: analyzing the differences among discrete Xs, and modeling the relationship between a discrete binary Y and discrete and/or continuous Xs. The predicted values will be probabilities p(d) of an event such as success or failure, not an event count, and will be bounded between zero and one (because they are probabilities). Generally speaking, logistic regression is used when the Ys are discrete and the Xs are continuous. Data type: Defectives Y / Continuous & Discrete X. The goodness-of-fit tests, with p-values ranging from 0.312 to 0.724, indicate that there is insufficient evidence that the model does not fit the data adequately; if the p-value is less than your accepted α level, the test would indicate sufficient evidence for a conclusion of an inadequate fit.
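A minimal sketch of fitting a binary logistic model (hypothetical data; uses scikit-learn for illustration, which is an assumption, not the course's Minitab workflow):

    # Model the probability of a binary outcome from a continuous X
    from sklearn.linear_model import LogisticRegression
    X = [[30], [35], [40], [45], [50], [55], [60], [65]]  # wait time (minutes)
    y = [0, 0, 0, 1, 0, 1, 1, 1]                          # 1 = patient left the practice
    model = LogisticRegression().fit(X, y)
    print(model.predict_proba([[58]])[0, 1])  # predicted probability, bounded 0..1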
Box Plot A box plot is a basic graphing tool that displays the centering, spread, and distribution of a continuous data set. In simplified terms, it is made up of a box and whiskers (and occasional outliers) that correspond to each fourth, or quartile, of the data set. The box represents the second and third quartiles of data. The line that bisects the box is the median of the entire data set; 50% of the data points fall below this line and 50% fall above it. The first and fourth quartiles are represented by "whiskers," or lines that extend from both ends of the box. A box plot can help you visualize the centering, spread, and distribution of your data quickly. It is especially useful to view more than one box plot simultaneously to compare the performance of several processes, such as the price quote cycle between offices or the accuracy of component placement across several production lines. A box plot can help identify candidates for the causes behind your list of potential Xs. It also is useful in tracking process improvement by comparing successive plots generated over time. You can use a box plot throughout an improvement project, although it is most useful in the Analyze phase. In the Measure phase you can use a box plot to begin to understand the nature of a problem. In the Analyze phase a box plot can help you identify potential Xs that should be investigated further; it also can help eliminate potential Xs. In the Improve phase you can use a box plot to validate potential improvements.
Box-Cox Transformation The Box-Cox transformation is used to find the mathematical function needed to translate a continuous but nonnormal distribution into a normal distribution. After you have entered your data, Minitab tells you what mathematical function can be applied to each of your data points to bring your data closer to a normal distribution. Many tools require that data be normally distributed to produce accurate results; if the data set is not normal, this may significantly reduce the confidence in the results obtained. If your data is not normally distributed, you may encounter problems in three areas. In calculating Z values with continuous data, you could calculate an inaccurate representation of your process capability. In constructing control charts, your process may appear more or less in control than it really is. In hypothesis testing, as your data becomes less normal, the results of your tests may not be valid.
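A minimal Box-Cox sketch (hypothetical right-skewed, strictly positive data; assumes SciPy):

    # Transform skewed data toward normality; lam is the fitted lambda
    from scipy import stats
    skewed = [1.2, 1.5, 2.1, 2.8, 3.9, 5.5, 8.1, 12.4]
    transformed, lam = stats.boxcox(skewed)
    print(lam)  # e.g., lam near 0 corresponds to a log transform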
Brainstorming Brainstorming is a tool that allows for open and creative thinking. It encourages all team members to participate and to build on each other’s creativity Brainstorming is helpful because it allows your team to generate many ideas on a topic creatively and efficiently without criticism or judgment. Brainstorming can be used any time you and your team need to creatively generate numerous ideas on any topic. You will use brainstorming many times throughout your project whenever you feel it is appropriate. You also may incorporate brainstorming into other tools, such as QFD, tree diagrams, process mapping, or FMEA.
c Chart A c chart is a graphical tool that allows you to view the actual number of defects in each subgroup. Unlike continuous data control charts, discrete data control charts can monitor many product quality characteristics simultaneously. For example, you could use a c chart to monitor many types of defects in a call center process (like hang-ups, incorrect information given, disconnections) on a single chart when the subgroup size is constant. The c chart is a tool that will help you determine if your process is in control by determining whether special causes are present. The presence of special cause variation indicates that factors are influencing the output of your process; eliminating the influence of these factors will improve the performance of your process and bring it into control. Use the c chart in the Control phase to verify that your process remains in control after the sources of special cause variation have been removed. The c chart is used for processes that generate discrete data; it monitors the number of defects per sample taken from a process. You should record between 5 and 10 readings, and the sample size must be constant. The c chart can be used in both low- and high-volume environments. Data type: Continuous X, Attribute Y.
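The standard c chart limits can be sketched directly (hypothetical defect counts; assumes NumPy; the 3-sigma limit formula is the conventional one for counts):

    # c chart centerline and control limits (defect counts, constant sample size)
    import numpy as np
    defect_counts = [4, 6, 3, 7, 5, 4, 8, 5, 6, 4]
    c_bar = np.mean(defect_counts)                 # centerline
    ucl = c_bar + 3 * np.sqrt(c_bar)               # upper control limit
    lcl = max(0.0, c_bar - 3 * np.sqrt(c_bar))     # lower limit floored at zero
    print(c_bar, lcl, ucl)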
CAP Includes/Excludes A group exercise used to establish scope and facilitate discussion. Effort focuses on delineating project boundaries. It encourages group participation, increases individual involvement and understanding of team efforts, prevents errant team efforts in later project stages (waste), and helps to orient new team members.
CAP Stakeholder Analysis Confirms management or stakeholder acceptance and prioritization of the project and team efforts. Helps to eliminate low-priority projects. Ensures management support and compatibility with business goals. Used in the Define phase.
Capability Analysis Capability analysis is a Minitab™ tool that visually compares actual process performance to the performance standards. The capability analysis output includes an illustration of the data and several performance statistics. The plot is a histogram with the performance standards for the process expressed as upper and lower specification limits (USL and LSL). A normal distribution curve is calculated from the process mean and standard deviation; this curve is overlaid on the histogram. Beneath this graphic is a table listing several key process parameters such as mean, standard deviation, capability indexes, and parts per million (ppm) above and below the specification limits. When describing a process, it is important to identify sources of variation as well as process segments that do not meet performance standards. Capability analysis is a useful tool because it illustrates the centering and spread of your data in relation to the performance standards and provides a statistical summary of process performance. Capability analysis will help you describe the problem and evaluate the proposed solution in statistical terms. Capability analysis is used with continuous data whenever you need to compare actual process performance to the performance standards. You can use this tool in the Measure phase to describe process performance in statistical terms. In the Improve phase, you can use capability analysis when you optimize and confirm your proposed solution. In the Control phase, capability analysis will help you compare the actual improvement of your process to the performance standards.
Cause and Effect Diagram A cause and effect diagram is a visual tool that logically organizes possible causes for a specific problem or effect by graphically displaying them in increasing detail. It is sometimes called a fishbone diagram because of its fishbone shape. This shape allows the team to see how each cause relates to the effect. It then allows you to determine a classification related to the impact and ease of addressing each cause. A cause and effect diagram allows your team to explore, identify, and display all of the possible causes related to a specific problem. The diagram can increase in detail as necessary to identify the true root cause of the problem. Proper use of the tool helps the team organize thinking so that all the possible causes of the problem, not just those from one person's viewpoint, are captured. Therefore, the cause and effect diagram reflects the perspective of the team as a whole and helps foster consensus in the results because each team member can view all the inputs. You can use the cause and effect diagram whenever you need to break an effect down into its root causes. It is especially useful in the Measure, Analyze, and Improve phases of the DMAIC process.
Chi Square–Test of Independence The chi square-test of independence is a test of association (nonindependence) between discrete variables. It is also referred to as the test of association. It is based on a mathematical comparison of the number of observed counts against the expected number of counts to determine if there is a difference in output counts based on the input category. Example: The number of units failing inspection on the first shift is greater than the number of units failing inspection on the second shift. Example: There are fewer defects on the revised application form than there were on the previous application form The chi square-test of independence is useful for identifying a significant difference between count data for two or more levels of a discrete variable Many statistical problem statements and performance improvement goals are written in terms of reducing DPMO/DPU. The chi square-test of independence applied to before and after data is a way to prove that the DPMO/DPU have actually been reduced. When you have discrete Y and X data (nominal data in a table-of-total-counts format, shown in fig. 1) and need to know if the Y output counts differ for two or more subgroup categories (Xs), use the chi square test. If you have raw data (untotaled), you need to form the contingency table. Use Stat > Tables > Cross Tabulation and check the Chisquare analysis box. discrete (category or count) At least one group is statistically different.
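A minimal sketch of the chi-square test of independence on a table of counts (hypothetical counts; assumes SciPy):

    # Does the failure rate depend on the shift?
    from scipy.stats import chi2_contingency
    table = [[12, 30],   # shift 1: fail, pass
             [25, 28]]   # shift 2: fail, pass
    chi2, p, dof, expected = chi2_contingency(table)
    print(p, dof)  # p < 0.05: counts depend on shift; dof = (#rows - 1)(#cols - 1)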
Control Charts Control charts are time-ordered graphical displays of data that plot process variation over time. Control charts are the major tools used to monitor processes to ensure they remain stable. Control charts are characterized by a centerline, which represents the process average, or the middle point about which plotted measures are expected to vary randomly, and upper and lower control limits, which define the area three standard deviations on either side of the centerline. Control limits reflect the expected range of variation for that process. Control charts determine whether a process is in control or out of control. A process is said to be in control when only common causes of variation are present; this is represented on the control chart by data points fluctuating randomly within the control limits. Data points outside the control limits and those displaying nonrandom patterns indicate special cause variation. When special cause variation is present, the process is said to be out of control. Control charts identify when special cause is acting on the process but do not identify what the special cause is. There are two categories of control charts, characterized by the type of data you are working with: continuous data control charts and discrete data control charts. Control charts serve as a tool for the ongoing control of a process and provide a common language for discussing process performance. They help you understand variation and use that knowledge to control and improve your process. In addition, control charts function as a monitoring system that alerts you to the need to respond to special cause variation so you can put in place an immediate remedy to contain any damage. In the Measure phase, use control charts to understand the performance of your process as it exists before process improvements. In the Analyze phase, control charts serve as a troubleshooting guide that can help you identify sources of variation (Xs). In the Control phase, use control charts to: 1. make sure the vital few Xs remain in control to sustain the solution; 2. show process performance after full-scale implementation of your solution (you can compare the control chart created in the Control phase with that from the Measure phase to show process improvement); and 3. verify that the process remains in control after the sources of special cause variation have been removed.
Data Collection Plan Failing to establish a data collection plan can be an expensive mistake in a project. Without a plan, data collection may be haphazard, resulting in insufficient, unnecessary, or inaccurate information. This is often called "bad" data. A data collection plan provides a basic strategy for collecting accurate data efficiently. Any time data is needed, you should draft a data collection plan before beginning to collect it.
Design Analysis Spreadsheet The design analysis spreadsheet is an MS-Excel™ workbook that has been designed to perform partial derivative analysis and root sum of squares analysis. The design analysis spreadsheet provides a quick way to predict the mean and standard deviation of an output measure (Y), given the means and standard deviations of the inputs (Xs). This will help you develop a statistical model of your product or process, which in turn will help you improve that product or process. The partial derivative of Y with respect to X is called the sensitivity of Y with respect to X, or the sensitivity coefficient of X; for this reason, partial derivative analysis is sometimes called sensitivity analysis. The design analysis spreadsheet can help you improve, revise, and optimize your design. It can also: improve a product or process by identifying the Xs which have the most impact on the response; identify the factors whose variability has the highest influence on the response and target their improvement by adjusting tolerances; identify the factors that have low influence and can be allowed to vary over a wider range; be used with the Solver optimization routine for complex functions (Y equations) with many constraints (note that you must unprotect the worksheet before using Solver); and be used with process simulation to visualize the response given a set of constraints. Partial derivative analysis is widely used in product design, manufacturing, process improvement, and commercial services during concept design, capability assessment, and creation of the detailed design. When the Xs are known to be highly non-normal (and especially if the Xs have skewed distributions), Monte Carlo analysis may be a better choice than partial derivative analysis. Unlike root sum of squares (RSS) analysis, partial derivative analysis can be used with nonlinear transfer functions. Use partial derivative analysis when you want to predict the mean and standard deviation of a system response (Y), given the means and standard deviations of the inputs (Xs), when the transfer function Y = f(X1, X2, ..., Xn) is known. However, the inputs (Xs) must be independent of one another (i.e., not correlated).
Design of Experiment (DOE) Design of experiment (DOE) is a tool that allows you to obtain information about how factors (Xs), alone and in combination, affect a process and its output (Y). Traditional experiments generate data by changing one factor at a time, usually by trial and error; this approach often requires a great many runs and cannot capture the effect of combined factors on the output. By allowing you to test more than one factor at a time, as well as different settings for each factor, DOE is able to identify all factors and combinations of factors that affect the process Y. DOE uses an efficient, cost-effective, and methodical approach to collecting and analyzing data related to a process output and the factors that affect it. In general, use DOE when you want to:
1. Identify and quantify the impact of the vital few Xs on your process output.
2. Describe the relationship between Xs and a Y with a mathematical model.
3. Determine the best configuration of factor settings.
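As an illustration only, the sketch below builds a 2^3 full factorial design in Python and estimates main effects from the responses; the factor names and response values are hypothetical.

# A minimal sketch: 2^3 full factorial design matrix and main-effect estimates.
import itertools
import numpy as np

factors = ["A", "B", "C"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))
# Hypothetical responses, one per run (in the run order of `design`).
y = np.array([12.1, 13.0, 14.2, 15.5, 12.0, 13.2, 14.1, 15.8])

for name, column in zip(factors, design.T):
    # Main effect = mean response at the high setting minus mean at the low setting.
    effect = y[column == 1].mean() - y[column == -1].mean()
    print(f"Main effect of {name}: {effect:+.2f}")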
Design Scorecards Design scorecards are a means for gathering data, predicting final quality, analyzing drivers of poor quality, and modifying design elements before a product is built. This makes proactive corrective action possible, rather than initiating reactive quality efforts during pre-production. Design scorecards are an MS-Excel™ workbook that has been designed to automatically calculate Z values for a product based on user-provided inputs for all the sub-processes and parts that make up the product. Design scorecards have six basic components:
1. Top-level scorecard, used to report the rolled-up ZST prediction.
2. Performance worksheet, used to estimate defects caused by lack of design margin.
3. Process worksheet, used to estimate defects in process as a result of the design configuration.
4. Parts worksheet, used to estimate defects due to incoming materials.
5. Software worksheet, used to estimate defects in software.
6. Reliability worksheet, used to estimate defects due to reliability.
Design scorecards can be used anytime a product or process is being designed or modified and it is necessary to predict defect levels before implementing a process. They can be used in either the DMADV or DMAIC processes.
Discrete Data Analysis Method The Discrete Data Analysis (DDA) method is a tool used to assess the variation in a measurement system due to reproducibility, repeatability, and/or accuracy. This tool applies to discrete data only. The DDA method is an important tool because it provides a method to independently assess the most common types of measurement variation-repeatability, reproducibility, and/or accuracy. Completing the DDA method will help you to determine whether the variation from repeatability, reproducibility, and/or accuracy in your measurement system is an acceptably small portion of the total observed variation. Use the DDA method after the project data collection plan is formulated or modified and before the project data collection plan is finalized and data is collected. Choose the DDA method when you have discrete data and you want to determine if the measurement variation due to repeatability, reproducibility, and/or accuracy is an acceptably small portion of the total observed variation
Discrete Event Simulation Discrete event simulation is conducted for processes that are dictated by events at distinct points in time; each occurrence of an event impacts the current state of the process. An example of a discrete event is the arrival of a phone call at a call center. Timing in a discrete event model increases incrementally based on the arrival and departure of the inputs or resources. ProcessModel™ is a process modeling and analysis tool that accelerates the process improvement effort: it combines a simple flowcharting function with a simulation process to produce a quick and easy tool for documenting, analyzing, and improving business processes. Discrete event simulation is used in the Analyze phase of a DMAIC project to understand the behavior of important process variables. In the Improve phase, discrete event simulation is used to predict the performance of an existing process under different conditions and to test new process ideas or alternatives in an isolated environment. Use ProcessModel™ when you reach step 4, Implement, of the 10-step simulation process.
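For illustration, here is a minimal discrete event simulation sketch using the open-source simpy package rather than ProcessModel™: patients arrive at a single doctor and queue, echoing the case study's wait-time problem. The arrival and service rates are hypothetical assumptions.

# A minimal M/M/1-style sketch (assumes simpy is installed): patients arrive
# at random, queue for one doctor, and we record time spent waiting.
import random
import simpy

ARRIVAL_MEAN, SERVICE_MEAN = 30.0, 45.0   # minutes; hypothetical rates
waits = []

def patient(env, doctor):
    arrived = env.now
    with doctor.request() as req:         # join the queue for the doctor
        yield req
        waits.append(env.now - arrived)   # time spent in the waiting room
        yield env.timeout(random.expovariate(1.0 / SERVICE_MEAN))

def arrivals(env, doctor):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(patient(env, doctor))

env = simpy.Environment()
doctor = simpy.Resource(env, capacity=1)
env.process(arrivals(env, doctor))
env.run(until=600)                        # simulate one 10-hour office day
print(f"patients seen: {len(waits)}, mean wait: {sum(waits)/len(waits):.1f} min")

Because the assumed service time exceeds the arrival interval, the queue grows over the day, which is exactly the behavior this kind of model is meant to expose.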
Dot Plot A dot plot gives a quick graphical comparison of two or more processes' variation or spread. Use it when comparing the variation or spread of two or more processes.
Failure Modes and Effects Analysis (FMEA) A means/method to identify ways a process can fail, estimate the risks of those failures, evaluate a control plan, and prioritize actions related to the process. Use it for complex or new processes, or when customers are involved.
Gage R&R-ANOVA Method Gage R&R-ANOVA method is a tool used to assess the variation in a measurement system due to reproducibility and/or repeatability. An advantage of this tool is that it can separate the individual effects of repeatability and reproducibility and then break down reproducibility into the components "operator" and "operator by part." This tool applies to continuous data only. Gage R&R-ANOVA method is an important tool because it provides a method to independently assess the most common types of measurement variation, repeatability and reproducibility. This tool will help you determine whether the variation from repeatability and/or reproducibility in your measurement system is an acceptably small portion of the total observed variation. In the Measure phase, use the Gage R&R-ANOVA method after the project data collection plan is formulated or modified and before it is finalized and data is collected. Choose the ANOVA method when you have continuous data and you want to determine if the measurement variation due to repeatability and/or reproducibility is an acceptably small portion of the total observed variation.
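A minimal sketch of the crossed ANOVA model behind Gage R&R, assuming pandas and statsmodels are available; the parts, operators, and measurements are hypothetical. The operator, part, and operator-by-part mean squares feed the reproducibility components, and the residual reflects repeatability.

# Crossed two-way ANOVA (operator x part) as used in Gage R&R-ANOVA.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "part":     [1, 1, 2, 2, 3, 3] * 2,
    "operator": ["A"] * 6 + ["B"] * 6,
    "measure":  [10.1, 10.2, 12.0, 11.9, 9.8, 9.9,
                 10.3, 10.2, 12.2, 12.1, 10.0, 9.8],
})
model = ols("measure ~ C(operator) * C(part)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # mean squares feed the variance components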
Gage R&R-Short Method Gage R&R-Short Method is a tool used to assess the variation in a measurement system due to the combined effect of reproducibility and repeatability. An advantage of this tool is that it requires only two operators and five samples to complete the analysis. A disadvantage is that the individual effects of repeatability and reproducibility cannot be separated. This tool applies to continuous data only. Gage R&R-Short Method is an important tool because it provides a quick method of assessing the most common types of measurement variation using only five parts and two operators. Completing the Gage R&R-Short Method will help you determine whether the combined variation from repeatability and reproducibility in your measurement system is an acceptably small portion of the total observed variation. Use Gage R&R-Short Method after the project data collection plan is formulated or modified and before it is finalized and data is collected. Choose the Gage R&R-Short Method when you have continuous data and you believe the total measurement variation due to repeatability and reproducibility is an acceptably small portion of the total observed variation, but you need to confirm this belief. For example, you may want to verify that no changes occurred since a previous Gage R&R study. Gage R&R-Short Method can also be used in cases where sample size is limited.
GRPI (Goals, Roles, Processes, and Interpersonal relationships) is an excellent tool for organizing newly formed teams. It is valuable in helping a group of individuals work as an effective team, one of the key ingredients to success in a DMAIC project. GRPI is an excellent team-building tool and, as such, should be initiated at one of the first team meetings. In the DMAIC process, this generally happens in the Define phase, where you create your charter and form your team. Continue to update your GRPI checklist throughout the DMAIC process as your project unfolds and as your team develops.
Histogram A histogram is a basic graphing tool that displays the relative frequency or occurrence of data values, showing which data values occur most and least frequently. A histogram illustrates the shape, centering, and spread of a data distribution and indicates whether there are any outliers. The frequency of occurrence is displayed on the y-axis, where the height of each bar indicates the number of occurrences for that interval (or class) of data, such as 1 to 3 days, 4 to 6 days, and so on. Classes of data are displayed on the x-axis; the grouping of data into classes is the distinguishing feature of a histogram. Because it is important to identify and control all sources of variation, histograms are valuable: they allow you to visualize large quantities of data that would otherwise be difficult to interpret, and they give you a way to quickly assess the distribution of your data and the variation that exists in your process. The shape of a histogram offers clues that can lead you to possible Xs. For example, when a histogram has two distinct peaks, or is bimodal, you would look for a cause for the difference in peaks. Histograms can be used throughout an improvement project. In the Measure phase, you can use histograms to begin to understand the statistical nature of the problem. In the Analyze phase, histograms can help you identify potential Xs that should be investigated further; they can also help eliminate potential Xs. In the Improve phase, you can use histograms to characterize and confirm your solution. In the Control phase, histograms give you a visual reference to help track and maintain your improvements.
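As a quick illustration, the sketch below bins hypothetical wait-time data into classes and prints a text histogram; a two-cluster shape like this one would prompt a search for two underlying process conditions.

# A minimal sketch: binning data to inspect shape, centering, and spread.
import numpy as np

waits = np.array([35, 38, 41, 42, 44, 45, 47, 70, 72, 75, 78, 80])
counts, edges = np.histogram(waits, bins=5)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:5.1f}-{hi:5.1f} | {'*' * n}")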
Homogeneity of Variance Homogeneity of variance is a test used to determine if the variances of two or more samples are different, or not homogeneous. The homogeneity of variance test is a comparison of the variances (sigma, or standard deviations) of two or more distributions. While large differences in variance between a small number of samples are detectable with graphical tools, the homogeneity of variance test is a quick way to reliably detect small differences in variance between large numbers of samples. There are two main reasons for using the homogeneity of variance test:
1. A basic assumption of many statistical tests is that the variances of the different samples are equal. Some statistical procedures, such as the 2-sample t-test, gain additional test power if the variances of the two samples can be considered equal.
2. Many statistical problem statements and performance improvement goals are written in terms of "reducing the variance." Homogeneity of variance tests can be performed on before and after data as a way to prove that the variance has been reduced.
Use Levene's test; a significant result (p < 0.05) indicates that at least one group of data is different from at least one other group.
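A minimal sketch of a before/after variance check using Levene's test from scipy; the samples are hypothetical. A p-value below 0.05 suggests the variances differ, e.g., evidence that variance was reduced.

# Levene's test for homogeneity of variance on hypothetical before/after data.
from scipy import stats

before = [52, 61, 48, 70, 55, 66, 45, 73]
after  = [54, 57, 55, 59, 56, 58, 53, 60]
stat, p = stats.levene(before, after)
print(f"Levene statistic = {stat:.2f}, p-value = {p:.4f}")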
I-MR Chart The I-MR chart is a tool to help you determine if your process is in control by seeing if special causes are present. Consider using an I-MR chart in:
1. The Measure phase, to separate common causes of variation from special causes.
2. The Analyze and Improve phases, to ensure process stability before completing a hypothesis test.
3. The Control phase, to verify that the process remains in control after the sources of special cause variation have been removed.
Kano Analysis Kano analysis is a customer research method for classifying customer needs into four categories; it relies on a questionnaire filled out by or with the customer. It helps you understand the relationship between the fulfillment or nonfulfillment of a need and the satisfaction or dissatisfaction experienced by the customer. The four categories are (1) delighters, (2) must-be elements, (3) one-dimensionals, and (4) indifferent elements. There are two additional categories into which customer responses to the Kano survey can fall: reverse elements and questionable results. The categories in Kano analysis represent a point in time, and needs are constantly evolving; often what is a delighter today can become simply a must-be over time. Kano analysis provides a systematic, data-based method for gaining deeper understanding of customer needs by classifying them. Use Kano analysis after a list of potential needs that have to be satisfied is generated (through, for example, interviews, focus groups, or observations). Kano analysis is useful when you need to collect data on customer needs and prioritize them to focus your efforts.
Kruskal-Wallis Test Compares two or more means when the underlying distributions are unknown; it is a non-parametric test for measurement or count data. A significant result (p < 0.05) indicates that at least one mean is different.
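A minimal sketch of a Kruskal-Wallis comparison using scipy; the cycle times for three delivery methods are hypothetical.

# Kruskal-Wallis test: compare groups without assuming normality.
from scipy import stats

method_a = [22, 25, 27, 31, 24]
method_b = [30, 33, 29, 35, 32]
method_c = [23, 26, 24, 28, 25]
stat, p = stats.kruskal(method_a, method_b, method_c)
print(f"H = {stat:.2f}, p = {p:.4f}")  # p < 0.05: at least one group differs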
Matrix Plot A tool used for a high-level look at relationships between several parameters. Matrix plots are often a first step in determining which Xs contribute most to your Y. Matrix plots can save time by allowing you to drill down into data and determine which parameters best relate to your Y. Use matrix plots early in your Analyze phase.
Mistake Proofing Mistake-proofing devices prevent defects by preventing errors or by predicting when errors could occur. Mistake proofing is an important tool because it allows you to take a proactive approach to eliminating errors at their source before they become defects. Use mistake proofing in the Measure phase when you are developing your data collection plan, in the Improve phase when you are developing your proposed solution, and in the Control phase when developing the control plan. Mistake proofing is appropriate when there are:
1. Process steps where human intervention is required.
2. Repetitive tasks where physical manipulation of objects is required.
3. Steps where errors are known to occur.
4. Opportunities for predictable errors to occur.
Monte Carlo Analysis Monte Carlo analysis is a decision-making and problem-solving tool used to evaluate a large number of possible scenarios of a process. Each scenario represents one possible set of values for each of the variables of the process, and the transfer function combines those values to produce an outcome Y. By repeating this method many times, you can develop a distribution for the overall process performance. Monte Carlo can be used in such broad areas as finance, commercial quality, engineering design, manufacturing, and process design and improvement. Monte Carlo can be used with any type of distribution; its value comes from the increased knowledge we gain about variation in the output. Performing a Monte Carlo analysis is one way to understand the variation that naturally exists in your process. One of the ways to reduce defects is to decrease the output variation; Monte Carlo focuses on understanding what variations exist in the input Xs in order to reduce the variation in the output Y.
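A minimal sketch of the method: sample the Xs from assumed distributions, push each scenario through a transfer function, and study the resulting distribution of Y. The function Y = X1*X2, the distributions, and the specification limit are all hypothetical.

# Monte Carlo propagation of input variation through a transfer function.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x1 = rng.normal(10.0, 0.5, n)      # X1 ~ Normal(10, 0.5)
x2 = rng.uniform(1.8, 2.2, n)      # X2 ~ Uniform(1.8, 2.2)
y = x1 * x2                        # hypothetical transfer function Y = X1 * X2
usl = 22.0                         # hypothetical upper specification limit
print(f"Y mean={y.mean():.2f}, std={y.std(ddof=1):.2f}, "
      f"P(Y > USL)={np.mean(y > usl):.4f}")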
Multi-Generational Product/Process Planning Multigenerational product/process planning (MGPP) is a procedure that helps you create, upgrade, leverage, and maintain a product or process in a way that can reduce production costs and increase market share. A key element of MGPP is its ability to help you follow up product/process introduction with improved, derivative versions of the original product. Most products or processes, once introduced, tend to remain unchanged for many years. Yet competitors, technology, and the marketplace, as personified by the ever more demanding consumer, change constantly. Therefore, it makes good business sense to incorporate into product/process design a method for anticipating and taking advantage of these changes. You should follow an MGPP in conjunction with your business's overall marketing strategy. The market process applied to MGPP usually takes place over three or more generations; these generations cover the first three to five years of product/process development and introduction.
Multiple Regression Multiple regression is a method that enables you to determine the relationship between a continuous process output (Y) and several factors (Xs). Multiple regression will help you understand the relationship between the process output (Y) and the several factors (Xs) that may affect it. Understanding this relationship allows you to:
1. Identify important Xs.
2. Identify the amount of variation explained by the model.
3. Reduce the number of Xs prior to design of experiment (DOE).
4. Predict Y based on combinations of X values.
5. Identify possible nonlinear relationships, such as a quadratic (X1^2) or an interaction (X1*X2).
The output of a multiple regression analysis may demonstrate the need for designed experiments that establish a cause-and-effect relationship, or identify ways to further improve the process. You can use multiple regression during the Analyze phase to help identify important Xs and during the Improve phase to define the optimized solution. Multiple regression can be used with both continuous and discrete Xs; if you have only discrete Xs, use ANOVA-GLM. Typically you would use multiple regression on existing data; if you need to collect new data, it may be more efficient to use a DOE. A p-value below 0.05 for a coefficient indicates that a correlation is detected.
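A minimal sketch of a multiple regression fit, assuming pandas and statsmodels; the data are synthetic for illustration. The output shows which Xs matter (coefficient p-values) and how much variation the model explains (R-squared).

# Multiple regression of Y on two Xs using a synthetic data set.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
df = pd.DataFrame({"x1": rng.normal(size=50), "x2": rng.normal(size=50)})
df["y"] = 3.0 + 2.0 * df.x1 - 1.5 * df.x2 + rng.normal(scale=0.5, size=50)

fit = smf.ols("y ~ x1 + x2", data=df).fit()
print(fit.summary().tables[1])     # coefficients with p-values
print(f"R-squared = {fit.rsquared:.3f}")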
Multi-Vari Chart A multi-vari chart enables you to see the effect multiple variables have on a Y. It also helps you see variation within subgroups, between subgroups, and over time. By looking at the patterns of variation, you can identify or eliminate possible Xs.
Normal Probability Plot Used in all phases to determine the normality of data and to see if multiple Xs exist in your data. Applies to continuous (measurement) data. A significant result (p < 0.05) indicates the data does not follow a normal distribution.
Normality Test A normality test is a statistical process used to determine if a sample, or any group of data, fits a normal distribution. A normality test can be done mathematically or graphically. Many statistical tests (tests of means and tests of variances) assume that the data being tested is normally distributed; a normality test is used to determine if that assumption is valid. There are two occasions when you should use a normality test:
1. When you are first trying to characterize raw data, normality testing is used in conjunction with graphical tools such as histograms and box plots.
2. When you are analyzing your data, and you need to calculate basic statistics such as Z values or employ statistical tests that assume normality, such as t-test and ANOVA.
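A minimal sketch pairing the mathematical check with the graphical one, using scipy; the data values are hypothetical. A p-value below 0.05 suggests the data do not follow a normal distribution.

# Shapiro-Wilk normality test, plus a pointer to the graphical Q-Q check.
from scipy import stats

data = [41, 43, 38, 45, 40, 44, 39, 42, 46, 37]
stat, p = stats.shapiro(data)
print(f"Shapiro-Wilk W = {stat:.3f}, p = {p:.3f}")
# For the graphical check (requires matplotlib):
#   import matplotlib.pyplot as plt
#   stats.probplot(data, dist="norm", plot=plt); plt.show()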
np Chart An np chart is a graphical tool that allows you to view the actual number of defectives and detect the presence of special causes. The np chart is a tool that will help you determine if your process is in control by seeing if special causes are present. The presence of special cause variation indicates that factors are influencing the output of your process; eliminating the influence of these factors will improve the performance of your process and bring it into control. You will use an np chart in the Control phase to verify that the process remains in control after the sources of special cause variation have been removed. The np chart is used for processes that generate discrete data. It graphs the actual number of defectives in a sample; the sample size for the np chart is constant, with between 5 and 10 defectives per sample on average.
Out-of-the-Box Thinking Out-of-the-box thinking is an approach to creativity based on overcoming the subconscious patterns of thinking that we all develop. Many businesses are successful for a brief time due to a single innovation, while continued success depends upon continued innovation. Use it for root cause analysis and new product/process development.
p Chart A p chart is a graphical tool that allows you to view the proportion of defectives and detect the presence of special causes. The p chart is used to understand the ratio of nonconforming units to the total number of units in a sample. It will help you determine if your process is in control by determining whether special causes are present; the presence of special cause variation indicates that factors are influencing the output of your process, and eliminating the influence of these factors will improve the performance of your process and bring it into control. You will use a p chart in the Control phase to verify that the process remains in control after the sources of special cause variation have been removed. The p chart is used for processes that generate discrete data. The sample size for the p chart can vary but usually consists of 100 or more units.
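A minimal sketch of the limit calculation for attribute charts; the defective counts are hypothetical. For the p chart the limits are p-bar plus or minus 3*sqrt(p-bar*(1 - p-bar)/n); the np chart simply multiplies through by the constant sample size n.

# p chart limits from hypothetical counts of defectives in samples of size n.
import numpy as np

n = 100                                   # constant sample size
defectives = np.array([4, 6, 3, 7, 5, 4, 8, 5])
p = defectives / n                        # proportion defective per sample
p_bar = p.mean()
se = np.sqrt(p_bar * (1 - p_bar) / n)
ucl, lcl = p_bar + 3 * se, max(p_bar - 3 * se, 0.0)
print(f"p-bar={p_bar:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}")
print("out of control:", np.where((p > ucl) | (p < lcl))[0])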
Pareto A Pareto chart is a graphing tool that prioritizes a list of variables or factors based on impact or frequency of occurrence. This chart is based on the Pareto principle, which states that typically 80% of the defects in a process or product are caused by only 20% of the possible causes. It is easy to interpret, which makes it a convenient communication tool for use by individuals not familiar with the project. The Pareto chart will not detect small differences between categories; more advanced statistical tools are required in such cases. Use it in the Define phase to stratify Voice of the Customer data, in the Measure phase to stratify data collected on the project Y, and in the Analyze phase to assess the relative impact or frequency of different factors, or Xs.
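As a small illustration, the sketch below ranks hypothetical causes of patient wait time by frequency and accumulates percentages to surface the vital few.

# Pareto ordering of cause counts with a running cumulative percentage.
from collections import Counter

causes = Counter({"long consultations": 120, "overbooking": 95,
                  "late doctor start": 42, "paperwork delays": 18,
                  "equipment issues": 6})
total, running = sum(causes.values()), 0
for cause, count in causes.most_common():
    running += count
    print(f"{cause:20s} {count:4d}  cum {100 * running / total:5.1f}%")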
Process Mapping Process mapping is a tool that provides structure for defining a process in a simplified, visual manner by displaying the steps, events, and operations (in chronological order) that make up a process. As you examine your process in greater detail, your map will evolve from the process you "think" exists to what "actually" exists, and will evolve again to reflect what "should" exist, the process after improvements are made. In the Define phase, you create a high-level process map to get an overview of the steps, events, and operations that make up the process. This will help you understand the process and verify the scope you defined in your charter. It is particularly important that your high-level map reflects the process as it actually is, since it serves as the basis for more detailed maps. In the Measure and Analyze phases, you create a detailed process map to help you identify problems in the process; your improvement project will focus on addressing these problems. In the Improve phase, you can use process mapping to develop solutions by creating maps of how the process "should be."
Pugh Matrix The Pugh matrix is the tool used to facilitate a disciplined, team-based process for concept selection and generation. Several concepts are evaluated according to their strengths and weaknesses against a reference concept called the datum, which is the best current concept at each iteration of the matrix. The Pugh matrix encourages comparison of several different concepts against a base concept, creating stronger concepts and eliminating weaker ones until an optimal concept finally is reached. It provides an objective process for reviewing, assessing, and enhancing design concepts the team has generated with reference to the project's CTQs. Because it employs agreed-upon criteria for assessing each concept, it becomes difficult for one team member to promote his or her own concept for irrational reasons. The Pugh matrix is the recommended method for selecting the most promising concepts in the Analyze phase of the DMADV process. It is used when the team already has developed several alternative concepts that potentially can meet the CTQs developed during the Measure phase and must choose the one or two concepts that will best meet the performance requirements for further development in the Design phase.
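A minimal sketch of the scoring mechanics: each concept is rated +1, 0, or -1 against the datum for every criterion, and the net scores guide which concepts to strengthen or drop. The criteria, concepts, and scores below are hypothetical.

# Pugh matrix scoring: concepts rated against the datum per criterion.
criteria = ["wait time", "cost", "staff workload", "patient satisfaction"]
concepts = {
    "datum (current process)":  [0, 0, 0, 0],
    "online self-scheduling":   [+1, -1, +1, +1],
    "add a nurse practitioner": [+1, -1, 0, +1],
}
for name, scores in concepts.items():
    print(f"{name:26s} net score: {sum(scores):+d}")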
Quality Function Deployment (QFD) QFD is a methodology that provides a flowdown process for CTQs from the highest to the lowest level. The flowdown process begins with the results of the customer needs mapping (VOC) as input; from that point we cascade through a series of four Houses of Quality to arrive at the internal controllable factors. QFD is a prioritization tool used to show the relative importance of factors rather than as a transfer function. QFD drives a cross-functional discussion to define what is important. It provides a vehicle for asking how products/services will be measured and what the critical variables to control processes are. The QFD process highlights trade-offs between conflicting properties and forces the team to consider each trade-off in light of the customer's requirements for the product/service. It also points out areas for improvement by giving special attention to the most important customer wants and systematically flowing them down through the QFD process. QFD produces the greatest results in situations where:
1. Customer requirements have not been clearly defined.
2. There must be trade-offs between the elements of the business.
3. There are significant investments in resources required.
Regression See Multiple Regression.
Risk Management The risk-management process is a methodology used to identify risks, analyze risks, plan, communicate, and implement abatement actions, and track the resolution of abatement actions. Any time you make a change in a process, there is potential for unforeseen failure or unintended consequences. Performing a risk assessment allows you to identify potential risks associated with planned process changes and develop abatement actions to minimize the probability of their occurrence. The risk-assessment process also determines the ownership and completion date for each abatement action. In DMAIC, risk assessment is used in the Improve phase before you make changes in the process (before running a DOE, piloting, or testing solutions) and in the Control phase to develop the control plan. In DMADV, risk assessment is used in all phases of design, especially in the Analyze and Verify phases, where you analyze and verify your concept design.
Root Sum of Squares Root sum of squares (RSS) is a statistical tolerance analysis method used to estimate the variation of a system output Y from variations in each of the system's inputs (Xs). RSS analysis is a quick method for estimating the variation in system output given the variation in system component inputs, provided the system behavior can be modeled using a linear transfer function with unit (±1) coefficients. RSS can quickly tell you the probability that the output (Y) will be outside its upper or lower specification limits. Based on this information, you can decide whether some or all of your inputs need to be modified to meet the specifications on system output, and/or whether the specifications on system output need to be changed. Use RSS when you need to quantify the variation in the output given the variation in the inputs. The following conditions must be met in order to perform RSS analysis:
1. The inputs (Xs) are independent.
2. The transfer function is linear with coefficients of +1 and/or -1.
3. The means and standard deviations of each X are known (or can be estimated).
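A minimal worked sketch of RSS for Y = X1 + X2 + X3: with independent Xs and unit coefficients, sigma_Y = sqrt(sigma_1^2 + sigma_2^2 + sigma_3^2). The means, sigmas, and specification limits below are hypothetical.

# RSS estimate of output variation and probability of exceeding the spec.
import math
from scipy.stats import norm

means  = [10.0, 5.0, 2.0]          # means of X1..X3 (Y = X1 + X2 + X3)
sigmas = [0.20, 0.10, 0.05]        # standard deviations of X1..X3
mean_y = sum(means)
sigma_y = math.sqrt(sum(s ** 2 for s in sigmas))
lsl, usl = 16.4, 17.6              # hypothetical spec limits on Y
p_out = norm.cdf(lsl, mean_y, sigma_y) + norm.sf(usl, mean_y, sigma_y)
print(f"Y ~ N({mean_y:.2f}, {sigma_y:.3f});  P(out of spec) = {p_out:.4f}")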
Run Chart A run chart is a graphical tool that allows you to view the variation of your process over time. The patterns in the run chart can help identify the presence of special cause variation and allow you to see if special causes are influencing your process; this will help you identify Xs affecting your process. A run chart is used in many phases of the DMAIC process. Consider using a run chart to:
1. Look for possible time-related Xs in the Measure phase.
2. Ensure process stability before completing a hypothesis test.
3. Look at variation within a subgroup, and compare subgroup-to-subgroup variation.
Sample Size Calculator The sample size calculator simplifies the use of the sample size formula and provides you with a statistical basis for determining the required sample size for given levels of α and β risks. The calculation helps link allowable risk with cost. If your sample size is statistically sound, you can have more confidence in your data and greater assurance that resources spent on data collection efforts and/or planned improvements will not be wasted.
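A minimal sketch of the standard formula behind such a calculator, for detecting a shift delta in a mean: n = ((z_{alpha/2} + z_beta) * sigma / delta)^2. The alpha, beta, sigma, and delta values below are assumptions for illustration.

# Sample size for detecting a mean shift at given alpha (Type I) and beta (Type II) risks.
import math
from scipy.stats import norm

alpha, beta = 0.05, 0.10            # allowable Type I and Type II risks
sigma, delta = 8.0, 5.0             # process std dev; smallest shift worth detecting
z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
n = math.ceil((z * sigma / delta) ** 2)
print(f"required sample size per group: {n}")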
Scatter Plot A scatter plot is a basic graphic tool that illustrates the relationship between two variables. The variables may be a process output (Y) and a factor affecting it (X), two factors affecting a Y (two Xs), or two related process outputs (two Ys). It is useful in determining whether trends exist between two or more sets of data. Scatter plots are used with continuous and discrete data and are especially useful in the Measure, Analyze, and Improve phases of DMAIC projects.
Simple Linear Regression Simple linear regression is a method that enables you to determine the relationship between a continuous process output (Y) and one factor (X). The relationship is typically expressed in terms of a mathematical equation, such as Y = b + mX, where Y is the process output, b is a constant, m is a coefficient, and X is the process input or factor. Simple linear regression will help you understand the relationship between the process output (Y) and any factor that may affect it (X). Understanding this relationship will allow you to predict the Y given a value of X; this is especially useful when the Y variable of interest is difficult or expensive to measure. You can use simple linear regression during the Analyze phase to help identify important Xs and during the Improve phase to define the settings needed to achieve the desired output. A p-value below 0.05 indicates sufficient evidence that the coefficients are not zero for likely Type I error rates (α levels). See Minitab for details.
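A minimal sketch using scipy's linregress; the wait-time versus patients-booked data are hypothetical, chosen to echo the case study.

# Simple linear regression: average wait time (Y) as a function of patients booked (X).
from scipy import stats

patients_booked = [6, 7, 8, 9, 10, 11, 12]
avg_wait_min    = [18, 22, 27, 30, 36, 41, 47]
res = stats.linregress(patients_booked, avg_wait_min)
print(f"wait ~ {res.intercept:.1f} + {res.slope:.1f} * booked,  "
      f"r^2 = {res.rvalue ** 2:.3f}, p = {res.pvalue:.4f}")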
Simulation Simulation is a powerful analysis tool used to experiment with a detailed process model to determine how the process output Y will respond to changes in its structure, inputs, or surroundings (Xs). A simulation model is a computer model that describes relationships and interactions among inputs and process activities; it is used to evaluate process output under a range of different conditions. Different process situations need different types of simulation models. Discrete event simulation is conducted for processes that are dictated by events at distinct points in time; each occurrence of an event impacts the current state of the process. ProcessModel is GE's standard software tool for running discrete event models. Continuous simulation is used for processes whose variables or parameters do not experience distinct start and end points; CrystalBall is GE's standard software tool for running continuous models. Simulation can help you:
1. Identify interactions and specific problems in an existing or proposed process.
2. Develop a realistic model for a process.
3. Predict the behavior of the process under different conditions.
4. Optimize process performance.
Simulation is used in the Analyze phase of a DMAIC project to understand the behavior of important process variables. In the Improve phase, simulation is used to predict the performance of an existing process under different conditions and to test new process ideas or alternatives in an isolated environment.
Six Sigma Process Report A Six Sigma process report is a Minitab™ tool that provides a baseline for measuring improvement of your product or process. It helps you compare the performance of your process or product to the performance standard and determine whether technology or control is the problem. A Six Sigma process report, used with continuous data, helps you determine process capability for your project Y. Process capability is calculated after you have gathered your data and determined your performance standards.
Six Sigma Product Report A Six Sigma product report calculates DPMO and short-term process capability. Used with discrete data, it helps you determine process capability for your project Y. You would calculate process capability after you have gathered your data and determined your performance standards.
Stepwise Regression A regression tool that filters out unwanted Xs based on specified criteria.
Tree Diagram A tree diagram is a tool used to break any concept (such as a goal, idea, objective, issue, or CTQ) into subcomponents, or lower levels of detail. It is useful in organizing information into logical categories. A tree diagram is helpful when you want to:
1. Relate a CTQ to subprocess elements (Project CTQs).
2. Determine the project Y (Project Y).
3. Select the appropriate Xs (Prioritized List of All Xs).
4. Determine task-level detail for a solution to be implemented (Optimized Solution).
u Chart A u chart is a graphical tool that allows you to view the number of defects per unit sampled and detect the presence of special causes. The u chart is a tool that will help you determine if your process is in control by determining whether special causes are present. The presence of special cause variation indicates that factors are influencing the output of your process; eliminating the influence of these factors will improve the performance of your process and bring it into control. You will use a u chart in the Control phase to verify that the process remains in control after the sources of special cause variation have been removed. The u chart is used for processes that generate discrete data; it monitors the number of defects per unit taken from a process. You should record between 20 and 30 readings, and the sample size may be variable.
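A minimal sketch of u chart limits with varying sample sizes; the defect counts are hypothetical. The limits are u-bar plus or minus 3*sqrt(u-bar / n_i), so they widen for smaller samples.

# u chart: defects per unit with per-sample control limits.
import numpy as np

defects = np.array([7, 5, 9, 6, 12, 8])     # defects found in each sample
units   = np.array([50, 40, 55, 45, 50, 60])
u = defects / units                         # defects per unit, per sample
u_bar = defects.sum() / units.sum()         # overall average defects per unit
ucl = u_bar + 3 * np.sqrt(u_bar / units)
lcl = np.maximum(u_bar - 3 * np.sqrt(u_bar / units), 0.0)
print("u:", np.round(u, 3), " flagged:", np.where((u > ucl) | (u < lcl))[0])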
Voice of the Customer (VOC) Tools The following tools are commonly used to collect VOC data: dashboards, focus groups, interviews, scorecards, and surveys. These tools are used to develop specific CTQs and associated priorities. Each VOC tool provides the team with an organized method for gathering information from customers. Without the use of structured tools, the data collected may be incomplete or biased: key groups may be inadvertently omitted from the process, information may not be gathered to the required level of detail, or the VOC data collection effort may be biased by your viewpoint. You can use VOC tools at the start of a project to determine what key issues are important to the customers, understand why they are important, and subsequently gather detailed information about each issue. VOC tools can also be used whenever you need additional customer input, such as ideas and suggestions for improvement or feedback on new solutions.
Worst Case Analysis A worst case analysis is a nonstatistical tolerance analysis tool used to identify whether combinations of inputs (Xs) at their upper and lower specification limits always produce an acceptable output measure (Y). Worst case analysis tells you the minimum and maximum limits within which your total product or process will vary. You can then compare these limits with the required specification limits to see if they are acceptable. By testing these limits in advance, you can modify any incorrect tolerance settings before actually beginning production. Use worst case analysis to analyze safety-critical Ys, and when no process data is available and only the tolerances on the Xs are known. Worst case analysis should be used sparingly because it does not take into account the probabilistic nature (that is, the likelihood of variance from the specified values) of the inputs.
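A minimal sketch of the mechanics: evaluate a transfer function at every combination of the Xs' specification limits and compare the extremes to the Y specification. The function Y = X1*X2 and the tolerance limits are hypothetical.

# Worst case analysis: evaluate Y at all corner combinations of the X limits.
import itertools

limits = {"x1": (9.5, 10.5), "x2": (1.9, 2.1)}   # (LSL, USL) for each X
f = lambda x1, x2: x1 * x2                       # hypothetical transfer function

ys = [f(*combo) for combo in itertools.product(*limits.values())]
print(f"worst case Y range: {min(ys):.2f} to {max(ys):.2f}")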
Xbar-R Chart The Xbar-R chart is a tool to help you decide if your process is in control by determining whether special causes are present. Xbar-R charts can be used in many phases of the DMAIC process when you have continuous data broken into subgroups. Consider using an Xbar-R chart:
1. In the Measure phase, to separate common causes of variation from special causes.
2. In the Analyze and Improve phases, to ensure process stability before completing a hypothesis test.
3. In the Control phase, to verify that the process remains in control after the sources of special cause variation have been removed.
Xbar-S Chart An Xbar-S chart, or mean and standard deviation chart, is a graphical tool that allows you to view the variation in your process over time. An Xbar-S chart lets you perform statistical tests that signal when a process may be going out of control. A process that is out of control has been affected by special causes as well as common causes; the chart can also show you where to look for sources of special cause variation. The Xbar portion of the chart contains the mean of each subgroup distributed over time; the S portion represents the standard deviation of the data points within each subgroup. The Xbar-S chart is a tool to help you determine if your process is in control by seeing if special causes are present. The presence of special cause variation indicates that factors are influencing the output of your process; eliminating the influence of these factors will improve the performance of your process and bring it into control. An Xbar-S chart can be used in many phases of the DMAIC process when you have continuous data. Consider using an Xbar-S chart:
1. In the Measure phase, to separate common causes of variation from special causes.
2. In the Analyze and Improve phases, to ensure process stability before completing a hypothesis test.
3. In the Control phase, to verify that the process remains in control after the sources of special cause variation have been removed.
Note: use an Xbar-R chart if the subgroup sample size is small.

Minitab Tool Reference

ANOVA
Use when: Determining if the average of a group of data is different from the averages of other (multiple) groups of data. Example: Compare multiple fixtures to determine if one or more performs differently. Minitab: Stat > ANOVA > One-way. Data format: Response data must be stacked in one column and the individual points must be tagged (numerically) in another column. Y: Variable. Xs: Attribute. p < 0.05 indicates: At least one group of data is different from at least one other group.

Box & Whisker Plot
Use when: Comparing median and variation between groups of data; also identifies outliers. Example: Compare turbine blade weights using different scales. Minitab: Graph > Boxplot. Data format: Response data must be stacked in one column and the individual points must be tagged (numerically) in another column. Y: Variable. Xs: Attribute. p < 0.05 indicates: N/A.

Cause & Effect Diagram / Fishbone
Use when: Brainstorming possible sources of variation for a particular effect. Example: Potential sources of variation in a Gage R&R. Minitab: Stat > Quality Tools > Cause and Effect. Data format: Input ideas under the proper column heading for the main branches of the fishbone; type the effect in the pulldown window. Y: All. Xs: All. p < 0.05 indicates: N/A.

Chi-Square
Use when: Determining if one set of defectives data is different from other sets of defectives data. Example: Compare DPUs between GE90 and CF6. Minitab: Stat > Tables > Chi-Square Test. Data format: Input two columns, one containing the number of non-defectives and the other containing the number of defectives. Y: Discrete. Xs: Discrete. p < 0.05 indicates: At least one group is statistically different.

Dot Plot
Use when: Quick graphical comparison of two or more processes' variation or spread. Example: Compare length of service of GE90 technicians to CF6 technicians. Minitab: Graph > Character Graphs > Dotplot. Data format: Input multiple columns of data of equal length. Y: Variable. Xs: Attribute. p < 0.05 indicates: N/A.

General Linear Models
Use when: Determining if a difference in categorical data between groups is real when taking into account other variable Xs. Example: Determine if height and weight are significant variables between two groups when looking at pay. Minitab: Stat > ANOVA > General Linear Model. Data format: Response data must be stacked in one column and the individual points must be tagged (numerically) in another column; other variables must be stacked in separate columns. Y: Variable. Xs: Attribute/Variable. p < 0.05 indicates: At least one group of data is different from at least one other group.

Histogram
Use when: Viewing the distribution of data (spread, mean, mode, outliers, etc.). Example: View the distribution of Y. Minitab: Graph > Histogram, or Stat > Quality Tools > Process Capability. Data format: Input one column of data. Y: Variable. Xs: Attribute. p < 0.05 indicates: N/A.

Homogeneity of Variance
Use when: Determining if the variation in one group of data is different from the variation in other (multiple) groups of data. Example: Compare the variation between teams. Minitab: Stat > ANOVA > Homogeneity of Variance. Data format: Response data must be stacked in one column and the individual points must be tagged (numerically) in another column. Y: Variable. Xs: Attribute. p < 0.05 indicates: (Use Levene's test) At least one group of data is different from at least one other group.

Kruskal-Wallis Test
Use when: Determining if the means of non-normal data are different. Example: Compare the means of cycle time for different delivery methods. Minitab: Stat > Nonparametrics > Kruskal-Wallis. Data format: Response data must be stacked in one column and the individual points must be tagged (numerically) in another column. Y: Variable. Xs: Attribute. p < 0.05 indicates: At least one mean is different.

Multi-Vari Analysis (see also Run Chart / Time Series Plot)
Use when: Identifying the most important types or families of variation. Example: Compare within-piece, piece-to-piece, or time-to-time variation in airfoil leading edge thickness. Minitab: Graph > Interval Plot. Data format: Response data must be stacked in one column and the individual points must be tagged (numerically) in another column, in time order. Y: Variable. Xs: Attribute. p < 0.05 indicates: N/A.

Notched Box Plot
Use when: Comparing the median, at a given confidence interval, and variation between groups of data. Example: Compare different hole drilling patterns to see if the median and spread of the diameters are the same. Minitab: Graph > Character Graphs > Boxplot. Data format: Response data must be stacked in one column and the individual points must be tagged (numerically) in another column. Y: Variable. Xs: Attribute. p < 0.05 indicates: N/A.

One-Sample t-Test
Use when: Determining if the average of a group of data is statistically equal to a specific target. Example: A manufacturer claims the average number of cookies in a 1 lb. package is 250; you sample 10 packages and find that the average is 235. Use this test to disprove the manufacturer's claim. Minitab: Stat > Basic Statistics > 1-Sample t. Data format: Input one column of data. Y: Variable. Xs: N/A. p < 0.05 indicates: The average is not equal to the target.

Pareto
Use when: Comparing how frequently different causes occur. Example: Determine which defect occurs most often for a particular engine program. Minitab: Stat > Quality Tools > Pareto Chart. Data format: Input two columns of equal length. Y: Variable. Xs: Attribute. p < 0.05 indicates: N/A.

Process Mapping
Use when: Creating a visual aid of each step in the process being evaluated. Example: Map an engine horizontal area with all rework loops and inspection points. Minitab: N/A. Data format: Use rectangles for process steps and diamonds for decision points. Y: N/A. Xs: N/A. p < 0.05 indicates: N/A.

Regression
Use when: Determining if a group of data incrementally changes with another group. Example: Determine if runout changes with temperature. Minitab: Stat > Regression > Regression. Data format: Input two columns of equal length. Y: Variable. Xs: Variable. p < 0.05 indicates: A correlation is detected.

Run Chart / Time Series Plot
Use when: Looking for trends, outliers, oscillations, etc. Example: View runout values over time. Minitab: Stat > Quality Tools > Run Chart, or Graph > Time Series Plot. Data format: Input one column of data; must also input a subgroup size (1 will show all points). Y: Variable. Xs: N/A. p < 0.05 indicates: N/A.

Scatter Plot
Use when: Looking for correlations between groups of variable data. Example: Determine if rotor blade length varies with home position. Minitab: Graph > Plot, Graph > Marginal Plot, or Graph > Matrix Plot (multiples). Data format: Input two or more groups of data of equal length. Y: Variable. Xs: Variable. p < 0.05 indicates: N/A.

Two-Sample t-Test
Use when: Determining if the average of one group of data is greater than (or less than) the average of another group of data. Example: Determine if the average radius produced by one grinder is different from the average radius produced by another grinder. Minitab: Stat > Basic Statistics > 2-Sample t. Data format: Input two columns of equal length. Y: Variable. Xs: Variable. p < 0.05 indicates: There is a difference in the means.

Fishbone Diagram Template
[Template placeholder: a fishbone (cause-and-effect) diagram with fields for Project Name; the Effect (Y) at the head; main branches for Management, Man, Method, Measurement, Machine, and Material; and multiple Cause entries along each branch.]

Capability Summary Template
[Template placeholder: a capability summary worksheet with fields for Process or Product Name, Prepared by, Responsible, Date, Sample Size, Customer Requirement (Output Variable), Measurement Technique, %R&R or P/T Ratio, Upper Specification, Lower Specification, Target, Cp, Cpk, Actions, and Key Process Output Variable.]

Process Control Plan Template
[Template placeholder: a process control plan with fields for Process or Product Name, Prepared by, Responsible, Date, Sub Process Step, Specification Characteristic, Specification/Requirement (Target, USL, LSL), Measurement Method, Sample Size, Frequency, Control Method, and Decision Rule/Corrective Action. KPOV: Key Process Output Variable. KPIV: Key Process Input Variable. LSL: Lower Specification Limit.]

Data Collection Plan Template
[Template placeholder: a data collection plan with header fields for Project Name, Date, and Prepared by, plus twelve blank rows with columns for ID/#, Performance Measure, Operational Definition, Data Source & Location, Sample Size, Who Will Collect the Data, When the Data Will Be Collected, How the Data Will Be Collected, How the Data Will Be Used, and Additional Data to Be Collected at the Same Time.]

Tree Diagram Template
[Template placeholder: a tree diagram breaking an Objective (Vision) into Primary Means (Long-Term), Secondary Means (Short-Term), Tertiary Means (Measures), and Fourth-Level Means (Targets), with blank data cells at each level.]

CTQ (Critical to Quality) Tree

Definition/Purpose: Translates the customer's language (the voice of the customer, or VOC) into a measurable specification so you can tell whether or not the CTQ has been met. Used in the Define phase.

Instructions:
Use the blank tree diagram to translate a customer need from your project to a CTQ requirement. For each need, determine what that would mean to the customer. The answer becomes a driver toward the CTQ. Keep asking the same question – ‘what would that mean’ – until you reach a point where it would be absurd to continue. That is the CTQ.

Example:

· “Good service” means “knowledgeable representatives”

· “Knowledgeable representatives” means the answers they give are correct

· It would be absurd to ask what “correct answers” mean, so stop at “correct answers” as a CTQ

[Blank CTQ tree: Needs (general, hard to measure) flow to Drivers, which flow to CTQs (specific, easy to measure).]

Written 6/05
Revised 6/06

Source: CORM Website
First Published: July 2005

Example tree diagram: Orders Consistently Late Last Quarter

Communications Issues
  Sales & Marketing
    Fail to alert when price changes may affect volume
    Inconsistent adherence to due dates
    Fail to check production schedule before promising product
  Manufacturing
    Fail to keep production schedule updated
    Fail to keep inventory updated
    Fail to communicate unscheduled equipment down-time

Equipment Issues
  Equipment Breakdown
    Inconsistent adherence to maintenance dates
    Equipment operated outside of specifications
    Old equipment, due to be replaced, not operating at peak capacity

External Factors
  Major Supplier Filed for Bankruptcy
    Just-in-time inventory system failed
    Lack of inventory affects 60 orders
    New supplier overloaded with new clients
