This is a quick guide to process characterization parameters starting with the statistical aspect of control and moving into process control and chart evaluation.
The material is presented in three sections:
1. Statistics: F and T Test: Are your data sets from the same data population?
2. Process Control: UCL and LCL: Is your product within specified limits?
3. Process Excellence: Cp and Cpk: Is your process staying within range?
The F Test and T Test are two methods of testing two sets of data against the argument that they are representative subsets of one larger data population, and as such are not independent. Both the F and T test return a number between 0 and 1 that indicates the probability that the argument is true: 1 means these numbers are certainly from the same data population, and 0 means these numbers are totally independent. We will use the data in Chart 1 as an example for our calculations.
Chart 1
           Sample 1   Sample 2   Sample 3   Sample 4
           Control    EXP 1      EXP 2      EXP 3
           100        104        121        101
           100        103        121        101
           100        103        120        100
           100        103        120        100
           100        103        120        100
           105        105        125        106
           105        105        125        106
           105        105        125        106
           105        105        125        106
           105        105        125        106
           110        107        129        110
           110        107        131        110
           110        107        131        110
           110        107        130        110
           110        107        130        110
Average    105.0      105.1      125.2      105.5
Variance   17.86      2.64       17.46      16.70
F Test     N/A        0.001      0.967      0.902
T Test     N/A        0.955      0.000      0.761
S.D.       4.23       1.62       4.18       4.09
UCL        117.68     109.94     137.73     117.72
LCL        92.32      100.19     112.67     93.21
F Test:
Determines the independence of the variances of two sample populations.
Population size: Minimum 10, Process Minimum 30, more is better.
Excel Syntax: =ftest(range1,range2)
The F Test is a comparative test, along with the T Test, and cannot be used to characterize process conditions. It is used to test experimental runs to find increases or decreases in variance when compared to a control. Notice that the F Test returns an absolute value, so it will not tell you whether the variance has increased or decreased from the control, but only that it is not the same.
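As a sketch of what =ftest() computes, the two-tailed F probability can be reproduced in Python, here applied to the Control and EXP 1 columns of Chart 1. This assumes scipy is available; the `ftest` helper is just an illustration of the calculation, not an Excel-supplied function.

```python
# Rough Python equivalent of Excel's =FTEST(range1,range2), applied to the
# Control and EXP 1 columns of Chart 1. Assumes scipy is installed.
from statistics import variance
from scipy.stats import f

control = [100] * 5 + [105] * 5 + [110] * 5
exp1 = [104, 103, 103, 103, 103] + [105] * 5 + [107] * 5

def ftest(a, b):
    """Two-tailed probability that both samples share the same variance."""
    va, vb = variance(a), variance(b)
    if va < vb:  # put the larger variance on top so the F ratio is >= 1
        va, vb, a, b = vb, va, b, a
    stat = va / vb
    return 2 * f.sf(stat, len(a) - 1, len(b) - 1)

print(ftest(control, exp1))  # very small -- Chart 1 reports 0.001
```

Note that the probability is the same whichever sample is passed first, which is the same direction-blindness described above.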
T Test:
Determines the independence of the means of two sample populations.
Population size: Minimum 10, Process Minimum 30, more is better.
Excel Syntax: =ttest(range1,range2,#,#)
The first # is 1 or 2, depending on the number of “tails.” I cannot think of, nor can I find, a good example of when a “one tail” analysis is helpful, so I always use 2. The second # is the type of test to perform. We are concerned primarily with either 2 (equal variance) or 3 (unequal variance). Type 1 is for a sample that has undergone a process, where we want to compare matched values: for example, a board that has been tested for hardness, rigidized, and tested again.
The T Test is the complementary function to the F Test. It will tell you what the odds are of two samples of data actually being part of a larger set of data with the same overall mean. It will not indicate anything about variance, and it is a comparative test, so it will not work for process benchmarking.
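The three =TTEST() types described above map onto scipy functions (again assuming scipy is available), shown here on the Control and EXP 1 columns of Chart 1, with two tails in every case:

```python
# The three =TTEST(range1,range2,2,type) "types" expressed with scipy,
# all two-tailed. Data: the Control and EXP 1 columns of Chart 1.
from scipy import stats

control = [100] * 5 + [105] * 5 + [110] * 5
exp1 = [104, 103, 103, 103, 103] + [105] * 5 + [107] * 5

p_type1 = stats.ttest_rel(control, exp1).pvalue                   # type 1: paired
p_type2 = stats.ttest_ind(control, exp1).pvalue                   # type 2: equal variance
p_type3 = stats.ttest_ind(control, exp1, equal_var=False).pvalue  # type 3: unequal variance

print(round(p_type2, 3))  # close to the 0.955 listed in Chart 1
```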
Ideally, if you were to evaluate an experimental material as a replacement for a standard raw material, you would want both the F Test and the T Test results to be as high as possible. In reality, they will almost never equal 1, so as a rule of thumb a result of 0.95 or higher indicates the argument is true. A direct replacement would be indicated by F Test and T Test results of 0.95 or higher on both tests. Chart 1 has some completely made-up data to show as an example.
The Control column indicates a sampling of data from a current process. The average value of the measured variable is 105.0, with a variance of roughly 17.9. A change was made, and new sample data was recorded as EXP 1. The average is slightly higher at 105.1, but the variance has dropped to roughly 2.6. The F Test and T Test values are summarized below the column for EXP 1. As you can see, the T Test indicates a good match (0.955), but the F Test indicates that the variances are radically different, so EXP 1 is not a “replacement.”
EXP 2 has roughly the same variance as the control, but the mean is skewed higher to 125.2. This results in an F Test value of 0.967 but a T Test value of 0.000, indicating that although the variances can be considered the same, the means are too far apart to be from the same data source.
The last set of data, EXP 3, represents a series that cannot be considered to be from the same data population as the control by either the F Test or the T Test, with values of 0.902 and 0.761 respectively. This data does, however, fit within our process and is in fact acceptable as a replacement, based on the next section.
Process Control is more concerned with limiting perturbation in the system (from our process to our customers) than with statistical independence. Ideally, we can do anything to our process, and as long as the customer sees material within the range we have mutually agreed upon, everything is okay.
This agreed upon range is bounded by the Upper Specification Limit (USL) and the Lower Specification Limit (LSL). These may be arbitrarily set or they may be based on process capability. Usually the spec limits are arbitrarily set.
What our process is actually capable of doing is described by the Upper Control Limit and the Lower Control Limit. These are derived statistically from qualifying measurements of the product after it has gone through the process. As long as the Control Limits are within the range of the Specification Limits, life is good.
To calculate the control limits, we need to get the average and standard deviation of our target data.
Average:
Calculates the arithmetic mean of a series of data.
Population size: Minimum 2, Process Minimum 30
Excel syntax: =average(range1)
Standard Deviation:
Calculates the deviation from the mean of a series of data.
Population size: Minimum 3, Process Minimum 30
Excel syntax: =stdev(range1)
There are about six different versions of the standard deviation function in Excel. Stdev() treats the data series as a sample of a larger population, while Stdevp() treats the series as the entire population. Stdev() returns a slightly higher value than Stdevp(), although at larger numbers of data samples (more than 30) the two agree to several significant digits. The Excel formula Var() calculates the variance of the series, which is the value of Stdev() squared. I prefer Stdev(). Excel documents a maximum of 30 arguments for Stdev(), but passing more does not seem to affect the function; if there is a statistical reason for that limit, I do not know it.
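To illustrate the distinction, Python's statistics module has the same three flavors; stdev, pstdev, and variance play the roles of Stdev(), Stdevp(), and Var(). Here they are applied to the Control column of Chart 1:

```python
# Stdev() vs Stdevp() vs Var(), using Python's statistics module as a
# stand-in for the Excel functions. Data: the Control column of Chart 1.
from statistics import stdev, pstdev, variance

control = [100] * 5 + [105] * 5 + [110] * 5

s = stdev(control)     # sample standard deviation, like Stdev()  -> 4.23
sp = pstdev(control)   # population standard deviation, like Stdevp()
v = variance(control)  # sample variance, like Var(); equals stdev squared

print(round(s, 2), round(sp, 2), round(v, 2))
```

The sample value (4.23, the S.D. listed for the Control column in Chart 1) comes out slightly higher than the population value, as described above.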
UCL:
Upper Control Limit, the value that a controlled process should not exceed 99.7% of the time.
Population Size: Process Minimum 30, more is better
Excel Syntax: =average(range1)+3*stdev(range1)
Excel does not have a built-in UCL or LCL formula, nor is one in the Analysis ToolPak, so it has to be entered manually. The formula is the average plus three times the standard deviation. This represents the maximum level we can expect the measured data to reach 99.7% of the time with an in-control process. It is generally accepted as the upper performance limit of a process, even though the process will occasionally exceed that level.
LCL:
Lower Control Limit, the value that a controlled process should not go below 99.7% of the time.
Population Size: Process Minimum 30, more is better.
Excel Syntax: =average(range1)-3*stdev(range1)
This represents the minimum level we can expect the measured data to reach 99.7% of the time with an in-control process. It is generally accepted as the lower performance limit of a process, even though the process will occasionally fall below that level.
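Putting the two formulas together in Python (statistics module only), again on the Control column of Chart 1, reproduces the UCL and LCL rows of that table:

```python
# UCL and LCL as defined above: average plus/minus three standard deviations.
# Data: the Control column of Chart 1.
from statistics import mean, stdev

control = [100] * 5 + [105] * 5 + [110] * 5

avg = mean(control)
sd = stdev(control)
ucl = avg + 3 * sd   # =average(range1)+3*stdev(range1)
lcl = avg - 3 * sd   # =average(range1)-3*stdev(range1)

print(round(ucl, 2), round(lcl, 2))  # Chart 1 lists 117.68 and 92.32
```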
I’d like to use the most recent samples from Harbison Walker that we rigidized twice to get to a weight range of 2.00 to 2.69 pounds. The data is listed here as Chart 2.
Chart 2: Harbison Walker Weight Results
2.42  2.43  2.50  2.53  2.48  2.51  2.53
2.46  2.44  2.53  2.43  2.39  2.57  2.28
2.48  2.34  2.51  2.27  2.41  2.48  2.47
2.38  2.40  2.50  2.47  2.38  2.48  2.30
2.38  2.44  2.38  2.41  2.34  2.38  2.41
2.40  2.47  2.33  2.39  2.40  2.47  2.33
2.39  2.40  2.42  2.43  2.45  2.36  2.48
2.28  2.35  2.35  2.44  2.32  2.42  2.36
2.36  2.55  2.38  2.46  2.46  2.47  2.36
2.38  2.46  2.42  2.36  2.46  2.34  2.41
2.40  2.55  2.35  2.42  2.45  2.37  2.37

Average             2.42
Standard Deviation  0.067
UCL                 2.62
LCL                 2.22
If we calculate a target (assuming a normal distribution) from the agreed-upon Upper and Lower Specification Limits of 2.00 and 2.69, we are aiming for an average of 2.345. As you can see, our average is actually 2.42. Compared with the data set from which the USL and LSL were calculated, the data in Chart 2 has an F Test value of 0.00000238 and a T Test value of 0.0000000127. Using those tests as criteria, this most recent set of Harbison Walker material would be “bad.”
However, using process capability, we can see that the UCL and LCL calculated from the data in Chart 2 fit entirely within the USL and LSL. This can be seen graphically in Chart 3.
Chart 3: Harbison Walker Control Chart
Yes, that chart is a mess, but it is really two charts in one. The first chart, which we'll call our Specification Chart, shows what we say we can provide for board weights. It is represented by the red line (the target weight of 2.345) and the upper and lower yellow lines (our USL and LSL). As long as we stay within this range, our material is considered acceptable.
The second chart is represented by the black line, the average of all the data points (the jagged blue line) for this most recent run. The process capability we demonstrated during that particular run is shown by the light blue lines, our UCL and LCL, which mark where we can expect 99.7% of the data points from this most recent run to reside.
This information shows that while our average went up to about 2.42, our variance shrank quite a bit. From a process point of view, this is excellent. The mean did not go up far enough to drive the upper control limit beyond the upper specification limit, so no more material will be out of specification than before. In fact, the upper control limit is still appreciably lower than the spec limit, which suggests that there may be fewer unacceptable parts.
The reason the F Test and the T Test show this set is “unacceptable” while the process indicators show that it is acceptable is that the F Test and T Test only check for “sameness.” The variance of the most recent run is much lower than that of the original samples, which results in a failed F Test. The mean is skewed upward, failing the T Test. If the variance of the most recent run were larger than the variance of the original sample, then from a process point of view it would be unacceptable; but that is not the case here, and the statistical tests cannot differentiate the two situations.
Cp is the ratio of how much processing range you are allowed to how much processing range you are using. The higher the number, the less of your allowable range you are using. This can be a nice tool to indicate whether you need to make process improvements, or whether you can fine-tune your specification limits or possibly target a lower value while remaining within the same specification limits.
Cp:
Process Capability is an indicator of how well a process is staying within
its allowable range.
Population size: Process minimum 30, more is better
Excel syntax: =(USL-LSL)/(6*stdev(range1))
A Cp value of 1.0 indicates that the USL and LSL coincide with the UCL and LCL. Another way of saying this is that the process is using 100% of its allowable range. This assumes a normal distribution around the mean. Typically a Cp value of 1.33 is desired, indicating that the process is using 75% of its allowable range.
k:
Centering is a correction factor applied to Cp to make up for a non-centered distribution.
Population size: Process Minimum 30, more is better
Excel Syntax: =abs(((USL+LSL)/2)-average(range1))/(0.5*(USL-LSL))
The closer this value is to zero, the more centered the distribution is.
Cpk:
Demonstrated Excellence is Cp with a correction for non-centered data.
Population Size: Process Minimum 30, more is better
Excel Syntax: =(1-k)*Cp, or
=(1-abs(((USL+LSL)/2)-average(range1))/(0.5*(USL-LSL)))*(USL-LSL)/(6*stdev(range1))
This takes the Cp for the ranges above and below the target mean and reports whichever side has the highest usage of the allowable range. It is a “worst case scenario” indicator of process capability, and as such is usually the preferred process capability index to report.
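Plugging the rounded Chart 2 summary values (mean 2.42, standard deviation 0.067, USL 2.69, LSL 2.00) into these three formulas gives results very close to the figures discussed next; because the inputs are rounded, the last digit differs slightly.

```python
# Cp, k, and Cpk from the formulas above, using the rounded Chart 2 summary
# values. Rounding the inputs shifts the last digit relative to the text's
# quoted figures (1.71, ~0.2, 1.35).
usl, lsl = 2.69, 2.00   # agreed specification limits
avg, sd = 2.42, 0.067   # Chart 2 average and standard deviation

cp = (usl - lsl) / (6 * sd)                           # =(USL-LSL)/(6*stdev(range1))
k = abs((usl + lsl) / 2 - avg) / (0.5 * (usl - lsl))  # centering correction
cpk = (1 - k) * cp                                    # =(1-k)*Cp

print(round(cp, 2), round(k, 2), round(cpk, 2))  # 1.72 0.22 1.34
```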
Using the data from Chart 2 (Harbison Walker), we can calculate a Cp of 1.71. This is a fantastic value, suggesting that the process is using only around 60% of its allowable range. However, we know from looking at the control chart that the mean is skewed, so this number isn't telling us the whole story. More of the allowable range is being used above the target mean than below it. To show that with process capability, we need to correct for this skew by applying the k factor.
In this case, k is roughly 0.22. For k less than 0.05, Cp alone is usually acceptable for process characterization. Since we have to calculate k anyway to check for centering, I would always report Cpk.
Cpk in this instance is 1.35. Cpk is interpreted in the same fashion as Cp, so this indicates that we are using about 75% of our allowable range, even though it is mostly on one side of the target mean. This is slightly better than the 1.33 target; hence, the process is in demonstrable control.