ENBIS7 in Dortmund
24 – 26 September 2007
My abstracts
The following abstracts have been accepted for this event:

Some thoughts about the use of kriging and smoothing techniques for metamodelling purposes
Authors:
Marco Ratto and Andrea Pagano (Euroarea Economy Modelling Centre, Ispra, Italy)
Primary area of focus / application:
Submitted at 22-Jun-2007 14:42 by Marco Ratto
Accepted
In this paper we discuss the problem of metamodelling using kriging and smoothing techniques. Both methodologies will be applied to a number of test cases in order to highlight the pros and cons of each approach. The kriging approach will be based on the Gaussian Emulation Machine. The smoothing approach is carried out using nonparametric techniques (state-dependent parameter modelling).
References:
Oakley, J. and A. O'Hagan (2004). Probabilistic sensitivity analysis of complex models: a Bayesian approach. J. Royal Stat. Soc. B 66, 751-769.
Ratto, M., S. Tarantola, A. Saltelli, and P. C. Young (2006). Improved and accelerated sensitivity analysis using State Dependent Parameter models. Technical Report EUR 22251 EN, ISBN 92-79-02036-6, Joint Research Centre, European Commission.

Planning Dose Response Curve Experiments with insufficient observations per individual
Authors:
Winfried Theis, Henk van der Knaap
Primary area of focus / application:
Submitted at 22-Jun-2007 14:44 by
Accepted
In food research it is often not feasible, or ethically permissible, to take enough measurements from the subjects under observation. This is especially true for trials involving children. Therefore we tried to find an optimal way to spread an insufficient number of observations per individual over time which still enables us to estimate a dose-response profile over time.
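A toy version of the scheduling idea: rotate subjects through the k-subsets of the measurement grid so that every time point is observed equally often, then pool across subjects to fit the profile. This is illustrative only — the abstract does not specify the authors' actual optimal design, and the grid and subset size here are invented.

```python
import itertools

def assign_timepoints(n_subjects, timepoints, k):
    """Cycle subjects through all k-subsets of the time grid so each
    time point appears in the same number of subject schedules
    (a balanced incomplete design in spirit)."""
    subsets = list(itertools.combinations(timepoints, k))
    return [subsets[i % len(subsets)] for i in range(n_subjects)]

# 12 subjects, 4 candidate time points (weeks), only 2 visits each
grid = (0, 4, 8, 12)
schedule = assign_timepoints(n_subjects=12, timepoints=grid, k=2)

# How often is each time point observed across all subjects?
counts = {t: sum(t in s for s in schedule) for t in grid}
```

With 12 subjects cycling through the six 2-subsets of four time points, every time point is measured the same number of times, so the pooled data support estimation of the profile at every grid point.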

Analytical method validation based on the total error concept. Comparison of alternative statistical approaches
Authors:
Bernadette Govaerts, Myriam Maumy, Walthère Dewé, Bruno Boulanger
Primary area of focus / application:
Submitted at 22-Jun-2007 14:52 by
Accepted
In pharmaceutical industries and laboratories, it is crucial to continuously control the validity of the analytical methods used to follow the products' quality characteristics. This validity must be assessed at two levels. The “pre-study” validation aims at demonstrating beforehand that the method is able to achieve its objectives. The “in-study” validation is intended to verify, by inserting QC samples in routine runs, that the method remains valid over time. At these two levels, the total error approach considers a method as valid if a sufficient proportion of analytical results are expected to lie in a given interval [-a, a] around the (unknown) nominal value.
This paper presents and compares four approaches, based on this total error concept, for checking the validity of a measurement method at the pre-study level. They can be classified into two categories. For the first, a lower confidence bound for the probability p of a result lying within the acceptance limits is computed and compared to a given acceptance level. Maximum likelihood and delta methods are used to estimate the quality level p and the corresponding estimator variance. Two approaches are then proposed to derive the confidence bound: the asymptotic maximum likelihood approach and a method due to Mee. The second category of approaches checks whether a tolerance interval for hypothetical future measurements lies within the predefined acceptance limits [-a, a]. Beta-expectation and beta-gamma-content tolerance intervals are investigated and compared in this context.
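The tolerance-interval variant can be sketched as follows. This is a deliberate large-sample simplification — a normal quantile replaces the exact Student-t factor of the beta-expectation interval — and the acceptance limit a and the example results are invented for illustration, not taken from the paper.

```python
from statistics import NormalDist, mean, stdev

def beta_expectation_interval(results, nominal, beta=0.95):
    """Approximate beta-expectation tolerance interval for the bias
    (result - nominal). Uses a normal quantile as a large-sample
    stand-in for the exact Student-t prediction factor."""
    n = len(results)
    bias = [r - nominal for r in results]
    m, s = mean(bias), stdev(bias)
    z = NormalDist().inv_cdf(1 - (1 - beta) / 2)
    half = z * s * (1 + 1 / n) ** 0.5
    return m - half, m + half

def method_valid(results, nominal, a, beta=0.95):
    """Total error decision rule: the method is declared valid when the
    whole tolerance interval lies inside the acceptance limits [-a, a]."""
    lo, hi = beta_expectation_interval(results, nominal, beta)
    return -a <= lo and hi <= a

# Invented pre-study results for a sample with nominal value 100
runs = [99.2, 100.4, 99.8, 100.1, 99.6, 100.3, 99.9, 100.2]
ok = method_valid(runs, nominal=100.0, a=2.0)
```

Tightening the acceptance limit (smaller a) with the same data eventually pushes the tolerance interval outside [-a, a] and the method is rejected, which is exactly the behaviour the decision rule is meant to have.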

Sizing Mixture Designs
Authors:
Pat Whitcomb and Gary W. Oehlert
Primary area of focus / application:
Submitted at 22-Jun-2007 14:56 by Pat Whitcomb
Accepted
Newcomers to mixture design find it difficult to choose appropriate designs with adequate precision. Standard power calculations (used for factorial designs) are not of much use due to the collinearity present in mixture designs. However, when using the fitted mixture model for drawing contour maps or 3D surfaces, making predictions, or performing optimization, it is important that the model adequately represent the response behavior over the region of interest. The emphasis is on the ability of the design to support modeling certain types of behavior (linear, quadratic, etc.); we are not generally interested in the individual model coefficients. Therefore, power to detect individual model parameters is not a good measure of what we are designing for. A discussion and pertinent examples will show how the precision of the fitted surface relative to the noise is a critical criterion in design selection. In this presentation, we introduce a process to determine whether a particular mixture design has adequate precision; attendees will take away a strategy for judging whether a design's precision is appropriate for their modeling needs.
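One way to make the "precision of the fitted surface" criterion concrete is to compute the scaled prediction variance of a candidate design under the intended model. The sketch below does this for a three-component simplex-centroid design and a Scheffé quadratic model; the design and evaluation points are illustrative choices, not the presenters' examples.

```python
import numpy as np

def scheffe_quadratic(x):
    """Scheffé quadratic model terms for a 3-component mixture
    (no intercept, by construction of mixture models)."""
    x1, x2, x3 = x
    return [x1, x2, x3, x1 * x2, x1 * x3, x2 * x3]

# Simplex-centroid design: vertices, edge midpoints, overall centroid
design = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
], dtype=float)

X = np.array([scheffe_quadratic(p) for p in design])  # model matrix
XtX_inv = np.linalg.inv(X.T @ X)

def scaled_pred_variance(point):
    """n * Var(yhat(x)) / sigma^2 -- a standard precision criterion for
    comparing designs independently of the noise level."""
    f = np.array(scheffe_quadratic(point))
    return len(design) * float(f @ XtX_inv @ f)

v_centroid = scaled_pred_variance([1/3, 1/3, 1/3])
v_vertex = scaled_pred_variance([1, 0, 0])
```

Mapping this quantity over the simplex (rather than computing power for single coefficients) shows directly where the fitted surface will be precise relative to the noise, which is the selection criterion the presentation argues for.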

Process Capability Plots Revisited
Authors:
Kerstin Vännman
Primary area of focus / application:
Submitted at 22-Jun-2007 15:11 by
Accepted
To assess the capability of a manufacturing process from a random sample, it is common to apply confidence intervals or hypothesis tests for a process capability index. Alternatively, an estimated process capability plot or a safety region in a process capability plot can be used. Usually a process is defined to be capable if the capability index exceeds a stated threshold value, e.g. Cpm > 4/3. This inequality can be expressed graphically as a capability region in the plane defined by the process parameters, giving a process capability plot. A graphical decision procedure, taking into account the uncertainty introduced by the random sample, is then obtained either by estimating this capability region in a suitable way or by plotting a safety region, similar to a confidence region for the process parameters, in the process capability plot. The estimated capability region and the safety region are constructed so that they can be used, in a simple graphical way, to draw conclusions about the capability at a given significance level. With these methods it is also possible to monitor several characteristics of a process in the same plot. Under the assumption of normality we derive a new elliptic safety region and compare it, with respect to power, with the previously derived rectangular and circular safety regions for the capability index Cpm. We also present some new results regarding the estimated capability region for the capability index Cpk. Examples are presented.
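For a concrete picture of the capability region: with Cpm = (USL - LSL) / (6 * sqrt(sigma^2 + (mu - T)^2)), the region Cpm > 4/3 is the inside of a semicircle in the (mu, sigma) half-plane centred at the target T. A minimal sketch, with invented specification limits:

```python
from math import sqrt

def cpm(mu, sigma, lsl, usl, target):
    """Process capability index Cpm for given process parameters."""
    return (usl - lsl) / (6 * sqrt(sigma**2 + (mu - target)**2))

def in_capability_region(mu, sigma, lsl, usl, target, threshold=4/3):
    """True if the point (mu, sigma) lies inside the Cpm > threshold
    region of the process capability plot. The boundary is the circle
    sigma^2 + (mu - target)^2 = ((usl - lsl) / (6 * threshold))^2."""
    return cpm(mu, sigma, lsl, usl, target) > threshold

# Specification [8, 12] with target 10: boundary radius is 4/8 = 0.5
on_target = in_capability_region(10.0, 0.3, 8.0, 12.0, 10.0)   # inside
off_target = in_capability_region(10.6, 0.3, 8.0, 12.0, 10.0)  # outside
```

The decision procedures in the abstract then amount to checking whether an estimated region or safety region, rather than a single point estimate, fits inside this capability region.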

A Control Chart for High Quality Processes with a FIR Property Based on the Run Length of Conforming Products
Authors:
S. Bersimis, M.V. Koutras, and P.E. Maravelakis (University of Piraeus, Piraeus, Greece)
Primary area of focus / application:
Submitted at 22-Jun-2007 15:24 by
Accepted
The control chart based on the geometric distribution, generally known as the geometric control chart, has been shown to be competitive with the classic p-chart (or np-chart) for monitoring the proportion of nonconforming items, especially in high-quality manufacturing environments. In this paper we present a new type of geometric chart for attribute data that is based on the run length of conforming items. Specifically, after reviewing the control charting procedures that use the number of conforming units between two consecutive nonconforming units, we present the basic principles for designing and implementing the new control chart. This new control chart has an appealing performance.
Key Words: Statistical Process Control, Control Charts, Conforming Run Length, Geometric Control Charts, Shewhart Control Charts, Runs Rules, Scans Rules, Patterns, Markov chain, Average Run Length.
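The run-length idea behind such charts can be illustrated with simple probability limits for the conforming run length Y, which is geometric under the in-control nonconforming rate p0, with P(Y <= y) = 1 - (1 - p0)^y. The rounding below is one common convention and only approximates the nominal false-alarm rate, since Y is discrete; the values of p0 and alpha are invented, and this is the generic geometric chart, not the authors' new FIR variant.

```python
from math import log, ceil, floor

def crl_limits(p0, alpha=0.0027):
    """Approximate probability limits for a conforming-run-length chart.

    Y = number of items inspected until the next nonconforming one.
    A short run (Y < LCL) signals an increased nonconforming rate;
    a long run (Y > UCL) signals process improvement.
    Derived from P(Y <= y) = 1 - (1 - p0)**y."""
    lcl = ceil(log(1 - alpha / 2) / log(1 - p0))
    ucl = floor(log(alpha / 2) / log(1 - p0))
    return lcl, ucl

# High-quality process: 1 nonconforming item per 1000 on average
lcl, ucl = crl_limits(p0=0.001)
```

For very small p0 the lower limit sits at just a few items, which is why plain geometric charts are slow to signal and why refinements such as runs rules or a fast initial response (FIR) feature, as in the paper, are of interest.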