Overview of all Abstracts

The following PDF contains the abstract book as it will be handed out at the conference. It is provided here for browsing and possible later reference. All abstracts as PDF

My abstracts

 

The following abstracts have been accepted for this event:

  • Some thoughts about the use of kriging and smoothing techniques for metamodelling purposes

    Authors: Marco Ratto and Andrea Pagano (Euro-area Economy Modelling Centre, Ispra, Italy)
    Primary area of focus / application:
    Submitted at 22-Jun-2007 14:42 by Marco Ratto
    Accepted
    In this paper we discuss the problem of metamodelling using kriging and smoothing techniques. Both methodologies will be applied to a number of test cases in order to highlight the pros and cons of each approach. The kriging approach will be based on the Gaussian Emulation Machine. The smoothing approach is carried out using non-parametric techniques (state-dependent parameter modelling). (An illustrative kriging sketch follows the references below.)

    References:

    Oakley, J. and A. O'Hagan (2004). Probabilistic sensitivity analysis of complex models: a Bayesian approach. Journal of the Royal Statistical Society, Series B 66, 751-769.

    Ratto, M., S. Tarantola, A. Saltelli, and P. C. Young (2006). Improved and accelerated sensitivity analysis using State Dependent Parameter models. Technical Report EUR 22251 EN, ISBN 92-79-02036-6, Joint Research Centre, European Commission.
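    A hedged illustration, not the authors' software: neither the Gaussian Emulation Machine nor the SDP toolbox is reproduced here. Assuming scikit-learn is available, a minimal kriging metamodel of a made-up two-input test function can be sketched as follows.

      # Minimal kriging (Gaussian-process) metamodel of a toy "simulator".
      # The test function, design size and kernel are illustrative assumptions only.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      def simulator(x):
          """Toy deterministic function standing in for an expensive simulator."""
          return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

      rng = np.random.default_rng(0)
      X_train = rng.uniform(0.0, 1.0, size=(30, 2))   # small training design
      y_train = simulator(X_train)

      gp = GaussianProcessRegressor(kernel=ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2]),
                                    normalize_y=True)
      gp.fit(X_train, y_train)

      X_new = rng.uniform(0.0, 1.0, size=(5, 2))
      mean, sd = gp.predict(X_new, return_std=True)   # emulator prediction and its uncertainty
      print(np.c_[mean, sd, simulator(X_new)])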
  • Planning Dose Response Curve Experiments with insufficient observations per individual

    Authors: Winfried Theis, Henk van der Knaap
    Primary area of focus / application:
    Submitted at 22-Jun-2007 14:44 by
    Accepted
    In food research it is often not feasible, or ethically permissible, to take enough measurements from the subjects under observation. This is especially true for trials where children are involved. Therefore we tried to find an optimal way to spread an insufficient number of observations per individual over time that still enables us to estimate a dose-response profile over time.
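    The abstract does not state which optimality criterion is used, so purely as an illustration of the planning problem, the sketch below scores sparse per-subject time allocations by a D-criterion for an assumed quadratic response-over-time profile; the candidate times and budget are hypothetical.

      # Illustrative only: pick k measurement occasions per subject by D-optimality
      # for a quadratic time profile. Candidate times, k and the model are assumptions.
      import itertools
      import numpy as np

      candidate_times = np.array([0, 1, 2, 4, 8, 12], dtype=float)  # hypothetical weeks
      k = 3                                                         # affordable observations per subject

      def d_criterion(times):
          X = np.column_stack([np.ones_like(times), times, times ** 2])  # quadratic profile
          return np.linalg.det(X.T @ X)

      best = max(itertools.combinations(candidate_times, k),
                 key=lambda t: d_criterion(np.array(t)))
      print("most informative time points per subject:", best)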
  • Analytical method validation based on the total error concept. Comparison of alternative statistical approaches

    Authors: Bernadette Govaerts, Myriam Maumy, Walthère Dewé, Bruno Boulanger
    Primary area of focus / application:
    Submitted at 22-Jun-2007 14:52 by
    Accepted
    In pharmaceutical industries and laboratories, it is crucial to continuously control the validity of the analytical methods used to follow product quality characteristics. Validity must be assessed at two levels. The “pre-study” validation aims at demonstrating beforehand that the method is able to achieve its objectives. The “in-study” validation is intended to verify, by inserting QC samples in routine runs, that the method remains valid over time. At both levels, the total error approach considers a method valid if a sufficient proportion of analytical results is expected to lie in a given interval [-a,a] around the (unknown) nominal value.

    This paper presents and compares four approaches, based on this total error concept, for checking the validity of a measurement method at the pre-study level. They can be classified into two categories. For the first, a lower confidence bound for the probability p of a result lying within the acceptance limits is computed and compared to a given acceptance level. Maximum likelihood and delta methods are used to estimate the quality level p and the corresponding estimator variance. Two approaches are then proposed to derive the confidence bound: the asymptotic maximum likelihood approach and a method due to Mee. The second category of approaches checks whether a tolerance interval for hypothetical future measurements lies within the predefined acceptance limits [-a,a]. Beta-expectation and beta-gamma-content tolerance intervals are investigated and compared in this context.
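    As a sketch of the tolerance-interval route only, the snippet below computes a beta-expectation tolerance interval for normally distributed results and compares it with the acceptance limits [-a, a]; the data, limits and choice of interval are illustrative assumptions, not necessarily the procedures compared in the paper.

      # Pre-study acceptance check via a beta-expectation tolerance interval (illustrative).
      import numpy as np
      from scipy import stats

      results = np.array([-0.8, 0.4, 1.1, -0.3, 0.9, 0.2, -0.5, 0.7])  # hypothetical relative errors
      a = 2.0          # acceptance limit
      beta = 0.90      # expected proportion of future results inside the interval

      n = results.size
      m, s = results.mean(), results.std(ddof=1)
      half_width = stats.t.ppf(1 - (1 - beta) / 2, df=n - 1) * s * np.sqrt(1 + 1 / n)
      low, high = m - half_width, m + half_width

      print(f"beta-expectation tolerance interval: [{low:.2f}, {high:.2f}]")
      print("method accepted:", (low >= -a) and (high <= a))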
  • Sizing Mixture Designs

    Authors: Pat Whitcomb and Gary W. Oehlert
    Primary area of focus / application:
    Submitted at 22-Jun-2007 14:56 by Pat Whitcomb
    Accepted
    Newcomers to mixture design find it difficult to choose appropriate designs with adequate precision. Standard power calculations (used for factorial designs) are not of much use due to the collinearity present in mixture designs. However, when using the fitted mixture model for drawing contour maps, 3D surfaces, making predictions, or performing optimization, it is important that the model adequately represents the response behavior over the region of interest. Emphasis is on the ability of the design to support modeling certain types of behavior (linear, quadratic, etc.); we are not generally interested in the individual model coefficients. Therefore, power to detect individual model parameters is not a good measure of what we are designing for. A discussion and pertinent examples will show attendees how the precision of the fitted surface relative to the noise is a critical criterion in design selection. In this presentation, we introduce a process to determine whether a particular mixture design has adequate precision for DOE needs. Attendees will take away a strategy for determining whether a particular mixture design has precision appropriate for their modeling needs.
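    One hedged way to look at "precision of the fitted surface relative to the noise" is the relative prediction variance x'(X'X)^-1 x of the assumed model over the simplex; the three-component simplex-centroid design and quadratic Scheffé model below are illustrative and not the sizing procedure presented in the talk.

      # Relative prediction variance of a quadratic Scheffe model over the simplex.
      # Design (3-component simplex-centroid) and model are illustrative assumptions.
      import numpy as np

      design = np.array([
          [1, 0, 0], [0, 1, 0], [0, 0, 1],
          [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
          [1/3, 1/3, 1/3],
      ], dtype=float)

      def quad_scheffe(x):
          x1, x2, x3 = x
          return np.array([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

      X = np.array([quad_scheffe(row) for row in design])
      XtX_inv = np.linalg.inv(X.T @ X)

      pts = np.random.default_rng(1).dirichlet(np.ones(3), size=1000)  # random simplex points
      F = np.array([quad_scheffe(p) for p in pts])
      var_ratio = np.einsum('ij,jk,ik->i', F, XtX_inv, F)   # x'(X'X)^-1 x = Var(prediction)/sigma^2
      print("prediction variance / sigma^2: mean %.2f, max %.2f" % (var_ratio.mean(), var_ratio.max()))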
  • Process Capability Plots Revisited

    Authors: Kerstin Vännman
    Primary area of focus / application:
    Submitted at 22-Jun-2007 15:11 by
    Accepted
    To assess the capability of a manufacturing process from a random sample, it is common to apply confidence intervals or hypothesis tests for a process capability index. Alternatively, an estimated process capability plot or a safety region in a process capability plot can be used. Usually a process is defined to be capable if the capability index exceeds a stated threshold value, e.g. Cpm > 4/3. This inequality can be expressed graphically as a capability region in the plane defined by the process parameters, yielding a process capability plot. Either by estimating this capability region in a suitable way or by plotting a safety region, similar to a confidence region for the process parameters, in the process capability plot, a graphical decision procedure is obtained that takes into account the uncertainty introduced by the random sample. The estimated capability region and the safety region are constructed so that they can be used, in a simple graphical way, to draw conclusions about the capability at a given significance level. With these methods it is also possible to monitor several characteristics of a process in the same plot. Under the assumption of normality we derive a new elliptic safety region and compare it, with respect to power, with the previously derived rectangular and circular safety regions for the capability index Cpm. We also present some new results regarding the estimated capability region for the capability index Cpk. Examples are presented.
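    A minimal sketch of the capability-plot idea for Cpm (the new elliptic safety region is not reproduced here): with Cpm = (USL - LSL) / (6 sqrt(sigma^2 + (mu - T)^2)), the region where Cpm > k is a semicircle in the (mu, sigma) half-plane centred at (T, 0). The specification limits and sample below are invented.

      # Process capability plot for Cpm with an estimated (mean, sd) point (illustrative).
      import numpy as np
      import matplotlib.pyplot as plt

      LSL, USL, T, k = 8.0, 12.0, 10.0, 4/3
      radius = (USL - LSL) / (6 * k)          # boundary of the region where Cpm = k

      theta = np.linspace(0, np.pi, 200)
      plt.plot(T + radius * np.cos(theta), radius * np.sin(theta), label="Cpm = 4/3")
      plt.fill_between(T + radius * np.cos(theta), 0, radius * np.sin(theta), alpha=0.2)

      sample = np.random.default_rng(2).normal(10.1, 0.35, size=50)   # hypothetical process data
      plt.plot(sample.mean(), sample.std(ddof=1), "k*", markersize=12, label="estimate")

      plt.xlabel("process mean")
      plt.ylabel("process standard deviation")
      plt.legend()
      plt.show()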
  • A Control Chart for High Quality Processes with a FIR Property Based on the Run Length of Conforming Products

    Authors: S. Bersimis, M.V. Koutras, and P.E. Maravelakis (University of Piraeus, Piraeus, Greece)
    Primary area of focus / application:
    Submitted at 22-Jun-2007 15:24 by
    Accepted
    The control chart based on the geometric distribution, generally known as the geometric control chart, has been shown to be competitive with the classic p-chart (or np-chart) for monitoring the proportion of nonconforming items, especially in high-quality manufacturing environments. In this paper we present a new type of geometric chart for attribute data that is based on the run length of conforming items. Specifically, after reviewing control charting procedures that use the number of conforming units between two consecutive nonconforming units, we present the basic principles for designing and implementing the new control chart. The new control chart has an appealing performance. (A sketch of the basic conforming-run-length limits follows the key words below.)

    Key Words: Statistical Process Control, Control Charts, Conforming Run Length, Geometric Control Charts, Shewhart Control Charts, Runs Rules, Scans Rules, Patterns, Markov chain, Average Run Length.
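    The new FIR chart itself is not reproduced here; as a baseline, the sketch below computes probability limits for the basic conforming-run-length (CRL) statistic from geometric quantiles, using an assumed in-control fraction nonconforming and false-alarm rate.

      # Basic CRL chart limits: the run length of conforming items between two
      # nonconforming ones is geometric, so limits come from geometric quantiles.
      # p0 and alpha below are illustrative assumptions.
      import numpy as np
      from scipy import stats

      p0 = 0.001        # in-control fraction nonconforming (high-quality process)
      alpha = 0.0027    # Shewhart-like overall false-alarm probability

      lcl = stats.geom.ppf(alpha / 2, p0)         # unusually short run -> deterioration
      ucl = stats.geom.ppf(1 - alpha / 2, p0)     # unusually long run  -> improvement
      print(f"CRL limits: LCL = {lcl:.0f}, UCL = {ucl:.0f}")

      crl = np.random.default_rng(3).geometric(p0, size=20)   # simulated in-control run lengths
      print("signals at runs:", np.where((crl < lcl) | (crl > ucl))[0])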
  • Controlling Correlated Processes with Binomial Marginals

    Authors: Christian H. Weiß
    Primary area of focus / application:
    Submitted at 22-Jun-2007 15:27 by
    Accepted
    Few approaches to the control of autocorrelated attribute data have been proposed in the literature. If the marginal process distribution is binomial, the binomial AR(1) model may be adequate as a realistic and well-interpretable process model. Based on known and newly derived statistical properties of this model, we will develop possible approaches to control such a process. A case study demonstrates the applicability of the binomial AR(1) model to SPC problems and allows us to investigate the performance of the suggested control charts.
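    The chart designs developed in the talk are not reproduced; the sketch below only simulates what is commonly called McKenzie's binomial AR(1) model (binomial marginal, autocorrelation rho at lag one) and applies naive Shewhart limits, with all parameters chosen arbitrarily.

      # Simulate a binomial AR(1) process via binomial thinning and apply naive
      # 3-sigma Shewhart limits that ignore the autocorrelation (for illustration only).
      import numpy as np

      def binomial_ar1(n, p, rho, length, seed=0):
          rng = np.random.default_rng(seed)
          beta = p * (1 - rho)          # thinning probability for the "renewed" part
          alpha = beta + rho            # thinning probability for the survived part
          x = np.empty(length, dtype=int)
          x[0] = rng.binomial(n, p)
          for t in range(1, length):
              x[t] = rng.binomial(x[t - 1], alpha) + rng.binomial(n - x[t - 1], beta)
          return x

      n, p, rho = 25, 0.1, 0.4
      x = binomial_ar1(n, p, rho, length=200)
      sigma = np.sqrt(n * p * (1 - p))
      ucl, lcl = n * p + 3 * sigma, max(0, n * p - 3 * sigma)
      print("points outside the naive limits:", np.sum((x > ucl) | (x < lcl)))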
  • A Bayesian EWMA Method to Detect Jumps at the Start-up Phase of a Process

    Authors: Panagiotis Tsiamyrtzis and Douglas M. Hawkins
    Primary area of focus / application:
    Submitted at 22-Jun-2007 16:18 by
    Accepted
    The start-up phase data of a process are the backbone of traditional SPC charting and testing methods and are usually assumed to be iid observations from the in-control distribution. In this work a new method is proposed to model normally distributed start-up phase data, allowing for serial dependence and bidirectional level shifts of the underlying parameter of interest. The theoretical development is based on a Bayesian sequentially updated EWMA model with Normal mixture errors. The new approach makes use of available prior information and provides a framework for drawing decisions and making predictions online, even with a single observation.
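    For orientation only, here is the classical EWMA recursion with its time-varying control limit, i.e. the non-Bayesian baseline that the proposed model generalizes; the smoothing constant, limits and simulated start-up data (with an artificial level shift) are illustrative assumptions, and the talk's Bayesian mixture EWMA is not shown.

      # Classical EWMA on simulated start-up data (illustrative baseline only).
      import numpy as np

      lam, mu0, sigma = 0.2, 0.0, 1.0
      x = np.random.default_rng(4).normal(mu0, sigma, size=30)
      x[10:] += 1.5                                  # artificial level shift during start-up

      z = mu0
      for t, xt in enumerate(x, start=1):
          z = lam * xt + (1 - lam) * z               # EWMA update
          se = sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
          if abs(z - mu0) > 3 * se:
              print(f"signal at observation {t}, EWMA = {z:.2f}")
              break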
  • Integrating Data and Model Uncertainties in Paint Formulations

    Authors: Marco S. Reis, Pedro M. Saraiva and Fernando P. Bernardo (University of Coimbra, Coimbra, Portugal)
    Primary area of focus / application:
    Submitted at 22-Jun-2007 16:19 by
    Accepted
    Formulations frequently play a key role in rather different industrial applications (adhesives, additives, food, rubber, cosmetics, fertilizers and pesticides, photography, medicines, lubricants, perfumes, plastics, etc.). In spite of their relevance, the usual procedure for addressing such problems is still based upon extensive trial-and-error processes, which are usually quite inefficient and have rather limited success rates. Alternatively, in certain fields deterministic optimization frameworks have been developed that take into account several quality-related product performance criteria, adequately constrained by relationships involving compositions or limits with which some components must comply. Such frameworks, however, neglect any sources of uncertainty and variability that may be present.
    Furthermore, both of the above approaches typically overlook potentially useful information contained in available databases, where data from previous trials are stored and can (and in fact should) be used to improve formulation solutions, namely through the estimation of statistical models relating key quality figures to composition variables.
    It is also desirable for the final consumer to get involved in the specification of a hierarchical value structure, so that the conceived product meets the desired specifications and unique preference structures.
    In this communication, we present a framework to develop and implement a robust approach for addressing and solving formulation problems, which:
    • Builds performance/composition relationships from past historical data;
    • Explicitly models and takes into account sources of variability and uncertainty;
    • Allows for the proper identification of specific customized optimal formulations for a given customer or specific product usage.
    This framework, although generic and easily applicable to other products, was tested within the scope of the paint industry, in order to support the proper identification of optimal waterborne paint formulations.
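    As a sketch of the optimization step only (the uncertainty modelling and the customer value structure are not shown), a hypothetical quadratic performance model fitted from historical data is maximized over the composition simplex with scipy; the three-component "paint", coefficients and bounds are invented.

      # Optimize an assumed fitted Scheffe quadratic performance model over the
      # composition simplex (illustrative only; all numbers are hypothetical).
      import numpy as np
      from scipy.optimize import minimize

      b = np.array([5.0, 3.5, 4.2, 2.0, -1.5, 0.8])   # hypothetical fitted coefficients

      def features(x):
          x1, x2, x3 = x
          return np.array([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

      def neg_performance(x):
          return -b @ features(x)

      res = minimize(
          neg_performance,
          x0=np.array([1/3, 1/3, 1/3]),
          method="SLSQP",
          bounds=[(0.1, 0.8)] * 3,                                     # component limits
          constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1}],  # mixture constraint
      )
      print("optimal composition:", np.round(res.x, 3), "predicted performance:", -res.fun)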
  • The effect of liberalization in the Italian gasoline sector: higher chance of collusion or incomplete liberalization?

    Authors: Alessandro Fassò, Gianmaria Martini, Michele Pezzoni
    Primary area of focus / application:
    Submitted at 22-Jun-2007 16:30 by
    Accepted
    This paper investigates, using a statistical approach, the impact of the liberalization of gasoline retail prices in Italy. The industry is nowadays characterized by an oligopoly of vertically integrated companies holding a 98% share of distribution activities. Moreover, gasoline can be classified as a good with strongly inelastic demand (at least in the short run). These conditions are clearly favorable to an agreement between refiners. On the basis of a data set of individual recommended daily gasoline prices from 1990 to 2005, the paper investigates two main issues. The first is the impact of some macroeconomic variables on the level of gasoline prices. Many factors are involved in the generation of prices, above all the crude oil price and changes in the euro/dollar exchange rate. Other factors, like inflation, consumption, production costs and taxation, also matter in fixing the price level. Moreover, strategic effects may be important: refiners may react asymmetrically to oil price shocks, with immediate upward adjustments and delayed downward adjustments. The second issue is the assessment of two non-exclusive hypotheses: the unexpected increase in the observed retail price level after liberalization is due either to an increase in the degree of collusion among refiners or to restrictions on effective competition among retailers (e.g. limits on opening hours and on the possibility of selling non-oil goods). If the second hypothesis is confirmed, it will provide some evidence that the liberalization process in this sector is incomplete in Italy.
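    Purely as an illustration of the asymmetry question (the paper's actual econometric model is not given in the abstract), the sketch below regresses simulated retail price changes on the positive and negative parts of crude-price changes and compares the two response coefficients.

      # Asymmetric price-adjustment check on simulated data (illustrative only).
      import numpy as np

      rng = np.random.default_rng(5)
      d_oil = rng.normal(0, 1, size=500)                     # simulated crude-price changes
      d_price = 0.6 * np.maximum(d_oil, 0) + 0.3 * np.minimum(d_oil, 0) + rng.normal(0, 0.2, 500)

      X = np.column_stack([np.ones_like(d_oil), np.maximum(d_oil, 0), np.minimum(d_oil, 0)])
      coef, *_ = np.linalg.lstsq(X, d_price, rcond=None)
      print("response to increases: %.2f, to decreases: %.2f" % (coef[1], coef[2]))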