Overview of all Abstracts

The following PDF contains the abstract book as it will be handed out at the conference. It is provided here for browsing and for later reference. All abstracts as PDF

My abstracts

 

The following abstracts have been accepted for this event:

  • Efficient Design in Conjoint Analysis and Alike

    Authors: Rainer Schwabe
    Primary area of focus / application:
    Submitted at 7-Sep-2007 21:31 by
    Accepted
    Conjoint analysis is a popular tool in marketing research. Stated choice experiments are performed to evaluate the influence of various options on the consumers' preferences. The quality of the outcome of such experiments heavily depends on their design, i.e. on which questions are asked. The present talk gives an overview of the results of a research project on "Efficient Design in Conjoint Analysis" carried out at the universities of Münster and Magdeburg (joint work with U. Graßhoff [Magdeburg], H. Großmann [London] and H. Holling [Münster]).
  • Calibration of instruments using LogVariance models

    Authors: Diego Zappa - Massimiliano Pesaturo
    Primary area of focus / application:
    Submitted at 7-Sep-2007 22:42 by
    Accepted
    Designed experiments are often set up to estimate the mean response surface, and homoscedasticity is typically assumed over the experimental domain. The recent literature has stressed the importance of also evaluating the variance response surface, both to assess whether heteroscedasticity is present and, if it is, for optimization purposes (maximizing/minimizing the mean while minimizing the expected variance). In this context we exploit the so-called log-linear variance models (also known as LogVariance models) to assess the calibration of temperature sensors integrated inside a MEMS chip (Micro Electro Mechanical System), which is the core component of the Lab-On-Chip (LOC) systems used in DNA clinical analysis. We will show the effectiveness of the procedure using data measured in a real experiment. In addition, because of the computational effort involved and the need for a software tool that can easily be shared among researchers, an Excel spreadsheet has been prepared and is freely available from the authors.
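    The abstract does not spell out the fitting procedure or the spreadsheet's internals, so the following is only a minimal sketch, assuming the common two-stage scheme for log-linear variance models: weighted least squares for the mean, then a regression of log squared residuals on the variance covariates. The data and design matrices are illustrative assumptions, not the authors' calibration data.

```python
import numpy as np

def fit_loglinear_variance(X_mean, X_var, y, n_iter=10):
    """Two-stage fit of  E[y] = X_mean @ beta,  log Var[y] = X_var @ gamma."""
    n = len(y)
    w = np.ones(n)                                    # start from homoscedastic weights
    for _ in range(n_iter):
        Xw = w[:, None] * X_mean                      # weighted least squares for the mean
        beta = np.linalg.solve(X_mean.T @ Xw, Xw.T @ y)
        resid = y - X_mean @ beta
        z = np.log(resid**2 + 1e-12)                  # log squared residuals (offset avoids log(0))
        gamma = np.linalg.lstsq(X_var, z, rcond=None)[0]
        w = 1.0 / np.exp(X_var @ gamma)               # new weights = 1 / fitted variance
    return beta, gamma

# Illustrative data (hypothetical readings): one factor driving both mean and log-variance.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
X = np.column_stack([np.ones_like(x), x])
y = 2 + 3 * x + rng.normal(scale=np.exp(0.5 * (0.2 + 0.8 * x)), size=x.size)
print(fit_loglinear_variance(X, X, y))
```

    In practice the intercept of the log-variance fit is usually bias-corrected (the expected log of a squared standard normal is about -1.27); the sketch omits this and any calibration-specific details handled in the authors' spreadsheet.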
  • Selecting explanatory variables with the modified version of Bayesian Information Criterion

    Authors: Malgorzata Bogdan (Purdue University, West Lafayette, IN, USA)
    Primary area of focus / application:
    Submitted at 8-Sep-2007 10:23 by
    Accepted
    Business or science data are often stored in large databases, and looking for relationships between the variables represented in such databases is one of the most important aspects of data mining. In this talk we consider the problem of identifying factors related to a given continuous characteristic. The common approach to this problem relies on fitting a multiple regression model, where the usual goal is to choose the simplest model that includes most of the important factors related to the response variable. We will demonstrate that, when the number of variables in the database is much larger than the number of cases, standard model selection criteria like the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC) have a tendency to include many spurious variables. This phenomenon is related to the well-known problem of multiple testing. We will present a modified version of BIC which adjusts for this problem, as well as its rank extension designed for situations where the distribution of the response variable is strongly different from normal. We will illustrate the performance of our method by computer simulations and real data applications.
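    The abstract does not give the exact form of the modification, so the sketch below only illustrates the general idea of a multiplicity-adjusted BIC: the usual penalty is inflated by a term that grows with the number of candidate regressors. The specific extra term 2k·log(m/c) and the constant c are assumptions for illustration, not necessarily the criterion presented in the talk.

```python
import numpy as np
from itertools import combinations

def multiplicity_adjusted_bic(y, X, subset, m, c=4.0):
    """Gaussian-model BIC plus an assumed extra penalty 2*k*log(m/c),
    where m is the number of candidate regressors searched over."""
    n = len(y)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    k = len(subset)
    return n * np.log(rss / n) + k * np.log(n) + 2 * k * np.log(m / c)

# Many candidate variables, few of them truly active (simulated illustration).
rng = np.random.default_rng(0)
n, m = 100, 50
X = rng.normal(size=(n, m))
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + rng.normal(size=n)
subsets = (s for k in range(3) for s in combinations(range(m), k))
best = min(subsets, key=lambda s: multiplicity_adjusted_bic(y, X, s, m))
print(best)   # the criterion should favour a small subset such as (3, 17)
```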
  • Design of Experiments for Mean and Variance

    Authors: Marta Emmett, Peter Goos, Eleanor Stillman
    Primary area of focus / application:
    Submitted at 8-Sep-2007 11:28 by
    Accepted
    The great majority of experimental designs are directed towards estimating the
    mean of a single response variable under homoscedasticity. However, in many
    practical applications the variance structure is not known and the variance, as
    well as the mean, needs to be estimated. Estimating the mean and variance
    simultaneously is particularly relevant in quality control experiments. The
    first person to bring attention to the importance of reducing variability in
    such experiments was Taguchi in the 1980s. Taguchi methods seek to design a
    product or a process whose performance meets a specified target on average and
    exhibits little variability. This variability may be a consequence of
    environmental factors, controllable and uncontrollable factors during the
    manufacturing process and component deterioration.

    More recently, Atkinson & Cook (1995) and Vining & Schaub (1996) developed
    optimal design theory for estimation of mean and variance functions
    simultaneously. Both papers assume that the variance function is estimated
    using the residuals of the regression function for the mean. However,
    researchers often prefer using sample variances for quantifying and modelling
    variation. This has the advantage that the responses of the variance function
    do not depend on the specification of the mean function. If sample variances
    are utilized, the optimal design approaches of Atkinson & Cook (1995) and
    Vining & Schaub (1996) are no longer ideal. Therefore, building on the work of
    Goos, Tack and Vandebroek (2001), we propose a new optimal design criterion for
    the simultaneous estimation of mean and variance functions, where it is assumed
    that sample variances are used for estimating the latter function.

    References:

    Atkinson, A.C. and Cook, R.D. (1995). D-optimum designs for heteroscedastic
    linear models. Journal of the American Statistical Association, 90, 204-212.

    Goos, P., Tack, L. and Vandebroek, M. (2001). Optimal designs for variance
    function estimation using sample variances. Journal of Statistical Planning
    and Inference, 92, 233-252.

    Vining, G.G. and Schaub, D. (1996). Experimental designs for estimating both
    mean and variance functions. Journal of Quality Technology. 28, 135-147.
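    The abstract does not specify the new design criterion, so the sketch below only illustrates, under stated assumptions, the data structure it targets: a design with replicated runs, summarised by sample means (for the mean function) and sample variances (for the variance function), so that the variance responses do not depend on the fitted mean model. The design points, replicate number and models are hypothetical.

```python
import numpy as np

# Hypothetical replicated design: 4 design points, each replicated r = 5 times.
design = np.array([-1.0, -0.3, 0.3, 1.0])
r = 5
rng = np.random.default_rng(7)

# Simulated responses with (assumed) mean 1 + 2x and log-variance -0.5 + 1.5x.
y = np.array([1 + 2 * x + rng.normal(scale=np.exp(0.5 * (-0.5 + 1.5 * x)), size=r)
              for x in design])

ybar = y.mean(axis=1)            # sample means    -> responses for the mean function
s2 = y.var(axis=1, ddof=1)       # sample variances -> responses for the variance function

F = np.column_stack([np.ones_like(design), design])     # first-order model in x
beta = np.linalg.lstsq(F, ybar, rcond=None)[0]          # mean function estimate
gamma = np.linalg.lstsq(F, np.log(s2), rcond=None)[0]   # log-variance function estimate
print(beta, gamma)
```

    A design criterion for this setting has to trade off the number of distinct design points against the number of replicates per point, since sample variances require replication; the criterion proposed in the talk is not reproduced here.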
  • Improvement of a manufacturing process by integrated physical and numerical experiments: a case-study in the textile industry

    Authors: Stefano Masala (1), Paola Pedone (2), Martina Sandigliano (1) and Daniele Romano (2)
    Primary area of focus / application:
    Submitted at 8-Sep-2007 11:49 by
    Accepted
    In high-tech industries such as aerospace and microelectronics, the combined use of simulation and lab tests is a daily practice
    in the product development phase, and it is easy to forecast that this practice will soon spread to less knowledge-intensive sectors as well.
    However, although Design of Experiments and Computer Experiments provide sound methodologies for running experiments in
    physical and numerical settings respectively, the integration between the two kinds of investigation is still in its infancy.
    Yet in that setting the sequential experimentation approach, introduced by George Box for physical experiments some fifty years
    ago, would have an even wider scope.

    The work describes the results of a research project currently taking place at Technova Srl, a medium-sized textile
    firm in Sardinia (Italy). The company produces flocked yarn, a component which, after weaving, becomes a fabric for a wide
    range of technical applications; typical end products are coverings for seats and other components in car interiors. The
    yarn is formed by finely cut fibers (flock) applied to an adhesive-coated carrier thread by electrostatic force. The
    research focuses on the improvement of the manufacturing process. To this end, we exploit all kinds of information sources
    available, from historical production data to physical experiments on pilot and production machines and experiments on
    different process simulators. We show that the results obtained by this approach are well beyond the initial expectations of
    the company in terms of enhanced product quality as well as process economy and flexibility.

    Keywords: DoE, Computer experiments, Sequential experimentation, Flocking process, Quality improvement.

    Affiliations:

    (1) Technova Srl, Olbia, martina.sandigliano@novafloor.it
    (2) University of Cagliari, Dept. of Mechanical Engineering, Cagliari, romano@dimeca.unica.it
  • An automotive experience in applying DoE to improve a process

    Authors: Laura Ilzarbe, M. Tanco, M. Jesús Alvarez, E. Viles
    Primary area of focus / application:
    Submitted at 8-Sep-2007 13:38 by
    Accepted
    Laser welding is becoming more widely used within the automotive industry because of its reputation for high quality and precision. However, achieving the best set of parameter settings for this process is no trivial task and the industry has encountered many problems in the implementation of laser welding. These problems lead to defects which can be very expensive, so the industry is keen to optimise the process to make it as cost effective as possible.

    In this paper we present the application of design of experiments in a car manufacturing company to improve its technical knowledge of the laser welding process, and the positive impact that this research has already had on the number of defects observed.
  • Implementation of a quality plan for the monitoring of blood treatment process for the Belgian Red Cross

    Authors: A. Guillet, B. Govaerts, A. Benoit
    Primary area of focus / application:
    Submitted at 8-Sep-2007 19:00 by
    Accepted
    The Belgian Red Cross has to develop quality control procedures to monitor its blood treatment processes in order to comply with Belgian legislation. This project requires adapting statistical quality control techniques to the particular problems of blood treatment: because of non-normal distributions and censored data due to the measurement equipment, the classical quality tools had to be adapted for most of the products.
    First, we defined a sampling design for each measurement. Then, we made a brief descriptive analysis of the new data to check the adequacy of the plan. After several months, we created graphs to verify whether the specification limits are respected, and we computed statistics on the collected data in order to compare the results between the three sites, to determine whether the processes are under control and to evaluate the client-provider risk. Some of the graphs are automatically updated every day, whereas the statistics and the other graphs are produced monthly as a report.
    To realise this, we had to implement the necessary tools in statistical software in such a way that every technician can use them. We therefore chose to have the technicians enter the data directly into worksheets of the statistical software, formatted for the different products. Moreover, they followed a short course tailored to their use of the software so that they understand how to use it, in particular the macros, and how to read the outputs and detect a problem in the quality of the products.
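    The abstract does not name the software or the specific charts used, so the following is only a generic sketch of the kind of daily check described: classical individuals-chart control limits plus a specification-limit check. The data, the specification limit and the chart type are assumptions, and the sketch does not handle the non-normal or censored data mentioned above.

```python
import numpy as np

def individuals_chart_limits(x):
    """Control limits for an individuals (I) chart, estimating sigma from the
    average moving range (Shewhart constant d2 = 1.128 for subgroups of size 2)."""
    x = np.asarray(x, dtype=float)
    sigma_hat = np.abs(np.diff(x)).mean() / 1.128
    center = x.mean()
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

# Hypothetical daily measurements for one product and an assumed lower specification limit.
measurements = np.array([51.2, 49.8, 50.5, 52.1, 50.9, 49.5, 51.7, 50.2])
lsl = 47.0
lcl, center, ucl = individuals_chart_limits(measurements)
print("out of control:", bool(((measurements < lcl) | (measurements > ucl)).any()))
print("below specification:", bool((measurements < lsl).any()))
```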
  • Mixing Krigings for Global Optimization

    Authors: D. Ginsbourger, C. Helbert, L. Carraro (Ecole des Mines de Saint-Etienne)
    Primary area of focus / application:
    Submitted at 8-Sep-2007 20:49 by David Ginsbourger
    Accepted
    Over the last 5-10 years, numerical simulations of stochastic and deterministic systems have become more accurate, but also
    more time-consuming. This makes it impossible to study simulators exhaustively, especially when the number of parameters grows
    large. Consequently, Computer Experiments is an expanding field of study; application areas include crash-test studies,
    reservoir forecasting, nuclear criticality, etc. We focus here on surrogate-based global optimization techniques for this kind of complex model.

    Our starting point is the EGO algorithm, which is based on a kriging metamodel. In the first part, we recall in detail how kriging
    allows one to build sequential exploration strategies dedicated to global optimization. We point out some problems of kernel selection that are often skipped in the literature on kriging-based optimization.

    In the second part, we introduce an extension of the EGO algorithm based on a mixture of kriging models (MKGO). We emphasize how
    a mixture of kernels can lower the risk of misspecifying the kernel structure and its hyperparameters, especially when the estimation
    sample is small. The proposed approach is illustrated with classical deterministic functions (Branin-Hoo, Goldstein-Price) and
    compared with existing results.

    We also present a study carried out on Gaussian processes, and observe the relations between the quality of covariance estimation
    and the performance obtained in kriging-based optimization. We finally give a Bayesian interpretation of MKGO, and discuss the
    ins and outs of choosing the prior distribution of the covariance hyperparameters.

    This work was conducted within the framework of the DICE (Deep Inside Computer Experiments) Consortium between ARMINES, Renault, EDF,
    IRSN, ONERA and TOTAL S.A.
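    The mixture-of-krigings extension itself is not detailed in the abstract, so the sketch below only shows the expected improvement criterion that EGO maximises at each step, which is the ingredient MKGO builds on; in MKGO the prediction mean and standard deviation would come from a mixture of kriging models rather than a single one. The example predictions are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Expected improvement of a kriging prediction (mean mu, std sigma) over the
    current best observed value f_min, for a minimisation problem."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        u = (f_min - mu) / sigma
        ei = (f_min - mu) * norm.cdf(u) + sigma * norm.pdf(u)
    return np.where(sigma > 0, ei, 0.0)   # zero at already-observed points (sigma = 0)

# Illustrative kriging predictions at four candidate points; pick the next evaluation.
mu = np.array([0.8, 0.4, 0.6, 0.9])
sigma = np.array([0.05, 0.30, 0.00, 0.20])
print(expected_improvement(mu, sigma, f_min=0.5).argmax())
```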
  • Metamodels from Computer Experiments

    Authors: J.J.M. Rijpkema
    Primary area of focus / application:
    Submitted at 9-Sep-2007 14:40 by
    Accepted
    In engineering optimization, a direct coupling between analysis models and optimization routines may be very inefficient, as during optimization a large number of iterative calls to possibly time-consuming analysis models may be necessary. In those situations it is preferable to uncouple analysis and optimization through the use of so-called metamodels or surrogate models: fast-to-evaluate approximations of the objective and constraint functions.

    For the construction of metamodels there are a number of approaches available, such as Response Surface Methods (RSM), Kriging, Smoothing Splines and Neural Networks. They all estimate the response for a specific design on the basis of information from the full analysis of a limited number of training designs. However, they differ with respect to their underlying conceptual ideas, the calculation effort needed for training and the applicability to specific situations, such as large-scale optimization problems or analysis models based on numerical simulations.

    In this presentation I will focus on two approaches to metamodelling, namely RSM and Kriging. I will review and compare key concepts and present efficient experimental design strategies to train the models. Furthermore, I will discuss ways to enhance the model-building process by taking information on design sensitivities into account.
    This may lead to a reduction of the actual number of full model analyses needed for model training and estimation. It can be very effective, especially in situations where design sensitivities are easily available, as is the case for analysis models based on the Finite Element Method. To illustrate the use of RSM and Kriging, results from a numerical model study will be presented. These throw some light on the strengths and weaknesses of both approaches in practical applications.

    Specifics: Related to the field of approximation models, computer simulation and engineering optimization. The preferred form of presentation is an oral presentation of about 20 minutes (including discussion).
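    As a minimal illustration of the two metamodel types compared in the presentation, the sketch below fits a second-order response surface and a Gaussian-process (kriging) model to the same small training design. The "expensive" analysis model, the training design and the use of scikit-learn are assumptions for illustration; the gradient-enhanced variants discussed above are not shown.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# Stand-in for a time-consuming analysis model (hypothetical).
def analysis_model(x):
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(12, 1))      # small training design
y_train = analysis_model(X_train)

# RSM: second-order polynomial response surface.
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X_train, y_train)

# Kriging: Gaussian process with a Gaussian (RBF) correlation function.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True).fit(X_train, y_train)

X_new = np.linspace(-1, 1, 5).reshape(-1, 1)
print(rsm.predict(X_new))                        # polynomial prediction
print(gp.predict(X_new, return_std=True))        # kriging prediction and uncertainty
```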
  • Variability of Electromagnetic Emissions

    Authors: U. Kappel and J. Kunert
    Primary area of focus / application:
    Submitted at 9-Sep-2007 15:51 by
    Accepted
    Modern cars are equipped with an increasing number of electric and electronic devices. Therefore, the risk of electromagnetic interference increases too, and thus the importance of assuring electromagnetic compatibility (EMC) grows. Additionally, different cars, even of the same model, are more and more individually equipped, which adds increasing complexity to EMC management.

    The presentation discusses the results of a small study on the relative importance of several factors that might influence the electromagnetic emission of a car's subsystem consisting of a video, an audio and a cell phone component. A fourth factor of interest was the design of the wiring harness connecting these components. All four factors were considered at four levels each. For the three components, the levels were chosen in such a way that we would expect increasingly fewer problems: no grounding of the component's case at all, poor grounding, good grounding and, as the fourth level, absence of the component. For the harness, we chose the four levels by selecting varying distances between the individual wires and circuits. The experiment was done as a fractional factorial design with 16 runs.

    The response was the average excess of the measured radiated emissions over the emissions limit, where the average was taken over all frequencies of interest.

    Between any two of the 16 runs, the wiring was completely redone, even if the next run had the same level for this factor. This gives a measure of the variability caused by rebuilding the wiring, even if we try to rebuild the system in a completely identical way. To check the size of the pure measurement error, each run was measured 5 times, without any changes between the measurements.

    The results seem to indicate that the pure measurement error was relatively small, while the variability caused by rebuilding the wiring was very important. Compared to this "rebuilding error", the effects of the four levels of the video and audio components and of the planned variation of the wiring itself were negligible. However, we could show that no grounding or poor grounding of the cell phone component increased the average electromagnetic emissions significantly compared to the other two levels.

    These findings could be reproduced in a confirmation experiment.
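    The abstract describes the design only as a 16-run fractional factorial for four factors at four levels, so the sketch below shows one standard way such an array can be constructed, from two mutually orthogonal Latin squares of order 4; the actual design used in the study may differ.

```python
import numpy as np

# Two mutually orthogonal Latin squares of order 4.
L1 = np.array([[0, 1, 2, 3],
               [1, 0, 3, 2],
               [2, 3, 0, 1],
               [3, 2, 1, 0]])
L2 = np.array([[0, 1, 2, 3],
               [2, 3, 0, 1],
               [3, 2, 1, 0],
               [1, 0, 3, 2]])

# 16-run array: factor A = row index, B = column index, C and D read off the squares.
design = np.array([(a, b, L1[a, b], L2[a, b]) for a in range(4) for b in range(4)])

# Check: every pair of columns contains each of the 16 level combinations exactly once.
for i in range(4):
    for j in range(i + 1, 4):
        assert len({tuple(row) for row in design[:, [i, j]]}) == 16

print(design)
```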