ENBIS-10 in Antwerp

12 – 16 September 2010
Abstract submission: 1 January – 31 May 2010

My abstracts


The following abstracts have been accepted for this event:

  • Targeted Bayesian Network Learning

    Authors: Aviv Gruber and Irad Ben-Gal
    Affiliation: Tel Aviv University
    Primary area of focus / application:
    Submitted at 9-Mar-2010 13:32 by Irad Ben-Gal
    Accepted
    13-Sep-2010 16:00 Targeted Bayesian Network Learning
    We present the Targeted Bayesian Network (TBN) model that can support optimization processes in industrial and service systems. The proposed model learns the space and the relations among the unknown variables of the system from a given dataset. While the underlying learning objective of previous Bayesian network models was to best approximate the joint probability distribution of the learned domain, we aim to best approximate the conditional probability distribution of a predetermined target variable as a function of the remaining domain variables. We show that a proper network for such a task is obtained by maximizing the mutual information weights between the target variable and its parents only. We suggest criteria to address the trade-off between the network's complexity and its accuracy, as expressed through information gain measures. We illustrate, via a practical example, how the TBN model leads to a tremendous reduction in learning complexity while waiving very little information.
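
    A minimal sketch of the target-oriented idea (not the authors' TBN algorithm): score candidate parents of a predetermined target variable by their empirical mutual information with it. The data, variable names and the simple ranking below are illustrative assumptions only.

      import numpy as np
      from collections import Counter

      def mutual_information(x, y):
          """Empirical mutual information (in bits) between two discrete samples."""
          n = len(x)
          pxy = Counter(zip(x, y))
          px, py = Counter(x), Counter(y)
          return sum((c / n) * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
                     for (a, b), c in pxy.items())

      # Toy discrete data (assumed): the target mostly copies A and ignores B and C.
      rng = np.random.default_rng(0)
      A, B, C = (rng.integers(0, 2, 2000) for _ in range(3))
      target = (A ^ (rng.random(2000) < 0.1)).astype(int)

      scores = {name: mutual_information(var, target)
                for name, var in {"A": A, "B": B, "C": C}.items()}
      print(sorted(scores.items(), key=lambda kv: -kv[1]))
      # Variables with the highest mutual information are natural candidates for
      # the target's parent set; the TBN work additionally trades this gain off
      # against network complexity via information gain criteria.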
  • A methodology for realignment of quality cost elements

    Authors: Shuki Dror
    Affiliation: ORT Braude College, Israel
    Primary area of focus / application:
    Submitted at 10-Mar-2010 21:04 by Shuki Dror
    Accepted
    15-Sep-2010 11:50 A methodology for realignment of quality cost elements
    This work presents an innovative research methodology that enables a company to realign its quality cost elements in order to improve the implementation of its quality system. The methodology combines the following methods: the House of Quality Costs (HOQC) method, which translates the desired improvement in failure costs (internal and external) into controllable efforts (prevention and appraisal costs) and ranks them by relative importance; the Analysis of Variance (ANOVA) method, which supports the selection of vital quality costs; and the enhanced control chart method, used to validate the strong causal linkages in the HOQC.
    Two case studies are presented to illustrate the application of the developed methodology. In the furniture firm, there are essentially two vital sources of defects that could affect the overall cost of quality: raw materials and the production process. In the food firm, traditional quality control was not enough to eliminate quality problems from the production processes; hence, the Hazard Analysis Critical Control Point (HACCP) method was implemented. The methodology applied in this work proved itself capable of effectively handling the realignment of quality cost elements. It emphasizes adopting a systemic approach for selecting the vital controllable efforts in response to vital failure costs, as well as for detecting changes in the quality cost structure.
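
    As a loose illustration of the ANOVA step described above (assumed data, not the case-study firms): a one-way ANOVA comparing hypothetical quality-cost elements, of the kind that could flag the "vital" elements whose contribution stands out.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      # Hypothetical monthly failure costs (kEUR) for three cost elements.
      scrap      = rng.normal(42, 5, 12)
      rework     = rng.normal(30, 5, 12)
      complaints = rng.normal(12, 5, 12)

      f_stat, p_value = stats.f_oneway(scrap, rework, complaints)
      print(f"F = {f_stat:.1f}, p = {p_value:.3g}")
      # A small p-value indicates the cost elements differ systematically; the
      # dominant contributors would then feed the HOQC as vital failure costs.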
  • Challenges in Constructing Time Series Models from Process Data

    Authors: Johannes Ledolter and Soren Bisgaard
    Affiliation: University of Iowa
    Primary area of focus / application:
    Submitted at 31-Mar-2010 14:55 by Johannes Ledolter
    Accepted
    13-Sep-2010 10:00 Challenges in Constructing Time Series Models from Process Data
    Input and output sequences of industrial processes are often autocorrelated. Analysts who wish to construct models for such processes from input and output sequences alone must be careful, as the autocorrelations in the individual time series can masquerade as cross-correlations. Several different models may appear equally plausible, depending on how the process has been operated.

    In this paper we study the input and output sequences from two industrial case studies using a variety of time series tools. The results of our study illustrate that in feedback situations, where the current input is adjusted – either automatically or manually – on the basis of past output, different models may fit the data equally well. Which of these models describes the actual transfer function is unclear, unless a known dither signal can be added to the input to allow for an unambiguous identification of the transfer function. This problem is related to spurious regression and is of interest to econometricians who often deal with time series where feedback, known or unknown, may be responsible for misleading interpretations.
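
    A minimal sketch of the masquerading effect (simulated data, not the case-study series): two independent, strongly autocorrelated AR(1) series can show sizeable raw cross-correlations, which is why transfer functions should not be read off input/output records alone.

      import numpy as np

      rng = np.random.default_rng(0)
      n, phi = 500, 0.95

      def ar1(phi, n, rng):
          x = np.zeros(n)
          for t in range(1, n):
              x[t] = phi * x[t - 1] + rng.standard_normal()
          return x

      u = ar1(phi, n, rng)   # "input" series
      y = ar1(phi, n, rng)   # "output" series, generated independently of u

      def cross_corr(a, b, lag):
          a, b = a - a.mean(), b - b.mean()
          if lag >= 0:
              return np.corrcoef(a[:n - lag], b[lag:])[0, 1]
          return np.corrcoef(a[-lag:], b[:n + lag])[0, 1]

      print([round(cross_corr(u, y, k), 2) for k in range(-3, 4)])
      # The raw cross-correlations can be far from zero although u and y are
      # unrelated; prewhitening both series before cross-correlating removes
      # the artefact.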
  • Functional Safety Modelling for Automotive System Applications

    Authors: Gerhard Rappitsch
    Affiliation: SensorDynamics AG, Graz-Lebring, Austria
    Primary area of focus / application:
    Submitted at 11-Apr-2010 10:03 by Gerhard Rappitsch
    Accepted
    14-Sep-2010 15:20 Functional Safety Modelling for Automotive System Applications
    Functional safety for systems in automotive electronics is defined as robustness against failures that may endanger the driver's health. For instance, electronic stabilisation systems such as ESP must be designed in such a way that the functionality of the system is guaranteed even if single components fail due to reliability issues.

    A critical point is the prediction of the safety-related reliability performance in order to guarantee specific quality levels at the point of product delivery to the customer. The functionality at system level is defined by the functional reliability of the individual system blocks and components. The complete system is designed in a top-down manner with respect to the underlying technical safety concept.

    To enable a correct reliability prediction, analytical models are developed for the individual electronic system blocks. The propagation of block functions to system functions is performed using proper stochastic networks and transfer function modelling. The proposed hierarchical approach allows for bottom-up verification of electronic systems with respect to safety related functionality. The analytical models are validated using stochastic simulation.
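
    A minimal sketch of the bottom-up idea under assumed numbers (not the author's safety model): propagate block-level failure probabilities through a simple series/parallel structure analytically, then check the result by stochastic (Monte Carlo) simulation, in the spirit of the validation step described above.

      import numpy as np

      # Assumed block failure probabilities over the mission time.
      p_fail = {"sensor": 1e-3, "asic_a": 5e-4, "asic_b": 5e-4, "mcu": 2e-4}

      # Assumed structure: sensor and MCU in series, the two ASICs redundant.
      analytic = 1.0 - ((1 - p_fail["sensor"])
                        * (1 - p_fail["asic_a"] * p_fail["asic_b"])
                        * (1 - p_fail["mcu"]))

      rng = np.random.default_rng(1)
      n = 1_000_000
      f = {k: rng.random(n) < p for k, p in p_fail.items()}
      system_fail = f["sensor"] | (f["asic_a"] & f["asic_b"]) | f["mcu"]

      print(f"analytic: {analytic:.6f}  simulated: {system_fail.mean():.6f}")
      # The hierarchical approach in the talk does this with proper stochastic
      # networks and transfer-function models rather than this toy structure.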
  • Robust Design with binary response variables

    Authors: Johan Olsson, Pietro Tarantino
    Affiliation: Tetra Pak
    Primary area of focus / application:
    Submitted at 13-Apr-2010 17:00 by Johan Olsson
    Accepted
    13-Sep-2010 10:00 Robust Design with binary response variables
    Robust Design (RD) is a methodology to reduce performance variation in systems. These systems are influenced by factors that are controlled by designers (control factors) and by factors that are impossible or too expensive to control, such as environmental conditions, raw material properties and aging (noise factors). The simple idea of Robust Design is to make the system behave in the same way no matter what values these noise factors take. This is achieved by a proper choice of the control factor configuration.
    The property of the system for which we want to minimize variation is usually called the response variable. From the statistical efficiency point of view, it should be measurable and continuous. In that case, there is a vast literature that can be used for planning and analyzing RD experiments.
    This work, instead, aims at discussing how to make a system robust when binary variables are used as responses. Examples of such variables are “defect – no defect”, “bad – good”, “acceptable – not acceptable”, etc. These are usually easier to record but, on the other hand, the sample size required to reach adequate statistical power is much higher than in the case of a continuous variable.
    In particular, the results from a linear and a logistic model are compared for an industrial RD study. Related issues such as the choice of sample size, the verification of model assumptions and the optimization of the responses are discussed. We also discuss the interpretation of the results and their efficient communication to the project team.
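
    A minimal sketch of such a comparison on simulated data (not the Tetra Pak study; the factors, effect sizes and sample sizes are assumptions): fit a linear probability model and a logistic model to a binary "defect / no defect" response from a replicated two-level design.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(42)
      levels, r = [-1, 1], 50           # coded factor levels, replicates per run
      rows = [(a, b, m) for a in levels for b in levels for m in levels
              for _ in range(r)]        # control factors A, B and noise factor N
      X = np.array(rows, dtype=float)

      # Assumed true model: defect probability driven by B and the A*N interaction.
      eta = -1.0 + 0.8 * X[:, 1] + 0.9 * X[:, 0] * X[:, 2]
      y = rng.random(len(eta)) < 1.0 / (1.0 + np.exp(-eta))

      design = sm.add_constant(np.column_stack([X, X[:, 0] * X[:, 2]]))  # A, B, N, A*N
      ols_fit = sm.OLS(y.astype(float), design).fit()      # linear probability model
      logit_fit = sm.Logit(y.astype(int), design).fit(disp=0)

      print(ols_fit.params.round(3))
      print(logit_fit.params.round(3))
      # Both fits usually rank the effects similarly, but the logistic model keeps
      # predicted probabilities in [0, 1] and gives better-calibrated inference,
      # at the price of the larger sample size discussed above.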
  • A study of the hierarchical ordering principle and the effect heredity principle in factorial experimental designs

    Authors: Lois Dodson, Matthew Dodson, René Klerx
    Affiliation: SKF Group Six Sigma
    Primary area of focus / application:
    Submitted at 13-Apr-2010 18:42 by Rene Klerx
    Accepted
    14-Sep-2010 10:00 A study of the hierarchical ordering principle and the effect heredity principle in factorial experimental designs
    The hierarchical ordering principle and the effect heredity principle imply that higher order interactions can be ignored, and that any interaction will not be statistically significant unless the corresponding main effects are also statistically significant. Many basic engineering principle defy this logic. For example the ideal gas law is a 3-way interaction with no main effects. This paper will explore these principles and their applicability to classical engineering systems.