ENBIS-15 in Prague

6 – 10 September 2015; Prague, Czech Republic
Abstract submission: 1 February – 3 July 2015

The following abstracts have been accepted for this event:

  • Effects of Time-Varying Data on Multivariate SPC

    Authors: Tiago Rato (Katholieke Universiteit Leuven), Eric Schmitt (Katholieke Universiteit Leuven), Bart De Ketelaere (Katholieke Universiteit Leuven)
    Primary area of focus / application: Process
    Keywords: Principal Component Analysis (PCA), Modelling performance, Time-varying processes, Statistical Process Control (SPC)
    Submitted at 1-Jun-2015 17:12 by Tiago Rato
    Accepted
    7-Sep-2015 16:15 Effects of Time-Varying Data on Multivariate SPC
    Recent advances in data acquisition systems, combined with the low cost of sensors, have led to the collection of unprecedentedly high-dimensional data sets that pose new challenges to Statistical Process Control (SPC). At the same time, the data structures also grow in complexity, increasingly presenting serial dependency and non-stationary features. The dimensionality aspect motivates the use of latent variable modelling in order to take full advantage of the intrinsic relationships in the data. Principal Component Analysis (PCA) [1, 2] is one of the best-known approaches for this task due to its ability to effectively describe the correlation among variables. Moreover, extensions have been proposed to further improve its capabilities by incorporating process dynamics (Dynamic PCA [3]) and non-stationarity (e.g. Recursive PCA [4] and Moving Window PCA [5]) into the model. However, the added flexibility also raises numerous parameterization challenges. In DPCA it becomes necessary to define the number of lagged variables to be included in the model, while the adaptive methods require the setting of a forgetting parameter that governs their adaptation speed. Another fundamental drawback of PCA-based approaches is their reliance on the i.i.d. assumptions of classic PCA, which are still taken to be valid despite the major changes in the data's characteristics. This is clearly apparent in the definition of the control limits, which are often computed using theoretical formulas that no longer describe the process accurately. Nevertheless, PCA-based methodologies can still give a reasonable approximation of reality, provided that the process does not deviate too much from these assumptions.
    Given the aforementioned data diversity, it becomes necessary to determine under which conditions the typical modelling assumptions are still valid. Furthermore, it is relevant to quantify their impact on the final performance of PCA-based procedures. With these goals in mind, their modelling performance is evaluated in this contribution for high-dimensional data covering a wide range of time-dependency scenarios. The results confirm that as the process complexity increases, a significant decrease in modelling performance occurs. However, classical PCA and DPCA show remarkably good performance for processes with moderate dynamics, even though they fail to cope with these dynamics effectively. This deficiency also makes them clearly inadequate for monitoring non-stationary processes. On the other hand, given appropriate forgetting parameters, RPCA and MWPCA can follow the natural variation of processes. Nevertheless, their performance is hindered by the current theoretical approach for updating the control limits, which tends to greatly underestimate them. To address this issue, tuned control limits are encouraged. Based on these results, suggestions are made to accommodate the observed performance mismatches and to improve model parameterization.

    References:
    1. Jackson, Technometrics, 1959. 1(4): p. 359-377.
    2. Jackson, et al., Technometrics, 1979. 21(3): p. 341-349.
    3. Ku, et al., Chemometrics and Intelligent Laboratory Systems, 1995. 30(1): p. 179-196.
    4. Li, et al., Journal of Process Control, 2000. 10: p. 471-486.
    5. Wang, et al., Industrial & Engineering Chemistry Research, 2005. 44(15): p. 5691-5702.
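
    To make the monitoring scheme discussed in this abstract concrete, the sketch below fits a PCA model to in-control data and monitors a new observation with the Hotelling T² and Q (SPE) statistics using the classical theoretical control limits. It is a minimal illustration under the standard i.i.d. assumptions, not the authors' implementation; the simulated data, the number of retained components and all variable names are assumptions for illustration only.

        # Minimal PCA-based SPC sketch (illustrative; assumes i.i.d. Gaussian
        # in-control data and uses the classical theoretical control limits).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Simulated in-control training data: 500 samples, 10 correlated variables.
        n, p, k = 500, 10, 3                      # samples, variables, retained PCs
        X = rng.normal(size=(n, k)) @ rng.normal(size=(k, p)) + 0.1 * rng.normal(size=(n, p))

        # Fit PCA on standardized data.
        mu, sd = X.mean(0), X.std(0, ddof=1)
        Z = (X - mu) / sd
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        P = Vt[:k].T                              # loadings (p x k)
        lam = s[:k] ** 2 / (n - 1)                # variances of the retained scores

        def t2_q(x):
            """Hotelling T^2 and Q (SPE) statistics for one new observation."""
            z = (x - mu) / sd
            t = z @ P                             # scores
            resid = z - t @ P.T                   # residual in the original space
            return np.sum(t ** 2 / lam), resid @ resid

        # Theoretical control limits (only valid under the classic PCA assumptions).
        alpha = 0.01
        t2_lim = k * (n - 1) * (n + 1) / (n * (n - k)) * stats.f.ppf(1 - alpha, k, n - k)
        th = [np.sum((s ** 2 / (n - 1))[k:] ** i) for i in (1, 2, 3)]  # discarded eigenvalues
        h0 = 1 - 2 * th[0] * th[2] / (3 * th[1] ** 2)
        ca = stats.norm.ppf(1 - alpha)
        q_lim = th[0] * (ca * np.sqrt(2 * th[1] * h0 ** 2) / th[0]
                         + 1 + th[1] * h0 * (h0 - 1) / th[0] ** 2) ** (1 / h0)

        # Monitor a hypothetical new observation against both limits.
        t2, q = t2_q(rng.normal(size=p))
        print(f"T2 = {t2:.2f} (limit {t2_lim:.2f}),  Q = {q:.2f} (limit {q_lim:.2f})")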
  • Quality Monitoring of Chenille Yarns Using Image Processing and Statistical Process Control

    Authors: Maroš Tunák (Technical University of Liberec), Vladimír Bajzík (Technical University of Liberec), Jan Picek (Technical University of Liberec), Jakub Kolář (Technical University of Liberec)
    Primary area of focus / application: Process
    Keywords: Chenille Yarns, Image processing, Control charts, Defect detection
    Submitted at 2-Jun-2015 09:30 by Maros Tunak
    Accepted
    7-Sep-2015 10:40 Quality Monitoring of Chenille Yarns Using Image Processing and Statistical Process Control
    This paper deals with quality monitoring of chenille yarns and the detection of various potential defects occurring in the yarns with the aid of image analysis and statistical process control. In the textile industry, chenille yarn belongs amongst the fancier yarns, having a very soft, silky feel and a glossy appearance. In the production of the yarn, different kinds of defects can be encountered which affect its overall appearance. In this contribution, the possibility of implementing Shewhart control charts for detecting defects and controlling the quality of chenille yarns from image data is investigated. Grey-level images of chenille yarns were captured and stored as image matrices. Image pre-processing was applied, involving thresholding to a binary image and a morphological opening operation to remove small objects from the image. The height of the yarn pile measured from the processed images was selected as the quality characteristic to be monitored, and the frequency of defects and the periods between them were counted. Two types of yarn were tested, and two samples of each type were taken. One sample was taken from the beginning, when the defects started to occur, and the second was selected from the end, where a substantially larger number of defects were encountered. The second sample is twenty-four hours older than the first one. It was shown that for modelling purposes the Poisson and exponential distributions were appropriate. We provide a method of optimizing the monitoring of defects so that it is not necessary to control the entire yarn production, but only a section at a particular time. The results show that we can predict how the change of the cutting knife affects the quality of the chenille yarns.
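
    A minimal sketch of the pre-processing chain described above (thresholding to a binary image, morphological opening to remove small objects, and measuring the pile height column by column) is given below using scikit-image. The threshold method, the size of the structuring element, the file name and the 3-sigma rule at the end are illustrative assumptions, not the authors' actual settings.

        # Illustrative sketch of the pre-processing described above (not the authors' code):
        # threshold a grey-level yarn image, remove small objects by morphological
        # opening, and measure the pile height in each image column.
        import numpy as np
        from skimage import io, filters, morphology

        img = io.imread("chenille_yarn.png", as_gray=True)   # hypothetical file name

        # Threshold to a binary image (Otsu's method as an illustrative choice).
        binary = img > filters.threshold_otsu(img)

        # Morphological opening to remove small objects / noise.
        opened = morphology.opening(binary, morphology.disk(3))

        # Pile height per column: number of "yarn" pixels in each vertical slice.
        heights = opened.sum(axis=0)

        # Flag columns whose pile height falls outside Shewhart-type 3-sigma limits.
        centre, sd = heights.mean(), heights.std(ddof=1)
        defects = np.flatnonzero((heights > centre + 3 * sd) | (heights < centre - 3 * sd))
        print(f"{defects.size} potentially defective columns out of {heights.size}")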
  • Fractional Factorial Designs by Combining Two-Level Designs

    Authors: Alan Vázquez-Alcocer (University of Antwerp), Peter Goos (University of Antwerp), Eric Schoen (University of Antwerp)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Generalized resolution, Local search, Nonregular design, Orthogonal array, Weak minimum aberration
    Submitted at 3-Jun-2015 12:29 by Alan Vazquez Alcocer
    Accepted (view paper)
    7-Sep-2015 16:15 Fractional Factorial Designs by Combining Two-Level Designs
    Many experiments in the manufacturing, pharmaceutical, and chemical industries require designs for situations where a considerable number of factors as well as two-factor interactions are expected to be active. Often, a two-level fractional factorial design based on an orthogonal array of strength three is considered appropriate for this scenario because such designs have resolution greater than or equal to four. For run sizes less than 48 and up to 24 factors, a suitable design can be selected from the complete catalogs of strength-three orthogonal arrays that are available in the literature. However, catalogs for larger run sizes, which would allow the study of more factors, reduce the aliasing between the effects of interest, and increase the number of estimable effects, are still incomplete. We study the construction of two-level fractional factorial designs from two orthogonal arrays of strength three. The resulting combined designs all have a resolution greater than four. We present an algorithm to search for the best subsets of columns to fold over and to test column permutations when combining the designs so as to minimize the aliasing among pairs of two-factor interactions. We propose new two-level fractional factorial designs with 64, 80, 96, 112 and 128 runs and up to 33 factors, evaluate their statistical properties, and compare them to the best alternatives in the literature.
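
    The combination step described in this abstract can be illustrated on a toy scale: take a strength-three orthogonal array, fold over a subset of its columns, stack the two halves, and score the combined design by the largest absolute correlation between two-factor interaction contrasts. The 16-run base design and the exhaustive search over fold-over subsets below are illustrative simplifications; the paper's designs have 64 to 128 runs and are found with a dedicated local-search algorithm that also considers column permutations.

        # Illustrative sketch (not the paper's algorithm): combine a strength-3
        # orthogonal array with a folded-over copy of itself and score the result
        # by the worst aliasing among two-factor interaction contrasts.
        import itertools
        import numpy as np

        # 16-run, 6-factor regular design of resolution IV (strength 3):
        # full factorial in A-D, with E = ABC and F = ABD.
        base = np.array(list(itertools.product([-1, 1], repeat=4)))
        A, B, C, D = base.T
        design = np.column_stack([A, B, C, D, A * B * C, A * B * D])

        def max_2fi_correlation(X):
            """Largest |correlation| between distinct two-factor interaction columns."""
            cols = [X[:, i] * X[:, j]
                    for i, j in itertools.combinations(range(X.shape[1]), 2)]
            return max(abs(u @ v) / len(u)
                       for u, v in itertools.combinations(cols, 2))

        best = None
        # Fold over every subset of columns (switch their signs in the copy)
        # and keep the combined 32-run design with the least 2FI aliasing.
        for r in range(design.shape[1] + 1):
            for subset in itertools.combinations(range(design.shape[1]), r):
                fold = design.copy()
                fold[:, list(subset)] *= -1
                combined = np.vstack([design, fold])
                score = max_2fi_correlation(combined)
                if best is None or score < best[0]:
                    best = (score, subset)

        print(f"best fold-over columns: {best[1]}, max |2FI correlation|: {best[0]:.3f}")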
  • Advanced Bayesian Estimation of Weibull Decreasing Failure Rate Distributions for Semiconductor Burn-In

    Authors: Daniel Kurz (Department of Statistics, Alpen-Adria University of Klagenfurt), Horst Lewitschnig (Infineon Technologies Austria AG), Jürgen Pilz (Department of Statistics, Alpen-Adria University of Klagenfurt)
    Primary area of focus / application: Reliability
    Keywords: Bayes, Burn-in, Decreasing failure rate, Interval censoring, Weibull distribution
    Submitted at 8-Jun-2015 09:12 by Daniel Kurz
    Accepted
    7-Sep-2015 15:35 Advanced Bayesian Estimation of Weibull Decreasing Failure Rate Distributions for Semiconductor Burn-In
    In semiconductor manufacturing, burn-in testing is applied to screen out early life failures before delivery. This is done by exposing the produced devices to accelerated voltage and temperature stress conditions for a specific period of time. The duration of the burn-in stress can then be assessed on the basis of the lifetime distribution of early failures, which is typically modeled using a Weibull distribution Wb(a,b) with scale parameter a>0 and shape parameter 0<b<1. This is motivated by a decreasing failure rate within the devices' early life. Depending on the burn-in strategy, the parameters of the Weibull decreasing failure rate distribution then have to be estimated from (censored) time-to-failure or interval-censored failure data, respectively.

    In this talk, we present two advanced Bayesian approaches for assessing the Weibull lifetime distribution of early failures. First, we make use of a gamma-histogram-beta prior, which leads to a closed-form solution for the posterior distribution of (a,b). Afterwards, we propose a new Bayesian estimation method for the Weibull distribution (under a decreasing failure rate), in which we assign a Dirichlet prior distribution to the cumulative distribution function of the lifetime of early failures. This method can be extended to both interval-censored failure and time-to-failure burn-in data. Moreover, it more easily allows us to bring in (informative) engineering prior knowledge when estimating the Weibull lifetime distribution of early failures and enables us to update the burn-in settings (e.g. burn-in time, read-out intervals, etc.) whenever new interval-censored failure data become available.

    Finally, we investigate the estimation performance of the proposed methods in comparison to classical maximum-likelihood estimation in the case of time-to-failure data. Focusing on estimation accuracy and coverage frequency, these methods turn out to perform better than classical maximum-likelihood estimation, especially when only a small number of failures is observed.

    Acknowledgment:
    The work has been performed in the project EPT300, co-funded by grants from Austria, Germany, Italy, The Netherlands and the ENIAC Joint Undertaking. This project is co-funded within the programme "Forschung, Innovation und Technologie für Informationstechnologie" by the Austrian Ministry for Transport, Innovation and Technology.
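
    As a generic illustration of the estimation problem treated in this talk, the sketch below evaluates an interval-censored Weibull likelihood for burn-in read-out data and combines it with a flat prior over a grid on (a, b) with 0 < b < 1. It is not the gamma-histogram-beta or Dirichlet constructions proposed by the authors; the read-out times, failure counts and grid ranges are invented for illustration.

        # Generic illustration of Bayesian estimation of a Weibull(a, b) early-life
        # distribution from interval-censored burn-in data (0 < b < 1, i.e. a
        # decreasing failure rate). Grid approximation; not the methods of the talk.
        import numpy as np

        # Hypothetical read-out scheme: failures observed between read-outs, plus
        # survivors at the final read-out.
        readouts = np.array([0.0, 6.0, 12.0, 24.0, 48.0])   # hours of burn-in
        failures = np.array([5, 2, 1, 1])                   # failures per interval
        survivors = 491                                      # devices surviving 48 h

        def log_lik(a, b):
            """Interval-censored Weibull log-likelihood."""
            F = 1.0 - np.exp(-(readouts / a) ** b)           # CDF at the read-outs
            probs = np.diff(F)
            return np.sum(failures * np.log(probs)) + survivors * np.log(1.0 - F[-1])

        # Grid over the scale a and the shape b (restricted to a DFR, 0 < b < 1),
        # with a flat prior over the grid as a placeholder for engineering knowledge.
        a_grid = np.geomspace(1e2, 1e9, 200)
        b_grid = np.linspace(0.05, 0.95, 200)
        logpost = np.array([[log_lik(a, b) for b in b_grid] for a in a_grid])
        post = np.exp(logpost - logpost.max())
        post /= post.sum()

        # Posterior means as point estimates of (a, b).
        a_hat = (post.sum(axis=1) * a_grid).sum()
        b_hat = (post.sum(axis=0) * b_grid).sum()
        print(f"posterior mean scale a = {a_hat:.0f} h, shape b = {b_hat:.2f}")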
  • CoDa Hands-On Session

    Authors: Marina Vives-Mestres (Universitat de Girona), Josep-Antoni Martín-Fernández (Universitat de Girona)
    Primary area of focus / application: Other: CoDa
    Keywords: Compositional Data Analysis, Log ratio coordinates, Traffic management, Hands-on session
    Submitted at 9-Jun-2015 13:49 by Marina Vives-Mestres
    Accepted
    7-Sep-2015 15:35 Special Session: CoDa Hands-on Session
    Marylebone Road is a congested six lane east/west trunk route in Central London. Hourly traffic count data is obtained from an induction loop system fixed within the road surface. The system monitors vehicle number, type (by axle length) and speed for each of the six lanes.

    In August 2001, the nearside lane in each direction was designated a bus lane, which only buses and taxis were allowed to enter. The bus lane is permanently in operation 24 hours a day, 7 days a week, and is strictly observed thanks to a system of enforcement cameras and automatic fines.

    By analysing the data from the years 2000 to 2002, we can identify a significant decrease in the daily total number of vehicles travelling past the vehicle monitors. But how is the composition of the traffic affected by this traffic management scheme?

    We use this real example dataset to show an application of Compositional Data (CoDa) analysis. With the help of printed outputs and specific questions, participants will be invited to discuss, choose and build up a line of reasoning, as in a puzzle game. Come and enjoy the session!
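
    A minimal sketch of the kind of compositional reasoning used in the session is shown below: vehicle counts by type are closed to proportions and compared before and after the bus-lane scheme in centred log-ratio (clr) coordinates. The counts and vehicle categories are invented placeholders, not the Marylebone Road data used in the session.

        # Minimal CoDa sketch (illustrative counts, not the Marylebone Road data):
        # close vehicle counts by type to proportions and compare the traffic
        # composition before and after the bus-lane scheme in clr coordinates.
        import numpy as np

        types = ["car", "bus", "taxi", "lgv", "hgv"]
        before = np.array([2900.0, 60.0, 240.0, 420.0, 180.0])   # vehicles per hour
        after = np.array([2300.0, 75.0, 300.0, 380.0, 150.0])

        def closure(x):
            """Scale a composition so that its parts sum to 1."""
            return x / x.sum()

        def clr(x):
            """Centred log-ratio transform of a strictly positive composition."""
            logx = np.log(closure(x))
            return logx - logx.mean()

        # Totals change, but CoDa focuses on the relative structure of the traffic.
        shift = clr(after) - clr(before)
        for t, d in zip(types, shift):
            print(f"{t:>4}: clr shift {d:+.3f}")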
  • Risk Estimation by SPN Models in Process Industries

    Authors: Ondřej Grunt (Vysoká škola báňská – Technická univerzita Ostrava), Radim Briš (Vysoká škola báňská – Technická univerzita Ostrava)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Reliability
    Keywords: Petri Nets, Hydrocarbon leak, Risk modeling, Event tree, Process industry
    Submitted at 10-Jun-2015 21:27 by Ondřej Grunt
    Accepted (view paper)
    8-Sep-2015 17:20 Risk Estimation by SPN Models in Process Industries
    Steady-state methods, such as event trees, are often used for modeling the risk to the safety of personnel in process industries. However, as these processes are often time-dependent, more suitable methods capable of modeling dynamic events are required for accurate risk estimation. One such method is the Petri Nets modeling language and its extensions. This article is an extension of the author's previous work, in which Stochastic Petri Nets were used to create a model of small hydrocarbon leak incidents on an offshore hydrocarbon production facility, based on realistic data from the offshore industry. In this work, medium-leak and large-leak data were used to create additional SPN models of hydrocarbon leak incidents. Probabilities of fatality to personnel following hydrocarbon leaks were computed using the Petri Nets module of the GRIF software. These results were then compared with results obtained by event-tree and Monte Carlo methods. It is shown that, similarly to the small-leak incident model, the event contributing most to fatalities of personnel is delayed ignition of a hydrocarbon leak.
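
    The event-tree and Monte Carlo comparison mentioned above can be illustrated with a simple sketch: a leak is followed by immediate ignition, delayed ignition or no ignition, each branch carrying a conditional probability of fatality, and the overall fatality probability is computed both analytically and by simulation. The branch probabilities below are invented placeholders, not the offshore data or the SPN/GRIF models used in the paper.

        # Illustrative event-tree sketch for a hydrocarbon leak (invented numbers,
        # not the paper's SPN/GRIF models): branches for immediate ignition, delayed
        # ignition and no ignition, each with a conditional fatality probability.
        import numpy as np

        branches = {                  # (branch probability, P(fatality | branch))
            "immediate ignition": (0.002, 0.10),
            "delayed ignition":   (0.005, 0.30),
            "no ignition":        (0.993, 0.00),
        }

        # Analytical event-tree result: total probability of fatality given a leak.
        p_fatality = sum(p * pf for p, pf in branches.values())

        # Monte Carlo cross-check of the same tree.
        rng = np.random.default_rng(1)
        n = 1_000_000
        names = list(branches)
        probs = np.array([branches[k][0] for k in names])
        pf = np.array([branches[k][1] for k in names])
        branch_idx = rng.choice(len(names), size=n, p=probs)
        fatal = rng.random(n) < pf[branch_idx]

        print(f"event tree: {p_fatality:.5f},  Monte Carlo: {fatal.mean():.5f}")
        # With these placeholder numbers, delayed ignition dominates the fatality
        # risk, mirroring the qualitative finding reported in the abstract.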