ENBIS-16 in Sheffield

11 – 15 September 2016, Sheffield. Abstract submission: 20 March – 4 July 2016

My abstracts

 

The following abstracts have been accepted for this event:

  • A New Control Chart for Monitoring the Parameters of a Zero-Inflated Poisson Process

    Authors: Athanasios Rakitzis (University of Aegean), Amitava Mukherjee (XLRI-Xavier School of Management)
    Primary area of focus / application: Process
    Secondary area of focus / application: Process
    Keywords: Average run length, Maximum likelihood estimation, Statistical Process Control, Wald statistic, Zero-inflated Poisson distribution
    Submitted at 18-Apr-2016 15:07 by Athanasios Rakitzis
    Accepted
    The zero-inflated Poisson (ZIP) distribution is considered one of the most appropriate models for overdispersed data with an excessive number of zeros. Data of this type frequently arise in industrial and non-industrial processes that are characterized by a low fraction of non-conforming items. A ZIP model has two parameters: the probability of extra zeros and the mean of the ordinary Poisson distribution. In this work, we propose and study a self-starting control chart that is suitable for detecting changes in either of the two parameters of a ZIP process. The performance of the scheme is studied via simulation. The results reveal that the proposed scheme offers improved performance in detecting small and moderate shifts in the process parameters. Finally, a real-data example is also discussed.
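
    A minimal illustrative sketch (not the authors' chart): simulating a ZIP sample and estimating its two parameters by maximum likelihood, the building block behind Wald-type monitoring statistics. The parameter names pi0 (extra-zero probability) and lam (Poisson mean) are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    rng = np.random.default_rng(1)

    def rzip(n, pi0, lam):
        """Draw n observations from a ZIP(pi0, lam) process."""
        extra_zero = rng.random(n) < pi0
        counts = rng.poisson(lam, n)
        return np.where(extra_zero, 0, counts)

    def zip_negloglik(theta, x):
        """Negative log-likelihood of a ZIP sample."""
        pi0, lam = theta
        ll_zero = np.log(pi0 + (1 - pi0) * np.exp(-lam))          # P(X = 0)
        ll_pos = np.log(1 - pi0) - lam + x * np.log(lam) - gammaln(x + 1)
        return -np.sum(np.where(x == 0, ll_zero, ll_pos))

    x = rzip(500, pi0=0.8, lam=2.5)        # many structural zeros, few non-conforming items
    fit = minimize(zip_negloglik, x0=[0.5, 1.0], args=(x,),
                   bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
    pi0_hat, lam_hat = fit.x
    print(pi0_hat, lam_hat)
    ```

    A monitoring statistic in the spirit of the abstract would then compare such estimates from successive samples against in-control values; that part is not reproduced here.
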
  • Random Number Generation for a Survival Bivariate Weibull Distribution

    Authors: Mario César Jaramillo Elorza (Universidad Nacional de Colombia sede Medellín), Osnamir Elias Bru Cordero (Universidad Nacional de Colombia), Sergio Yañez Canal (Universidad Nacional de Colombia)
    Primary area of focus / application: Modelling
    Keywords: Bivariate Weibull, Gumbel-Hougaard copula, survival copula, CD-vines
    Submitted at 20-Apr-2016 21:17 by Mario César Jaramillo Elorza
    Accepted
    12-Sep-2016 10:40 Random Number Generation for a Survival Bivariate Weibull Distribution
    A bivariate Weibull survival function is presented as Model VI(a)-5 by Murthy, Xie and Jiang. The bivariate Weibull distribution is very important in both reliability and survival analysis, and modelling the dependence in these kinds of problems has gained great importance in recent years. It is shown that the model corresponds to a Gumbel-Hougaard survival copula evaluated at two Weibull survival marginals. Its properties are studied in order to compare three methods of random generation from this distribution (Method 1: Frees & Valdez; Method 2: Nelsen; Method 3: CD-vines). The CD-vines methodology is used as the baseline reference for the evaluation.
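
    A minimal sketch of one standard way to draw from a model of this form (not necessarily any of the three methods compared in the talk): sample pairs from a Gumbel-Hougaard copula via its positive-stable frailty representation and map them through Weibull survival inverses. Parameter names (theta, beta1, eta1, ...) are illustrative, not the authors' notation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def positive_stable(alpha, size):
        """Chambers-Mallows-Stuck sampler for a positive alpha-stable variable
        with Laplace transform exp(-s**alpha), 0 < alpha < 1."""
        w = rng.uniform(0.0, np.pi, size)
        e = rng.exponential(1.0, size)
        return (np.sin(alpha * w) / np.sin(w) ** (1.0 / alpha)
                * (np.sin((1.0 - alpha) * w) / e) ** ((1.0 - alpha) / alpha))

    def gumbel_copula_sample(theta, n):
        """Draw n pairs (U1, U2) from the Gumbel-Hougaard copula, theta >= 1."""
        v = positive_stable(1.0 / theta, n)            # common frailty
        e = rng.exponential(1.0, (2, n))
        return np.exp(-(e / v) ** (1.0 / theta))

    def survival_bivariate_weibull(theta, beta1, eta1, beta2, eta2, n):
        """Map copula uniforms through Weibull survival inverses:
        S(t) = exp(-(t/eta)**beta)  =>  t = eta * (-log u)**(1/beta)."""
        u1, u2 = gumbel_copula_sample(theta, n)
        t1 = eta1 * (-np.log(u1)) ** (1.0 / beta1)
        t2 = eta2 * (-np.log(u2)) ** (1.0 / beta2)
        return t1, t2

    t1, t2 = survival_bivariate_weibull(theta=2.0, beta1=1.5, eta1=1.0,
                                        beta2=2.0, eta2=3.0, n=10_000)
    ```
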
  • ISO 13053 and ISO 18404 - Have You Read these Standards?

    Authors: Jonathan Smyth-Renshaw (Jonathan Smyth-Renshaw & Associates Ltd)
    Primary area of focus / application: Business
    Keywords: ISO 13053, ISO 18404, TPM, Statistics in business
    Submitted at 21-Apr-2016 00:00 by Jonathan Smyth-Renshaw
    Accepted
    ISO 18404 was issued this year and is linked to ISO 13053. These standards need to be discussed in the ENBIS arena: ISO 13053 gives only minimal guidelines for techniques which, as a group, we know bring great benefits to end users. ISO 18404 is aimed at Six Sigma and Lean techniques, yet TPM (Total Productive Maintenance), an important element, is missing. Why? I think Six Sigma, Lean and TPM are very important to business success, and these standards have not enhanced the development of these techniques and this thinking. Perhaps it is time for ENBIS, as a group, to take the lead on the use of statistical techniques in business and to develop a standard that businesses will use and through which they will see the value of statistics in business.
  • QFD: An Effective Approach to Identify Factors for Service Simulation Experiments

    Authors: Shuki Dror (ORT Braude College)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Business
    Keywords: QFD, DOE, Service, Simulation
    Submitted at 24-Apr-2016 18:00 by Shuki Dror
    Accepted
    12-Sep-2016 10:00 QFD: An Effective Approach to Identify Factors for Service Simulation Experiments
    Service design is a form of conceptual design, which involves planning and organizing the people, infrastructure, communication and material components of a service in order to improve its performance. A simulation experiment models various scenarios of a service system. One of the aims of the designer is to select appropriate factors for determining the simulation scenarios. The complete set of possible scenarios is huge, and it is often useful to obtain subjective input to help screen the set down to a few vital factors. Taguchi advocates a three-stage design procedure for off-line quality control: (i) system design; (ii) parameter design; and (iii) tolerance design. In the parameter design stage, the key stage of the Taguchi method, factors affecting the performance measure Y are categorized as controllable factors and noise factors. In this paper, a Quality Function Deployment (QFD) matrix highlights the controllable factors and noise factors to be considered when running a simulation experiment for a service system. It is assumed that the interactions between the factors may change for each performance measure, i.e., there are several roofs, corresponding to the number of rows in the QFD matrix. The MSE criterion is utilized here for selecting the vital service factors to be examined in the simulation experiments.
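
    A toy sketch of the standard QFD importance roll-up, not the authors' MSE-based selection: performance measures are weighted, each factor's importance is the weighted sum of its relationship scores, and factors are ranked to suggest which controllable and noise factors to carry into the simulation experiment. All names and numbers are made up for illustration.

    ```python
    import numpy as np

    measures = ["waiting time", "service quality", "utilisation"]
    weights = np.array([0.5, 0.3, 0.2])          # relative importance of the measures

    factors = ["no. of servers", "arrival rate", "service-time var.", "queue discipline"]
    # Relationship matrix: rows = performance measures, columns = factors (0/1/3/9 scale).
    R = np.array([
        [9, 9, 3, 3],
        [3, 1, 9, 3],
        [9, 3, 1, 0],
    ])

    importance = weights @ R                     # weighted column sums
    for name, score in sorted(zip(factors, importance), key=lambda t: -t[1]):
        print(f"{name:20s} {score:5.2f}")
    ```
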
  • Clustering Variables Based on a Dynamic Mixed Criteria: Application to the Energy Management

    Authors: Christian Derquenne (EDF R&D)
    Primary area of focus / application: Mining
    Secondary area of focus / application: Business
    Keywords: Variables clustering, Correlation, Unidimensionality, Unsupervised learning, Time series, Energy management
    Submitted at 25-Apr-2016 10:44 by Christian Derquenne
    Accepted
    13-Sep-2016 11:40 Clustering Variables Based on a Dynamic Mixed Criteria: Application to the Energy Management
    Searching for structure in data is an essential aid to understanding the phenomena to be analyzed before any further treatment. Unsupervised learning and visualization techniques are the main tools that facilitate this search. We offer a set of methods for clustering numerical variables. These are based on a mixed approach, combining the correlation between the initial variables and the unidimensionality of the resulting groups, to dynamically build a typology while controlling the number of classes and their quality. Above all, it allows us to "discover" an "optimal" number of classes without fixing it a priori. We also introduce an approach for distributing the mixed criterion. We evaluate our approach on simulated data sets and compare it with a method using principal components with oblique rotation (the VARCLUS procedure in SAS) and with Ward's dissimilarity criterion applied to the correlation matrix. The results show a significant gain of our approach over VARCLUS and Ward, both in detecting the number of classes and in how well the content of the groups matches the observed typology. Then, as part of energy management, we build time-series typologies in the areas of market prices and electricity consumption. The characterization of each group of curves obtained makes it possible to identify and understand the joint evolution of the phenomena studied and to detect differences in behaviour between clusters. Finally, we discuss the contributions and limitations of our approach and propose improvements and future directions, including the problems of non-linearity between variables, missing data, the presence of outliers, and large numbers of individuals and variables.
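
    A minimal sketch in the spirit of the benchmark methods named above, not the mixed dynamic criterion proposed in the talk: hierarchical clustering of variables using 1 - |correlation| as the dissimilarity with Ward's linkage, followed by a crude unidimensionality check (share of variance carried by the first principal component of each group). Data and group count are illustrative.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))               # 200 observations, 10 variables
    X[:, :5] += rng.normal(size=(200, 1))        # two latent one-dimensional blocks
    X[:, 5:] += rng.normal(size=(200, 1))

    corr = np.corrcoef(X, rowvar=False)
    diss = 1.0 - np.abs(corr)                    # dissimilarity between variables
    np.fill_diagonal(diss, 0.0)

    Z = linkage(squareform(diss, checks=False), method="ward")
    labels = fcluster(Z, t=2, criterion="maxclust")   # number of classes fixed here, unlike the talk

    for g in np.unique(labels):
        block = X[:, labels == g]
        eigvals = np.linalg.eigvalsh(np.corrcoef(block, rowvar=False))
        print(f"group {g}: variables {np.where(labels == g)[0]}, "
              f"1st-PC share = {eigvals[-1] / eigvals.sum():.2f}")
    ```
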
  • Failure Probability Estimation for Semiconductor Burn-In Studies Considering Synergies between Different Chip Technologies

    Authors: Daniel Kurz (Department of Statistics, Alpen-Adria University of Klagenfurt), Horst Lewitschnig (Infineon Technologies Austria AG), Jürgen Pilz (Department of Statistics, Alpen-Adria University of Klagenfurt)
    Primary area of focus / application: Reliability
    Secondary area of focus / application: Quality
    Keywords: Bayes, Binomial distribution, Burn-in, Clopper-Pearson, Serial system reliability
    Submitted at 26-Apr-2016 10:48 by Daniel Kurz
    Accepted
    13-Sep-2016 15:10 Failure Probability Estimation for Semiconductor Burn-In Studies Considering Synergies between Different Chip Technologies
    In semiconductor manufacturing, burn-in (BI) is applied to simulate the early life of the manufactured devices. This is done by operating the final chips under accelerated voltage and temperature stress conditions. In this way, early failures can be detected and weeded out.

    To reduce the efforts associated with BI, semiconductor manufacturers aim at evaluating the failure probability p of the devices in their early life. This is achieved by means of a BI study, in which a large random sample of devices (>100k) is investigated for early failures after the BI. Based on the number of relevant failures, an upper bound for p can then be assessed, typically by using the exact Clopper-Pearson approach. As soon as this upper bound is below the predefined ppm-target, BI can be released.
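
    A minimal sketch of the exact Clopper-Pearson upper bound referred to above: with k relevant failures in a burn-in sample of n devices, the one-sided 100*(1-alpha)% upper bound for p is the (1-alpha) quantile of a Beta(k+1, n-k) distribution. The numbers below are illustrative, not from a real burn-in study.

    ```python
    from scipy.stats import beta

    def cp_upper(k, n, alpha=0.1):
        """One-sided Clopper-Pearson upper confidence bound for p."""
        if k >= n:
            return 1.0
        return beta.ppf(1.0 - alpha, k + 1, n - k)

    n, k = 150_000, 1                      # >100k devices, one early failure
    print(f"{cp_upper(k, n):.2e}")         # compare against the predefined ppm target
    ```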

    In this talk, we show how to improve the estimation of early life failure probabilities for semiconductor BI studies by considering synergies (e.g. comparable chip layers) between different chip technologies. In other words, we partition the devices into disjoint subsets and take into account additional information on the subsets that is available from BI studies of related technologies.

    From a statistical point of view, this requires deriving an upper bound for p from binomial subset data. To be consistent with the exact Clopper-Pearson approach, we (i) compute the probability distribution of the number of failed devices that might be randomly assembled from the failed subsets, and (ii) infer the upper bound for p from a beta mixture distribution. Following a Bayesian approach, however, the upper bound for p is derived from the posterior distribution of p under negative-log-gamma prior distributions for the subset failure probabilities (in order to obtain a uniform prior distribution for p).
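
    A minimal sketch of step (ii), assuming the weights w[k] for the number of failed devices from step (i) are already available: the upper bound is taken as the (1-alpha) quantile of the beta mixture sum_k w[k] * Beta(k+1, n-k), obtained by inverting its CDF. The weights below are made up for illustration.

    ```python
    import numpy as np
    from scipy.stats import beta
    from scipy.optimize import brentq

    def mixture_upper_bound(w, n, alpha=0.1):
        """Invert the CDF of the beta mixture to get the upper bound for p."""
        ks = np.arange(len(w))
        cdf = lambda p: np.sum(w * beta.cdf(p, ks + 1, n - ks)) - (1.0 - alpha)
        return brentq(cdf, 1e-12, 1.0 - 1e-12)

    n = 150_000
    w = np.array([0.6, 0.3, 0.1])          # illustrative P(0), P(1), P(2) assembled failures
    print(f"{mixture_upper_bound(w, n):.2e}")
    ```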

    Finally, we show that, by considering synergies with already tested technologies, the total sample size of BI studies can be substantially reduced, especially in the case of failures. Moreover, we indicate how we can further reduce the efforts of BI studies by additionally taking into account countermeasures implemented in the chip production process.

    Acknowledgments

    The work has been performed in the project EPT300, co-funded by grants from Austria, Germany, Italy, The Netherlands and the ENIAC Joint Undertaking. This project is co-funded within the programme "Forschung, Innovation und Technologie für Informationstechnologie" by the Austrian Ministry for Transport, Innovation and Technology.