ENBIS-14 in Linz

21–25 September 2014; Johannes Kepler University, Linz, Austria
Abstract submission: 23 January – 22 June 2014

My abstracts

 

The following abstracts have been accepted for this event:

  • The Computerized Generation of Fractional-Replicate Designs Using Galois Fields and Hadamard Matrices

    Authors: Peter J. Zemroch (Shell Global Solutions)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Mining
    Keywords: Computerized experimental design generation, Galois field, Hadamard matrix, Fractional replicate, KEYFINDER
    Submitted at 9-Jun-2014 14:30 by Peter Zemroch
    Accepted (view paper)
    24-Sep-2014 09:20 The Computerized Generation of Fractional-Replicate Designs Using Galois Fields and Hadamard Matrices
    Zemroch, Lunn, Baines and Clithero (1989) gave a set of algorithms for generating a wide range of blocked and fractional-replicate designs using design keys; all these designs have p^q units where p is prime. Many useful classes of design also exist with non-prime numbers of levels, and the construction rules for most of these use Galois fields and Hadamard matrices in one way or another. This paper provides simplified algorithms to expedite the implementation of these structures in software. The use of Galois fields and Hadamard matrices in generating important design classes, such as the 2^k designs of Plackett and Burman (1946) and the s^k designs of Addelman and Kempthorne (1961), is then detailed. The two papers together give an arsenal of methods which can generate almost all existing balanced fractional-replicate designs of sizes likely to be used in real-world experimentation. These methods have all been implemented in Version 3 of the KEYFINDER program, an overview of which may be found in Zemroch (1992). (An illustrative Hadamard-matrix construction sketch follows the references below.)


    Addelman, S. and Kempthorne, O. (1961). Some main effect plans and orthogonal arrays of strength two. Ann. Math. Stat., 32, 1167-1176.
    Plackett, R.L. and Burman, J.P. (1946). The design of optimum multifactorial experiments. Biometrika, 33, 305-325.
    Zemroch, P.J., Lunn, K., Baines, A. and Clithero, D.T. (1989). Finding design keys using Prolog. Computational Statistics Quarterly, 4, 311-332.
    Zemroch, P.J. (1992). KEYFINDER - a complete toolkit for generating fractional-replicate and blocked factorial designs. In "Computational Statistics Volume 2: Proceedings of the 10th Symposium on Computational Statistics", eds. Y. Dodge and J. Whittaker. Physica-Verlag, Heidelberg, 263-268.
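
    As a rough illustration of the kind of construction the abstract refers to, the sketch below builds a Hadamard matrix by the Sylvester doubling construction and reads a two-level screening design off its columns. It is only a generic textbook example, assuming a run size that is a power of two (Plackett-Burman designs more generally need Hadamard matrices of any order divisible by four); it is not the KEYFINDER algorithm itself, and all function names are illustrative.

    ```python
    import numpy as np

    def sylvester_hadamard(n):
        """Return an n x n Hadamard matrix for n a power of 2 (Sylvester construction)."""
        if n < 1 or n & (n - 1) != 0:
            raise ValueError("Sylvester's construction needs n to be a power of 2")
        H = np.array([[1]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])   # H_{2m} = [[H_m, H_m], [H_m, -H_m]]
        return H

    def two_level_design(H):
        """Drop the constant first column and recode +/-1 as factor levels.

        Each remaining column is one two-level factor; each row is one run.
        """
        return np.where(H[:, 1:] == 1, "+", "-")

    # Example: an 8-run array accommodating up to 7 two-level factors.
    print(two_level_design(sylvester_hadamard(8)))
    ```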
  • The Models Really Screwed up? Or Factors Were Modified? An Analysis of the Effect of Changing the Rules on the Global Crisis

    Authors: Sandro Schmitz dos Santos (Dealers International Business)
    Primary area of focus / application: Economics
    Secondary area of focus / application: Finance
    Keywords: Statistical modeling, Econometrics, Finance, Risk management
    Submitted at 9-Jun-2014 17:59 by Sandro Schmitz dos Santos
    Accepted
    23-Sep-2014 14:20 The Models Really Screwed up? Or Factors Were Modified? An Analysis of the Effect of Changing the Rules on the Global Crisis
    The global crisis of 2008 cast severe doubt on statistical modeling because the models failed to predict the crisis that came to pass. However, this work demonstrates that in the period immediately preceding the crisis the federal government of the United States, as well as several individual states, changed their laws, dramatically altering factors that are important for risk management and that would be decisive in any statistical model intended to assess the risks of the securities available in the market. By modifying these risk factors, governments created a huge gray area in which the data were totally uncertain and the resulting analyses imprecise. Any charge made in this scenario was immoral and should have been illegal, because the data had become totally unreliable owing to the legislative modifications. This work aims to demonstrate how these changes led to a radical shift in the underlying factors, creating great measurement difficulties for analysts who did not have this information at the time.
  • Improving Production Machines Performance and Productivity Online Using an Evolutionary Operation Based Approach

    Authors: Abdellatif Bey-Temsamani (FMTC), Maarten Witters (FMTC)
    Primary area of focus / application: Process
    Secondary area of focus / application: Quality
    Keywords: EVOP, Online optimization, Industrial case study, Controller tuning, Mechatronics
    Submitted at 10-Jun-2014 17:58 by Abdellatif Bey-Temsamani
    Accepted
    24-Sep-2014 11:15 Improving Production Machines Performance and Productivity Online Using an Evolutionary Operation Based Approach
    This paper discusses industrial case studies in which a systematic method, based on Evolutionary Operation (EVOP), is applied to optimize the controller parameters of mechatronic machines on-line in order to improve their performance and productivity.
    Controller design for mechatronic systems, including industrial machines, is a challenging problem. Often, model-based design methods are applied. These methods typically consist of three steps: a system identification step, a controller synthesis step and an experimentation step, during which the designed controller is evaluated. As the complexity of mechatronic systems is ever growing, with multiple inputs and outputs and possibly a complex, time-varying behaviour (for instance due to wear or a varying temperature), controller design becomes a cumbersome and time-consuming task. Furthermore, the resulting controller can be sub-optimal, since the selected parameters are calculated from an approximate model estimated during the identification step.
    To overcome these inconveniences, the authors adopted an online optimization method to fine-tune the controller parameters during regular operation of the system. The industrial deployment of this method is explained based on some industrial scale case studies.
    The main case study is a badminton robot, which has to perform a point-to-point motion in a fixed time interval. Two controllers have been synthesized using an approximate simulation model of the badminton robot: an energy-optimal controller and a time-optimal controller. The EVOP method is applied to further fine-tune the parameters of both controllers on-line. The constrained nature of the problem, where energy needs to be minimized subject to a time constraint, was transformed into an unconstrained single-objective optimization using Derringer desirability functions. Two important observations were made: (1) the on-line optimization of the energy-optimal controller lowered its energy consumption by about 5% while keeping the precision constant; (2) the more stringent time constraints implemented in the desirability functions led to a controller with maximum precision and a more than 50% lower energy consumption compared to the original time-optimal controller.
    The proposed method for on-line controller optimization is widely industrially applicable. The authors illustrate that the approach can also be used for vibration suppression in industrial machines and for throughput yield maximization in industrial processes.
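
    To make the constraint-handling step above concrete, the following minimal sketch shows one way Derringer-type desirability functions can fold an energy objective and a motion-time constraint into a single unconstrained score. The chosen functional form ("smaller is better") and all numeric bounds are illustrative assumptions, not the authors' actual settings for the badminton robot.

    ```python
    def desirability_smaller_is_better(y, target, upper, weight=1.0):
        """Derringer-Suich 'smaller is better' desirability: 1 at or below target, 0 at or above upper."""
        if y <= target:
            return 1.0
        if y >= upper:
            return 0.0
        return ((upper - y) / (upper - target)) ** weight

    def overall_desirability(energy_J, motion_time_s,
                             energy_target=40.0, energy_upper=80.0,   # hypothetical bounds
                             time_target=0.50, time_upper=0.55):      # hypothetical bounds
        """Combine the individual desirabilities into one score via the geometric mean."""
        d_energy = desirability_smaller_is_better(energy_J, energy_target, energy_upper)
        d_time = desirability_smaller_is_better(motion_time_s, time_target, time_upper)
        return (d_energy * d_time) ** 0.5

    # An EVOP-style search evaluates candidate controller settings on the running
    # machine and moves towards the setting with the highest overall desirability.
    print(overall_desirability(energy_J=55.0, motion_time_s=0.52))
    ```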

    P.S. This submission describes the (industrial) application of the methodology presented in an abstract submitted by K. Rutten and B. De Ketelaere about constrained optimization using EVOP. If possible, we would like to request that both contributions are scheduled sequentially in the same session. Thanks in advance.
  • A Simulation Study on Multivariate Statistical Process Control for Mixed Data

    Authors: Rok Blagus (Institute for Biostatistics and Medical Informatics, University of Ljubljana), Gaj Vidmar (University Rehabilitation Institute, Ljubljana), Neža Majdič (University Rehabilitation Institute, Ljubljana)
    Primary area of focus / application: Process
    Secondary area of focus / application: Quality
    Keywords: Statistical process control, Multivariate analysis, Mixed-type data, Gower's distance, Simulation, Health care quality
    Submitted at 11-Jun-2014 10:50 by Gaj Vidmar
    Accepted
    23-Sep-2014 16:00 A Simulation Study on Multivariate Statistical Process Control for Mixed Data
    Multivariate statistical process control (MV SPC) with mixed data (meaning that some of the variables describing the process are numeric and some are categorical) is a young field. The standard approach to MV SPC with numeric data is to construct a control chart based on Hotelling's T-squared statistic. We review the possibilities for MV SPC with mixed data and identify three existing approaches: multivariate outlier detection for mixed data; dimensionality reduction (via principal component analysis, multidimensional scaling or independent component analysis) followed by a control chart (T-squared chart, multivariate EWMA chart or multivariate CUSUM chart); and measuring distances between mixed-data points using Gower's distance (Gower's dissimilarity coefficient, Gower's index or Gower's general coefficient of similarity) and then constructing either a T-squared chart, a D-squared chart (which is based on support vector data description, SVDD) or a K-squared chart (which is based on k-nearest neighbours data description, kNN). The latter approach has been pioneered by Tuerhong and Kim (from Korea University) since 2013. The control limits for the D-squared and K-squared charts are established with the bootstrap; either distances from the entire phase-I sample (global variant) or just from the kNN (local variant) are considered. We conducted a simulation study to compare the Gower's distance approach with the alternatives. The categorical variables were coded as binary indicator variables. The simulations were designed so that they resemble a planned application of MV SPC with mixed data for health-care quality monitoring in the field of medical rehabilitation. The results indicate that the Gower's distance approach improves as the number of categorical variables increases and that the local variant of the Gower's-distance-based K-squared chart outperforms the global variant.
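
    As a minimal sketch of the distance at the heart of the third approach, the function below computes Gower's distance between two mixed-type observations: range-scaled absolute differences for numeric variables and simple mismatches for categorical ones. The example data and variable names are invented for illustration; the actual charting pipeline (bootstrap control limits, D-squared/K-squared statistics) is not reproduced here.

    ```python
    def gower_distance(x, y, numeric_idx, categorical_idx, ranges):
        """Gower's distance between two mixed-type observations.

        numeric_idx / categorical_idx: column positions of each variable type.
        ranges: (max - min) of each numeric variable in the phase-I sample,
                used to scale absolute differences to [0, 1].
        """
        contributions = []
        for j, r in zip(numeric_idx, ranges):
            contributions.append(abs(x[j] - y[j]) / r if r > 0 else 0.0)
        for j in categorical_idx:
            contributions.append(0.0 if x[j] == y[j] else 1.0)
        return sum(contributions) / len(contributions)

    # Toy example: (age, severity score, ward, sex) for two hypothetical patients.
    a = [67, 12.5, "A", "F"]
    b = [59, 15.0, "B", "F"]
    print(gower_distance(a, b, numeric_idx=[0, 1], categorical_idx=[2, 3],
                         ranges=[40.0, 20.0]))   # mean of 0.2, 0.125, 1.0, 0.0 = 0.33125
    ```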
  • A Reliability Analysis of Repairable Mechanical Valves from a Chemical Plant

    Authors: Pasquale Erto (University of Naples "Federico II"), Antonio Lepore (University of Naples "Federico II")
    Primary area of focus / application: Reliability
    Secondary area of focus / application: Modelling
    Keywords: Failure mechanism, Intensity function, Model identification, Non-homogeneous Poisson process, Cramér-von Mises goodness-of-fit test
    Submitted at 12-Jun-2014 14:32 by Antonio Lepore
    Accepted
    22-Sep-2014 11:55 A Reliability Analysis of Repairable Mechanical Valves from a Chemical Plant
    In order to attain economies of scale, many chemical plants are planned and built to be very large and involve very complex control systems with several thousands of components. Among these, a vital role is played by the numerous mechanical valves, which are not standardized since they must perform different functions over a very large range of working conditions. Therefore, the large number of mechanical valves and their critical “mission” give rise to a relevant reliability problem.
    The valve data analyzed in this paper are from a data collection activity carried out over a 3½-year period (1307 days) at a power station. The dataset contains 446 valves and 1188 event records and covers different types, manufacturers, sizes and media, as well as a large range of operational pressures and temperatures.
    Their failure rates, which are commonly used, cannot however take into account all the many different phenomena which can occur alternatively or concomitantly. In fact, their working life is highly influenced by the specific function assigned to them and/or the actual working conditions. A practical approach, based on real failure data and on specific technological and/or operational information, is therefore proposed in this paper.
    The reliability model obtained is derived for different valve diameters, diameter being confirmed as the discriminating factor, and appears to be a good tool for practical applications and for setting up future research.
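
    The abstract does not state which intensity function was finally identified; a common choice for repairable systems is the power-law NHPP, whose maximum-likelihood estimates for time-truncated data have a closed form. The sketch below assumes that form and uses invented failure times, so it illustrates the general modelling step rather than the authors' specific model; a goodness-of-fit check (e.g. Cramér-von Mises) would follow in practice.

    ```python
    import math

    def power_law_nhpp_mle(failure_times, T):
        """Closed-form MLEs for a power-law NHPP with intensity
        lambda(t) = (beta / theta) * (t / theta) ** (beta - 1),
        observed on (0, T] (time-truncated data)."""
        n = len(failure_times)
        beta_hat = n / sum(math.log(T / t) for t in failure_times)
        theta_hat = T / n ** (1.0 / beta_hat)
        return beta_hat, theta_hat

    # Invented failure times (days) of one valve over the 1307-day observation window.
    times = [95, 210, 340, 460, 610, 705, 820, 930, 1100, 1250]
    beta_hat, theta_hat = power_law_nhpp_mle(times, T=1307.0)
    print(beta_hat, theta_hat)   # beta_hat > 1 suggests deterioration, < 1 reliability growth
    ```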
  • Update on the 5W&1H Problem Definition Method

    Authors: Jonathan Smyth-Renshaw (Jonathan Smyth-Renshaw & Associates Ltd)
    Primary area of focus / application: Quality
    Secondary area of focus / application: Six Sigma
    Keywords: 5W & 1H method, Problem definition, Case studies, Benefits, Cost saving
    Submitted at 12-Jun-2014 21:33 by Jonathan Smyth-Renshaw
    Accepted (view paper)
    23-Sep-2014 16:40 Update on the 5W&1H Problem Definition Method
    For the last few years I have been teaching and using the 5W&1H method (where in the process, where on the product, what product/service, who created/found the problem, when was the problem found/created, and how, i.e. the deviation from target/expectation/standard) to help clients define business problems within a structured framework. I wish to present the results from various clients to which I have either taught the method or for which I have used it in business improvement activities.

    It is important to note that the omission of the "why" question ensures that only the facts are used in the problem definition phase. My research has shown that this in turn leads to clearer and better problem definitions and therefore to more problems being solved.

    A further development is that commonly used approaches such as the 5 Whys and cause-and-effect matrices/diagrams become unnecessary in the problem-solving process.

    The presentation will cover a brief summary of the 5W&1H method. I will also discuss its structure and how the method is taught to clients. This is followed by a number of case studies from both the manufacturing and service sectors, demonstrating the application of the method and the benefits/cost savings achieved by the users. There will be time for discussion of the results, which I would value, as this method is the central theme of my PhD research.