ENBIS-15 in Prague

6–10 September 2015, Prague, Czech Republic
Abstract submission: 1 February – 3 July 2015

The following abstracts have been accepted for this event:

  • Latin Squares Design of a Freezing Method Experiment

    Authors: Froydis Bjerke (Animalia Meat and Poultry Research Centre)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Quality
    Keywords: Water holding capacity, Meat quality, Outliers, Frozen meat
    Submitted at 30-Apr-2015 16:39 by Froydis Bjerke
    Accepted
    8-Sep-2015 15:55 Latin Squares Design of a Freezing Method Experiment
    When raw meat is frozen and thawed, its quality is reduced compared to fresh (chilled, but never frozen) meat. In particular, the water holding capacity, measured by drip loss, is a meat quality characteristic that is affected by freezing. It is assumed that the water in the meat forms ice crystals that break the cell walls, increasing the leakage of liquid from the muscle cells after thawing. In addition, water holding capacity is a heritable trait and varies between individuals.

    To compare different freezing methods with respect to drip loss, a Latin squares design was constructed. The effect of method had to be separated from the effect of individual animals, while also allowing for possible variation within an individual.

    The presentation describes possible experimental designs for this purpose, as well as how the experiment was actually run. The handling of outliers is also discussed. The results indicated that the methods in question differed with respect to drip loss, in a way that supported a research proposal for improved frozen meat quality.
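
    As an illustration of the design idea, here is a minimal sketch, not the authors' actual layout (whose dimensions the abstract does not give): a cyclic Latin square assigning hypothetical freezing methods to individuals (rows) and within-individual sample positions (columns), with rows and columns randomised.

    import numpy as np

    rng = np.random.default_rng(1)
    methods = ["A", "B", "C", "D"]   # hypothetical freezing methods
    n = len(methods)

    # Cyclic n x n Latin square: each method occurs once per row and column.
    square = np.array([[methods[(i + j) % n] for j in range(n)] for i in range(n)])

    # Randomise rows (individuals) and columns (within-individual positions).
    square = square[rng.permutation(n)][:, rng.permutation(n)]

    print("columns = sample positions within an individual")
    for i, row in enumerate(square):
        print(f"individual {i + 1}: {' '.join(row)}")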
  • Model-Based Selection of Measuring Positions for Energy Transmission Systems

    Authors: Dirk Surmann (TU Dortmund, Fakultät Statistik)
    Primary area of focus / application: Modelling
    Keywords: Selection of measuring positions, Genetic algorithm, Random field metamodel, Energy transmission system
    Submitted at 30-Apr-2015 16:56 by Dirk Surmann
    Accepted
    7-Sep-2015 17:40 Model-Based Selection of Measuring Positions for Energy Transmission Systems
    The European electrical transmission system is operating close to its limits due to market integration, energy trading and the increasing feed-in from renewable energies. The system has therefore become more vulnerable to disturbances in different areas, for example energy persistently oscillating at a low frequency. Analysing this Low Frequency Oscillation (LFO) requires measurements of voltage angle and magnitude at different positions (nodes) in the transmission system. Because the system consists of a large number of nodes, our aim is to derive a subset of nodes which contains sufficient information about the LFO. Such a subset is easier to manage than interrogating all nodes.

    To achieve this aim, we derive a parameter set for the Low Frequency Oscillation, based on differential equations, which characterises each measuring position (node). By analysing the behaviour of each node with respect to its neighbours, we construct a feasible random field metamodel over the whole transmission system. The random field works in a discrete spatial domain with a non-isotropic distance function. We derive a statistic to evaluate the metamodel using information from a subset of measuring positions. Using a genetic algorithm, we optimise the selected subset with respect to the target statistic. This results in a subset of nodes which contains the most information about the European electrical transmission system. The talk will describe the method and compare results from different energy transmission systems.
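
    The following minimal sketch illustrates only the subset-selection step and is not the authors' implementation: the target statistic here is a stand-in (the log-determinant of the selected nodes' covariance block, a D-optimality-like surrogate for information content), and the covariance matrix is synthetic, whereas the paper derives its statistic from the random field metamodel.

    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, k, pop_size, n_gen = 30, 5, 40, 60

    # Synthetic covariance over the nodes (placeholder for the metamodel).
    A = rng.normal(size=(n_nodes, n_nodes))
    cov = A @ A.T / n_nodes + np.eye(n_nodes)

    def fitness(subset):
        # Information carried by the subset: log-det of its covariance block.
        return np.linalg.slogdet(cov[np.ix_(subset, subset)])[1]

    def mutate(subset):
        # Swap one selected node for a currently unselected one.
        out = subset.copy()
        out[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(n_nodes), out))
        return out

    pop = [rng.choice(n_nodes, size=k, replace=False) for _ in range(pop_size)]
    for _ in range(n_gen):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        pop = elite + [mutate(p) for p in elite]   # offspring by mutation

    best = max(pop, key=fitness)
    print("selected nodes:", sorted(best), "fitness:", round(fitness(best), 3))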
  • Functional Generalized Linear Models and Outlier Detection for an HVOF Spraying Process

    Authors: Sonja Kuhnt (Dortmund University of Applied Sciences and Arts), Andre Rehage (TU Dortmund University)
    Primary area of focus / application: Modelling
    Keywords: Outlier detection, Functional data, Generalized linear models, Thermal spraying
    Submitted at 30-Apr-2015 17:04 by Sonja Kuhnt
    Accepted
    7-Sep-2015 15:35 Functional Generalized Linear Models and Outlier Detection for an HVOF Spraying Process
    Thermal spraying processes are becoming increasingly popular in industry for applying a coating to a surface, e.g. to improve the wear protection of machine components. An interesting technique in this framework is high velocity oxygen fuel (HVOF) spraying: here, a powder is fed into a jet, accelerated and heated by a mixture of oxygen and fuel, and finally deposited as the coating upon the substrate. Thanks to advanced measurement technology, the properties of the powder particles in flight are recorded over time, so that they can be regarded as functional data. These properties include particle temperature and velocity, among others. To improve functional generalized linear models for the HVOF spraying process, we aim to identify functional outliers among the particles in flight. Functional outliers can be divided into at least two types: magnitude outliers and shape outliers. For each category, functional data depths are needed that assign to each observation an index quantifying its outlyingness. We look at the modified band depth (magnitude) and our recently developed functional tangential angle depth (shape). It is of interest how the different outlier types affect the results of the functional model and how outlier detection can be performed to improve the reproducibility and predictability of the coating properties.
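
    As a sketch of the magnitude-outlier step, the following computes the modified band depth (bands of two curves, in the sense of López-Pintado and Romo) for synthetic curves with one planted magnitude outlier; the functional tangential angle depth for shape outliers is the authors' own development and is not reproduced here.

    import numpy as np
    from itertools import combinations

    def modified_band_depth(curves):
        # curves: (n_curves, n_points). For each curve, average over all pairs
        # the proportion of the domain on which it lies inside the pair's envelope.
        n = len(curves)
        depth = np.zeros(n)
        for j, k in combinations(range(n), 2):
            lo = np.minimum(curves[j], curves[k])
            hi = np.maximum(curves[j], curves[k])
            depth += ((curves >= lo) & (curves <= hi)).mean(axis=1)
        return depth / (n * (n - 1) / 2)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 100)
    curves = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(20, t.size))
    curves[0] += 1.5   # planted magnitude outlier

    depth = modified_band_depth(curves)
    print("lowest-depth curve (candidate magnitude outlier):", depth.argmin())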
  • Ensuring Traceability in Data Products

    Authors: Alistair Forbes (National Physical Laboratory), Maurice Cox (National Physical Laboratory), Peter Harris (National Physical Laboratory), Keith Lines (National Physical Laboratory), Ian Smith (National Physical Laboratory)
    Primary area of focus / application: Metrology & measurement systems analysis
    Secondary area of focus / application: Other: Session on Standards organised by Rainer Goeb
    Keywords: Data, Software, Standards, Traceability
    Submitted at 30-Apr-2015 17:45 by Alistair Forbes
    Accepted
    9-Sep-2015 11:10 Ensuring Traceability in Data Products
    Data transformation generally involves converting raw or low-level data into higher-level data products (or knowledge) that are then used for inference and decision making. This transformation is almost always performed using software, and the integrity of the data products depends directly on the integrity of the software performing the transformation as well as on the integrity of the lower-level data. For quality assurance purposes, compliance with standards, regulations and directives, legal liability, etc., we usually have to be able to show how the data products have been constructed from the lower-level data. The issue of provenance is becoming particularly relevant to the Big Data agenda, in which statistical learning algorithms are used to create the data products. While the underlying design of a statistical learning algorithm may be explicitly known, its implementation often depends on a number of tuning parameters that are set deep within the code and hidden from the general user. This makes recreating a data product from raw sources difficult. This paper looks at two initiatives aimed at ensuring traceability in data products. The first relates to an ISO technical report, ISO/TR 13519:2012, Guidance on the development and use of ISO statistical publications supported by software. The second relates to the outputs of a European project on traceability for computationally-intensive metrology, whose aim is to provide a methodology for making data products derived from measurement data traceable, in a way analogous to establishing traceability for physical artefacts. We also discuss the role of standards in ensuring traceability for data products.
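
    As a hypothetical illustration of the tuning-parameter problem, not a format prescribed by either initiative, the following sketch bundles a data product with the provenance needed to recreate it, making the otherwise hidden tuning parameters explicit.

    import hashlib, json, platform

    def make_data_product(raw, tuning):
        product = sorted(raw)[: tuning["top_k"]]   # stand-in transformation
        provenance = {
            "input_sha256": hashlib.sha256(json.dumps(raw).encode()).hexdigest(),
            "tuning_parameters": tuning,           # expose the hidden knobs
            "python_version": platform.python_version(),
        }
        return {"product": product, "provenance": provenance}

    print(json.dumps(make_data_product([5, 3, 9, 1], {"top_k": 2}), indent=2))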
  • Sensitivity and Robustness of Designs for Repeated Measures Accelerated Degradation Tests

    Authors: Nikolaus Haselgruber (CIS Consulting in Industrial Statistics GmbH), Ernst Stadlober (TU Graz)
    Primary area of focus / application: Reliability
    Secondary area of focus / application: Modelling
    Keywords: Design of Experiments, Lifetime distribution, Mixed effects model, Accelerated degradation testing, Sensitivity, Robustness
    Submitted at 30-Apr-2015 22:13 by Nikolaus Haselgruber
    Accepted
    7-Sep-2015 10:40 Sensitivity and Robustness of Designs for Repeated Measures Accelerated Degradation Tests
    Accelerated degradation tests are applied when the response variable describes a degradation phenomenon. This is typically a property of an item of interest which changes monotonically over time, e.g. a resistance in an electric circuit. An item's end of life is reached as soon as the degradation exceeds a defined limit. Repeated measures are required to determine the degradation process. Usually, this process depends on one or more stress variables, such as temperature, mechanical load, etc. Here, an interesting problem for optimal design is the estimation of a quantile of the lifetime distribution under a given reference stress level.
    B. Weaver and W.Q. Meeker proposed in [1] a method for the optimal design of repeated measures accelerated degradation tests based on mixed-effects models. In this presentation, results concerning the robustness and sensitivity of such designs, extracted from [2], will be shown (a small simulation sketch of this setting follows the references below).
    References:
    [1] Weaver, B. and Meeker, W.Q. (2014). Methods for planning repeated measures accelerated degradation tests. Applied Stochastic Models in Business and Industry, 30, 658-671.
    [2] Haselgruber, N. and Stadlober, E. (2014). Discussion of 'Methods for planning repeated measures accelerated degradation tests'. Applied Stochastic Models in Business and Industry, 30, 680-685.
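
    The following is a minimal sketch of the repeated-measures degradation setting, using synthetic linear degradation paths with random unit-to-unit slopes and a fixed failure limit; it illustrates the lifetime-quantile target, not the optimal-design method of [1].

    import numpy as np

    rng = np.random.default_rng(0)
    n_items, limit = 200, 10.0
    times = np.array([0.0, 50.0, 100.0, 150.0])   # repeated measurement times

    # Unit-to-unit slope variation (random effect) plus measurement noise.
    slopes = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=n_items)
    paths = slopes[:, None] * times + rng.normal(scale=0.2, size=(n_items, times.size))

    # Per-item least-squares slope through the origin, then the extrapolated
    # time at which each item's degradation crosses the failure limit.
    fitted = (paths * times).sum(axis=1) / (times ** 2).sum()
    lifetime = limit / fitted

    print("estimated 10% lifetime quantile:", round(np.quantile(lifetime, 0.1), 1))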
  • Diagnostic Performance of Automated Imaging Technologies for Identifying People with Glaucoma - The Glaucoma Automated Test (GATE) Study

    Authors: Charles Boachie (University of Glasgow), A. Azuara-Blanco (Queen's University Belfast), Katie Banister (University of Aberdeen), Peter McMeekin (Newcastle University), J. Gray (Northumbria University), Jennifer Burr (University of St. Andrews), R. Bourne (Anglia Ruskin University), D. Garway-Heath (UCL Institute of Ophthalmology), M. Batterbury (Royal Liverpool and Broadgreen University Hospitals NHS Trust), R. Hernández (University of Aberdeen), G. McPherson (University of Aberdeen), C. Ramsay (University of Aberdeen), J. Cook (University of Oxford)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Glaucoma, Diagnostic, ROC, AUC
    Submitted at 30-Apr-2015 23:34 by Charles Boachie
    Accepted
    8-Sep-2015 15:35 Diagnostic Performance of Automated Imaging Technologies for Identifying People with Glaucoma - The Glaucoma Automated Test (GATE) Study
    Many glaucoma referrals from the community to hospital eye services (HES) are unnecessary. Imaging technologies can potentially be used to triage this population. We conducted a study to assess the diagnostic performance and cost-effectiveness of imaging technologies as triage tests for identifying people with glaucoma. Participants were adults referred from the community to HES for possible glaucoma, in a secondary care setting in the UK. The interventions were the Heidelberg Retinal Tomograph (including two diagnostic algorithms, HRT-GPS and HRT-MRA), scanning laser polarimetry (GDx), and optical coherence tomography (OCT). The reference standard was clinical examination by a consultant ophthalmologist with glaucoma expertise, including visual field testing and intraocular pressure (IOP) measurement.
    In this talk I will focus on the statistical methods used to evaluate the diagnostic performance of medical devices, particularly the use of Receiver Operating Characteristic (ROC) curves. I will use simulated data to explain how to produce ROC curves and calculate the Area Under the ROC Curve (AUC), and I will use the GATE data to demonstrate how the three automated technologies compared. I will describe the study design, present the results and share the conclusions.
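
    In the spirit of the talk's simulated-data illustration (this is not the GATE analysis), the following sketch builds an empirical ROC curve for a continuous test score and computes the AUC both by trapezoidal integration and via its Mann-Whitney interpretation.

    import numpy as np

    rng = np.random.default_rng(0)
    controls = rng.normal(0.0, 1.0, size=500)   # test scores without glaucoma
    cases = rng.normal(1.2, 1.0, size=100)      # test scores with glaucoma

    # Sweep thresholds from high to low to trace the empirical ROC curve.
    thresholds = np.sort(np.concatenate([controls, cases]))[::-1]
    tpr = np.array([(cases >= c).mean() for c in thresholds])      # sensitivity
    fpr = np.array([(controls >= c).mean() for c in thresholds])   # 1 - specificity

    auc_trap = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)     # trapezoid rule

    # Mann-Whitney form: probability that a random case outscores a random control.
    diff = cases[:, None] - controls[None, :]
    auc_mw = (diff > 0).mean() + 0.5 * (diff == 0).mean()

    print(f"AUC (trapezoid) = {auc_trap:.3f}, AUC (Mann-Whitney) = {auc_mw:.3f}")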