ENBIS-15 in Prague

6–10 September 2015; Prague, Czech Republic
Abstract submission: 1 February – 3 July 2015

My abstracts


The following abstracts have been accepted for this event:

  • Resilience Multi-Parameter Estimation

    Authors: Alistair Forbes (National Physical Laboratory)
    Primary area of focus / application: Metrology & measurement systems analysis
    Secondary area of focus / application: Other: IMEKO TC21 Session organised by Franco Pavese
    Keywords: Estimation, Metrology, Resilience, Uncertainty
    Submitted at 30-Apr-2015 13:17 by Alistair Forbes
    7-Sep-2015 12:00 Resilience Multi-Parameter Estimation
    Many experiments in metrology are aimed at measuring a single quantity, such as the length of a gauge block or the mass of an artefact. For these types of measurement, instruments are designed so that the response of the instrument is linked strongly to the variable of interest and largely unaffected by other influence factors. If we wish to measure a number of parameters associated with a system, we can employ a similar strategy and use a number of instruments, each dedicated to estimating one of the parameters of the system, with minimal cross-talk.

    For many practical measurements, eliminating cross-talk completely is not possible. For example, when measuring air quality parameters using electro-chemical sensors, it is usual for the response to one chemical species to affect the response to another, and most sensors are influenced by temperature and humidity. Another example is in determining values for the fundamental constants of physics, where the most accurate experiments estimate a nonlinear function of a small number of constants rather than a single constant.

    The fact that instrument responses depend on a subset of the parameters of interest can in fact be regarded as a good thing, as it enables more resilient systems to be designed: systems that can cope better if one instrument fails. This paper will discuss measures of resilience associated with multi-parameter estimation, and algorithms that can be used to analyse the data associated with resilient systems.
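    As a rough illustration of the idea (a hypothetical sketch, not the paper's algorithms), the system below has four instruments measuring three parameters with cross-talk; because of the redundant instrument, the parameters remain identifiable after any single instrument failure:

```python
import numpy as np

# Hypothetical response matrix: 4 instruments x 3 parameters.
# Off-diagonal entries represent cross-talk: each instrument
# responds mainly, but not only, to "its" parameter.
A = np.array([
    [1.0, 0.2, 0.0],
    [0.1, 1.0, 0.1],
    [0.0, 0.3, 1.0],
    [0.5, 0.0, 0.8],   # a redundant instrument adds resilience
])
x_true = np.array([2.0, -1.0, 0.5])    # assumed parameter values
rng = np.random.default_rng(0)
y = A @ x_true + rng.normal(scale=0.01, size=4)   # noisy readings

# Estimate all parameters jointly by least squares.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# One simple resilience check: the parameters stay identifiable
# (the design matrix keeps full column rank) after any single failure.
resilient = all(
    np.linalg.matrix_rank(np.delete(A, i, axis=0)) == A.shape[1]
    for i in range(A.shape[0])
)
print(x_hat, resilient)
```

    A richer resilience measure could weigh how much the parameter uncertainties grow after each failure, not just whether estimation remains possible.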
  • Development of an Open Application for Teaching Statistics: Conception and Results

    Authors: Lluis Marco-Almagro (UPC Universitat Politécnica de Catalunya, Barcelona Tech), Eduard Serrahima (UPC Universitat Politécnica de Catalunya, Barcelona Tech), Xavier Tort-Martorell (UPC Universitat Politécnica de Catalunya, Barcelona Tech), Pere Grima (UPC Universitat Politécnica de Catalunya, Barcelona Tech), Lourdes Rodero (UPC Universitat Politécnica de Catalunya, Barcelona Tech)
    Primary area of focus / application: Education & Thinking
    Keywords: Statistics teaching, Data analysis, Data visualization, Free software, Project-based learning
    Submitted at 30-Apr-2015 13:28 by Lluis Marco-Almagro
    Accepted
    7-Sep-2015 17:00 Development of an Open Application for Teaching Statistics: Conception and Results
    Several commercial statistical software packages are appropriate for teaching statistics. However, free alternatives are scarce, and those that exist suffer from problems (such as cumbersome installation and unintuitive interfaces) that limit their usability. This paper presents the conception and implementation of a new application for statistical analysis with the following characteristics: free, simple to use, multi-platform, scalable, and focused on statistical concepts and ideas.

    The whole development cycle will be explained in the presentation: the initial list of user requirements, design decisions, implementation and testing. The application is developed in the R language and uses the shiny package for the user interface. Although the application naturally allows the analysis of real data through graphs and statistical methods, its main objective is to facilitate the acquisition of statistical concepts. To accomplish this, menus, configuration options and results are presented in a way that fosters reflection on basic statistical ideas. The application places special emphasis on industrial statistics.

    Initial feedback from users testing the application will be reported, and ways to freely access and use the application will be presented.
  • Fostering Diversity in Measurement Science

    Authors: Franco Pavese (formerly INRIM, Torino)
    Primary area of focus / application: Metrology & measurement systems analysis
    Secondary area of focus / application: Consulting
    Keywords: Truth, Certainty, Uncertainty, Probability, Risk, Single thought, Diversity
    Submitted at 30-Apr-2015 15:22 by Franco Pavese
    The contrast between single thought and diversity has long been inherent in the search for ‘truth’ in science and beyond. This paper aims to summarize the road from certainty to uncertainty, chance, probability, decision making and risk, as embodying reasons why scientists should be humble when contending about methods for expressing experimental knowledge, and should instead support a diversity of methods. There must, however, be reasons for the present trend toward selecting a single direction of thinking, rather than using diversity as an approach to increase confidence that we are heading toward correct answers: some examples are listed. Concern is expressed that this trend could end up hindering, rather than promoting, scientific understanding.
  • MOOC and Teaching: An Inevitable Evolution for Lifelong Training?

    Authors: François Husson (Agrocampus Ouest), Magalie Houée-Bigot (Agrocampus Ouest)
    Primary area of focus / application: Other: Realization of MOOCs - technology, content and funding opportunities
    Keywords: MOOC, Teaching, Food industry, Lifelong training, Sensometry
    Submitted at 30-Apr-2015 15:52 by François Husson
    Accepted
    8-Sep-2015 15:35 MOOC and Teaching: An Inevitable Evolution for Lifelong Training?
    Lifelong training is becoming more and more important, because few students will spend their whole working life in the same job or relying on the same knowledge and expertise. This is a crucial issue for companies, which need to find partnerships in education.
    In this presentation, we take the example of a MOOC in sensometrics, i.e. the statistical treatment of sensory data, to show how we envisage using MOOCs for lifelong training in the food industry.
    Sensometrics is used in companies such as Nestlé, Danone, Coca-Cola, Renault and EDF, but also in smaller ones. The discipline uses well-known statistical methods (design of experiments, analysis of variance, principal component analysis, regression, etc.) on specific data that require the methodology to be adapted (inclusion of carry-over effects in the experimental design, confidence ellipses in PCA, a special use of regression in preference mapping, etc.). Data collection practices also evolve, which calls for new methods and methodologies. Lifelong training is therefore necessary.
    We will show in this presentation how the MOOC was built and why it is well suited, as a form of lifelong training, to learners in the food industry. We will also discuss ways to adapt a MOOC at low cost so that companies can obtain training specific to their needs. Some activities (videos, quizzes, exercises, discussion forum) may be common to everybody, as in the current MOOC, while others may be restricted to groups of learners from a single company.
    The MOOC promotes healthy competition between employees of the same company and, moreover, one of its main advantages is that learners can return to it whenever they need to. Such individual training is a guarantee of success, because the willingness to learn is present. How many people who followed a training session did not apply what they learned immediately, and had forgotten it by the time they needed it? With a MOOC it is possible to return to courses already followed, as if a training session taken two years earlier could be replayed. It would be embarrassing to ask one's boss to attend the same training session twice; with a MOOC there is no need to ask, because everything remains available.
  • A MOOC for Everyone ... Including the Business World!

    Authors: Magalie Houée-Bigot (Agrocampus Ouest), François Husson (Agrocampus Ouest)
    Primary area of focus / application: Other: Presentation session on MOOCs
    Keywords: MOOC, Teaching, Exploratory multivariate data analysis, E-learning
    Submitted at 30-Apr-2015 15:58 by Magalie Houée-Bigot
    8-Sep-2015 10:30 A MOOC for Everyone ... Including the Business World!
    The MOOC "Analyse de données multidimensionnelles" (exploratory multidimensional data analysis) ran in March 2015. This French-language MOOC was followed by more than 5,000 participants from 93 countries. 72% of the learners had a master's or doctoral level, which corresponded to the intended target, and the average age was 38.2 years. Most learners were therefore already in work, and they followed the MOOC as if they had attended a continuing-education course.
    Learners work in research institutes (INRA, INSERM, IRD, etc.) and universities (French, but also Tunisian, Turkish, Zairian, etc.), but also in a variety of industrial and business companies. Instead of organizing an in-house session on exploratory data analysis, some companies invited their employees to enrol in our MOOC; the MOOC was therefore used by some companies as continuing education. A satisfaction survey showed that learners enjoyed the MOOC, the most enthusiastic being the oldest.
    During the presentation, we will answer the following question: why can a MOOC with such a diverse audience satisfy all its learners (students, teachers, practitioners from industry)? Because the MOOC was designed to be followed in different ways depending on the time available; because the mathematical formalism was kept to a minimum; because the methods were explained and illustrated with numerous examples; and because we also took care of the form: the videos and sound were of good quality, transcripts were available, the software was free and relevant, and we were permanently present in the discussion forum. In short, we took care of the learners.
    Finally, we will answer a second question: Does the diversity of the audience enrich a MOOC?
  • Impact of Autocorrelation on Principal Component Analysis

    Authors: Erik Vanhatalo (Luleå University of Technology), Murat Kulahci (Technical University of Denmark and Luleå University of Technology)
    Primary area of focus / application: Process
    Keywords: Principal Component Analysis (PCA), Autocorrelation, Statistical Process Control (SPC), Simulation
    Submitted at 30-Apr-2015 16:33 by Erik Vanhatalo
    7-Sep-2015 15:55 Impact of Autocorrelation on Principal Component Analysis
    The popularity of latent variable methods such as principal component analysis (PCA) keeps growing as the development of automated measurement systems increases the availability of multivariate data. In many applications of PCA the purpose is descriptive: by reducing the data to a few latent variables, description and interpretation may be simplified. Dimensionality reduction is also important for inferential purposes, such as in statistical process control (SPC), where in many applications the large number of quality characteristics makes univariate monitoring ineffective and inefficient.

    One concern with automated data collection schemes is the increased frequency of sampling, which inevitably introduces serial dependence (autocorrelation) in the collected data. Jolliffe (2002, p. 299) states that “when the main objective of PCA is descriptive, not inferential, complications such as non-independence does not seriously affect this objective.” If PCA is used within SPC, i.e. for inferential purposes, and the scores on the principal components are monitored, serial dependence in the original variables is expected to affect monitoring performance. This can be explained by the fact that the principal components are linear combinations of autocorrelated variables, and hence the principal component scores will also be autocorrelated.
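    The last step of this argument is easy to check with a small simulation (an illustrative sketch with assumed coefficients, not the paper's simulation setup): scores from PCA on two cross-correlated AR(1) series inherit their serial dependence.

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi = 2000, 0.8   # series length and AR(1) coefficient (assumed values)

# Two AR(1) series driven by cross-correlated innovations.
e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=T)
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = phi * X[t - 1] + e[t]

# PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T          # principal component scores

def lag1_autocorr(z):
    z = z - z.mean()
    return (z[:-1] @ z[1:]) / (z @ z)

# The scores inherit the serial dependence of the original variables:
# the lag-1 autocorrelation of the first score series is close to phi.
print(lag1_autocorr(scores[:, 0]))
```

    Any linear combination y_t = a·X_t of these series again satisfies y_t = phi·y_{t-1} + a·e_t, which is why the score autocorrelation matches that of the original variables.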

    Traditional SPC techniques assume data that are independent in time. However, this assumption is becoming increasingly unrealistic in today’s applications. The issue of autocorrelation in univariate SPC charts has been discussed at length in the literature, while considerably less research has been reported on the effects of, and remedies for, autocorrelation in multivariate SPC (MSPC) charts. The potential solutions that emerge in the literature are: [1] to adjust the control limits of the charts by estimating the “true” process standard deviation, and [2] to use a residuals approach, where a univariate or multivariate time series model is fitted to the data and univariate or MSPC control charts are applied to the residuals. A specific solution suggested for PCA is “dynamic PCA”: to apply PCA to a data matrix that includes time-lagged versions of the original variables, see Ku et al. (1995).
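    A minimal sketch of the time-lagged data matrix underlying dynamic PCA (the helper name and toy data are ours, not from Ku et al.):

```python
import numpy as np

def lagged_matrix(X, lags):
    """Augment data matrix X (T x p) with time-lagged copies of its
    columns, as used in dynamic PCA: row t holds x_t, x_{t-1}, ..."""
    T, p = X.shape
    cols = [X[lags - l : T - l] for l in range(lags + 1)]
    return np.hstack(cols)   # shape (T - lags, p * (lags + 1))

X = np.arange(12.0).reshape(6, 2)   # toy data: 6 observations, 2 variables
Xa = lagged_matrix(X, lags=1)
print(Xa.shape)   # (5, 4): each row holds x_t alongside x_{t-1}
```

    Ordinary PCA is then applied to the augmented matrix, so the leading components can capture dynamic (cross-lag) relations as well as static ones.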

    Although potential solutions to the autocorrelation problem in MSPC and for PCA have been presented previously, it seems to us that the impact of autocorrelation on PCA-based SPC is not well documented.

    The purpose of this paper is to investigate and illustrate the impact of autocorrelation on the descriptive ability of PCA, as well as on the shift detection ability of PCA-based SPC. We illustrate the impact of autocorrelation on the descriptive ability of PCA by visualizing and discussing simulations of a bivariate case from a vector autoregressive model. Through further simulations we also show that the false alarm rate and shift detection ability of PCA-based SPC may be substantially affected by autocorrelation.


    Jolliffe, I.T. (2002). Principal Component Analysis (2nd ed.). Springer-Verlag, New York, NY.

    Ku, W., Storer, R.H., Georgakis, C. (1995). Disturbance Detection and Isolation by Dynamic Principal Component Analysis. Chemometrics and Intelligent Laboratory Systems, 30: 179–196.