ENBIS-15 in Prague

6–10 September 2015, Prague, Czech Republic. Abstract submission: 1 February – 3 July 2015

My abstracts

The following abstracts have been accepted for this event:

  • The Legacy of George Box and the Future of Industrial Statistics

    Authors: Geoff Vining (Virginia Tech)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Process
    Keywords: George Box, Experimental design, Time series, Process control
    Submitted at 29-May-2015 15:38 by Geoff Vining
    Accepted
    7-Sep-2015 14:30 George Box Award: Geoff Vining. Award talk on "The Legacy of George Box and the Future of Industrial Statistics"
    George Box was a true giant within statistics, especially within industrial statistics and quality engineering. He made truly significant contributions to experimentation, time series analysis, and process control.

    His presence in our field continues today and will extend into the future. This talk briefly reviews some of Box’s seminal contributions to our field in the areas of experimental design, time series, and process control. It then projects how these contributions continue to evolve. Specific topics include experimental design and analysis for reliability data and response surface methodology for functional data, such as profiles or time series. These developments essentially marry Box’s three areas in order to address important, real industrial-statistics problems that we face today and in the near future.
  • Applied Statistics in Support of Cities Simulation: Some Examples and Perspectives

    Authors: Atom Mirakyan (EIFER), Alexandru Nichersu (EIFER), Alberto Pasanisi (EDF R&D - EIFER), Muhammad Saed (EIFER), Nico Schweiger (EIFER), Maria Sipowicz (EIFER), Jochen Wendel (EIFER)
    Primary area of focus / application: Other: SFdS
    Keywords: Cities, Forecast, Decision-aid, Scenarios, Clustering
    Submitted at 29-May-2015 18:13 by Alberto Pasanisi
    Accepted
    9-Sep-2015 11:10 Applied Statistics in Support of Cities Simulation: Some Examples and Perspectives
    Simulation is a powerful tool in urban planning for forecasting short-, mid- and long-term evolutions of key performance indicators of sustainable development. By means of a systemic approach that tackles the city in its complexity, planners and decision-makers can analyze and compare the effects of different initiatives and choose those that best meet their goals in terms of energy efficiency, greenhouse-gas emissions, costs, eco-friendliness, etc.

    This communication highlights some examples of research work concerned with the application of statistical methods to this domain. Statistics can provide valuable support at all levels of simulation. On the one hand, it can help assess the scenarios to be used as simulation inputs. There are numerous driving forces, or variables, with a large impact on long-range energy planning and modelling for a city or territory. The analysis and accurate forecasting of these drivers can have a significant impact on the planning results and decisions; in this framework, composite forecast methods prove to be effective extrapolation tools.

    On the other hand, clustering methods provide powerful and intuitive tools to summarize information, make it more easily understandable to architects, engineers and decision-makers, and target areas (not necessarily administrative areas) and/or groups of buildings for local renovation initiatives, as sketched below.
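    As a rough illustration of the clustering step, here is a minimal Python sketch, not the authors' actual pipeline: buildings are grouped by hypothetical energy-related features with k-means, so that clusters, rather than administrative areas, can be targeted for renovation. All feature names and values are invented for illustration.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_buildings = 500
    # Hypothetical features: construction year, floor area (m^2),
    # annual heating demand (kWh/m^2)
    X = np.column_stack([
        rng.integers(1900, 2015, n_buildings),
        rng.lognormal(5.0, 0.5, n_buildings),
        rng.normal(150, 40, n_buildings),
    ])

    # Standardize so that no feature dominates the distance metric
    Xs = StandardScaler().fit_transform(X)
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xs)

    # Cluster profiles (means on the original scale) summarize the building
    # stock for planners; e.g. an old, energy-hungry cluster is a natural
    # candidate for a renovation initiative.
    for k in range(km.n_clusters):
        mask = km.labels_ == k
        print(f"cluster {k}: {mask.sum():3d} buildings, "
              f"mean year {X[mask, 0].mean():.0f}, "
              f"mean demand {X[mask, 2].mean():.0f} kWh/m^2")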
  • A Non-Stationary Spatial Kriging Model

    Authors: Helmut Waldl (Johannes Kepler University Linz)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Non-stationary spatial model, Kriging, Design of Experiments, Covariance function
    Submitted at 30-May-2015 20:00 by Helmut Waldl
    Accepted
    7-Sep-2015 17:20 A Non-Stationary Spatial Kriging Model
    In many practical applications the data show strong evidence of a spatially non-stationary covariance structure. However, practitioners mostly use a stationary spatial model, which is a simplification and strong idealization of reality. The reason is that little is known so far about how to handle non-stationarity in practice, and computation with non-stationary models can be challenging.

    If our task is the prediction of a whole realization of a field on the basis of only a few measurements, and if we have data with spatially varying variance, we should position our design points at locations with high variability. On the other hand, a trade-off has to be made between greedy information hunting and not neglecting large regions with low variation.

    Using a kriging model generalized to a non-stationary covariance structure, this trade-off is made automatically if we use the kriging variance as the design criterion (see the sketch below). When repeated observations of the spatial process over time are available, it is easy to incorporate non-stationarity in the model, and the additional computational effort is negligible.

    A concluding computer simulation experiment, based on data provided by the Belgian institute “Management Unit of the North Sea Mathematical Models”, compares the prediction performance of a standard stationary model with that of the directly generalized non-stationary model.
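    To make the design criterion concrete, here is a minimal Python sketch, not the author's model: a non-stationary covariance is obtained by rescaling a stationary Gaussian correlation with a hypothetical spatially varying standard deviation s(x), and the design is grown by repeatedly adding the grid point of maximum (simple) kriging variance. All functions and constants are illustrative assumptions.

    import numpy as np

    def s(x):
        # Hypothetical spatially varying standard deviation (grows with x)
        return 0.5 + 2.0 * x

    def corr(a, b, length=0.2):
        # Stationary Gaussian correlation between 1-d locations
        d = a[:, None] - b[None, :]
        return np.exp(-(d / length) ** 2)

    def cov(a, b):
        # Non-stationary covariance: C(x, x') = s(x) s(x') r(x - x')
        return s(a)[:, None] * s(b)[None, :] * corr(a, b)

    def kriging_variance(design, grid, nugget=1e-8):
        # Simple-kriging variance at every grid point, given the design
        K = cov(design, design) + nugget * np.eye(design.size)
        c = cov(design, grid)                  # cross-covariances
        w = np.linalg.solve(K, c)              # kriging weights
        return s(grid) ** 2 - np.sum(c * w, axis=0)

    grid = np.linspace(0.0, 1.0, 201)
    design = np.array([0.5])                   # start from a single point
    for _ in range(6):
        v = kriging_variance(design, grid)
        design = np.append(design, grid[np.argmax(v)])

    # Points concentrate where s(x) is large, yet regions of low variation
    # are not neglected once the high-variance region is covered.
    print(np.sort(np.round(design, 3)))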
  • The Influence of Business Characteristics on Statistically Uncertain Election Outcomes

    Authors: Rowan Pritchett (Imperial College London)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Mining
    Keywords: UK, Election, Boundaries, Poll
    Submitted at 31-May-2015 14:36 by Rowan Pritchett
    Accepted
    7-Sep-2015 10:40 The Influence of Business Characteristics on Statistically Uncertain Election Outcomes
    All democratic elections are subject to considerable uncertainty. Not only do people change their minds unpredictably, but the translation of votes into power can also turn on a knife edge. The recent UK general election was particularly difficult to call: the apparent trend from pollsters accelerated in the last few days, resulting in a clear but small majority for the sitting party. There are many alternative voting systems, but most are dependent on the location of constituency boundaries. In the UK, these boundaries appear to be based on the location of historical industries. This poster examines the correlations between constituencies and various types of business and explores the influence of location characteristics on the statistical uncertainty of the electoral outcome.
  • Big Data in the Industry Sector: Perspectives & Examples

    Authors: Marco P. Seabra dos Reis (Department of Chemical Engineering, University of Coimbra)
    Primary area of focus / application: Process
    Secondary area of focus / application: Mining
    Keywords: Big Data, Process monitoring, Industry, Latent variable methods
    Submitted at 31-May-2015 23:23 by Marco P. Seabra dos Reis
    Accepted
    9-Sep-2015 09:20 Big Data in the Industry Sector: Perspectives & Examples
    The “Big Data” movement has attracted increasing interest from public and private organizations over the last five years. Whether or not its meaning is precisely understood, people are receptive to the “nxV’s” message: “Volume + Velocity + Variety + Veracity + Value + …”. Data scientists, big software providers and consulting companies have assumed leadership roles in this process, setting its agenda and priorities. Quite surprisingly, data-centric professionals such as statisticians and quality engineers have taken a less active position in this phenomenon. One possible explanation for this apparent contradiction is that “Big Data” is perhaps not a “new thing” but a “journey”. And for many of us working in the industrial sector, this journey started in the early 1980s, when distributed control systems began to be installed massively across industry. Since then, a rich variety of methods and technologies has been put forward to handle the challenges raised by the increasing volume, velocity and variety of industrial data. So, when such professionals look at all this movement around the use of data and the primacy of informative data-visualization tools, they are often led to ask themselves, with a certain unstated suspicion, what the real novelty is.
    On the other hand, it is also clear that there is currently a vibrant mix of conditions available, at the level of technology, databases and analytics, which can make this 30-year-old endeavor more successful and efficient than ever before. In this communication we offer a personal perspective on the Big Data movement from the standpoint of the industrial sector and present some examples of its implementation to address relevant problems in this field.
  • A Sequential and Pragmatic Approach to Component’s Design Optimization by Computer Experiments

    Authors: Pietro Tarantino (Tetra Pak packaging solutions s.p.a.), Armando Stagliano (Tetra Pak packaging solutions s.p.a.), Magnus Arner (Tetra Pak packaging solutions AB), Alberto Mameli (Tetra Pak packaging solutions s.p.a.)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Computer experiments, Kriging models, Sequential strategy, Optimization
    Submitted at 1-Jun-2015 10:04 by Pietro Tarantino
    Accepted
    8-Sep-2015 16:15 A Sequential and Pragmatic Approach to Component’s Design Optimization by Computer Experiments
    Experimentation is an integral part of any development process (Wu and Hamada, 2000). A big role in industrial experimental campaigns is still played by physical experiments. The nature of these experiments, e.g. costly, time-consuming and difficult to set up, usually forces the experimenter to test a small number of factors at few levels and to ensure that certain requirements, e.g. replication, randomization and blocking, are satisfied (Vicario, 2006). By contrast, with computer experiments it is possible to relax many of these constraints and, in general, to reduce the time and cost of the study. Since the formal introduction of computer experiments by Sacks et al. (1989), substantial work has been done to make these experiments as efficient and effective as possible (Santner et al. 2003); as a consequence, more and more industrial studies are performed by replacing physical experimentation with a “virtual” one, in which a computer runs a program that simulates the behaviour of the system of reference (Kai-Tai et al. 2006).
    There is no unique approach to the design and analysis of computer experiments (Chen et al. 2003). Traditionally, space-filling or optimal designs have been used for exploring the design region, while polynomial regressions and Kriging models have been extensively used to build the meta-model, or emulator. Recently, sequential strategies have been introduced with the aim of reducing the experimental effort while keeping the required accuracy (Romano and Rocco, 2012). They consist of building a fairly accurate meta-model from a low number of experimental points and then adding new points iteratively, updating the meta-model each time according to a selected strategy, such as improving the accuracy of the meta-model itself or searching for the optimal point in the design space (see the sketch below).
    In this work, a hybrid approach is developed to achieve both meta-model accuracy and an optimum design solution while keeping the experimental effort low (Bonte et al. 2010). The proposed methodology is applied to a practical and complex industrial case study.
    The pragmatism of this strategy, together with its simplicity of implementation, encourages the generalization of the approach to other industrial experiments.
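    Here is a minimal Python sketch of such a sequential loop, under assumptions that are not the authors' (a toy 1-d "simulator", a Matérn kriging surrogate from scikit-learn, and expected improvement as the point-selection strategy): the meta-model is refit after each new run, and the next point is chosen where improvement over the current best is most promising.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def simulator(x):
        # Stand-in for an expensive computer experiment
        return np.sin(3 * x) + 0.5 * x

    def expected_improvement(mu, sd, best):
        # EI for minimization; a vanishing sd gives (almost) zero improvement
        sd = np.maximum(sd, 1e-12)
        z = (best - mu) / sd
        return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

    grid = np.linspace(0.0, 2.0, 401).reshape(-1, 1)
    X = np.array([[0.2], [1.0], [1.8]])        # small initial design
    y = simulator(X).ravel()

    for _ in range(8):
        # Refit the meta-model, then add the most promising point
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                      normalize_y=True).fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        x_new = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
        X = np.vstack([X, x_new])
        y = np.append(y, simulator(x_new))

    print(f"best point x = {X[np.argmin(y)][0]:.3f}, value = {y.min():.3f}")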