ENBIS-14 in Linz

21 – 25 September 2014; Johannes Kepler University, Linz, Austria
Abstract submission: 23 January – 22 June 2014

My abstracts

 

The following abstracts have been accepted for this event:

  • Planning Efficient Paths for Spatial Field Observation by an Autonomous Agent

    Authors: Carolina Sotto (CNRS - UNSA), Maria-Joao Rendas (CNRS - UNSA)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Experimental designs, Spatial observation, IMSE
    Submitted at 23-Jun-2014 13:02 by Maria Joao Rendas
    Accepted (view paper)
    22-Sep-2014 11:25 Planning Efficient Paths for Spatial Field Observation by an Autonomous Agent
    We address the problem of experimental design for spatial field prediction using an autonomous observer with limited on-board energy. This constrains designs to periodic sampling along a 1D curve of pre-specified maximum length. The motivation comes from the observation of algal blooms in water basins, in the framework of the EU project DRONIC (dronicproject.com).

    Observation of natural fields traditionally resorts to networks of fixed instrumented buoys or to dedicated campaigns, the corresponding designs establishing a heuristic compromise between knowledge about the observed fields, budget, and other operational limitations. The use of autonomous instrumented platforms improves maneuverability and allows round-the-clock operation.

    We concentrate on the problem of interpolating the acquired measurements over a region of interest A. The problem is addressed in the framework of Kriging techniques, the efficiency of an observation path being measured by the predicted IMSE. Choosing the (unconstrained) IMSE-best design of size N is an NP-hard problem. For stationary GP models and under weak correlation between measurements, space-filling designs can be shown to be near-optimal. A number of authors have addressed the determination of optimal designs for Kriging with stationary covariance models, for which greedy approaches identify close-to-optimal solutions, but less attention has been devoted to finding optimal observation paths. An interesting approach was presented in [1], which optimises the mutual information between sampled points and the non-sampled (grid) positions, relying on a spatial decomposition of the region of interest based on the assumption of an isotropic decrease of correlation with distance.

    In this presentation we describe a stochastic search algorithm that finds near-optimal (in IMSE) observation paths for regions of arbitrary shape and for generic non-stationary correlation models, identified from mathematical models or from historical data. The algorithm can be tuned to trade off optimality of the resulting design against numerical complexity, while the use of a spectral approach [2] to compute the IMSE significantly decreases the execution time compared to usual methods.
    Results are illustrated by application to realistic datasets produced by the biogeochemical model MIRO&CO for the Southern Bight of the North Sea [3]. The efficiency of the resulting constrained designs is compared to that of simple greedy approaches. Our numerical experiments also reveal the impact of how the field’s non-stationarity is modelled.

    [1] A. Singh, A. Krause, C. Guestrin, W. Kaiser, Efficient Informative Sensing Using Multiple Robots, Journal of Artificial Intelligence Research, 34 (2009), 707-755.

    [2] B. Gauthier, L. Pronzato, J. Rendas, An alternative for the computation of IMSE-optimal designs of experiments, Book of Abstracts of the Seventh International Workshop on Simulation (2013).

    [3] G. Lacroix, K. Ruddick, Y. Park, N. Gypens, C. Lancelot, Validation of the 3D biogeochemical model MIRO&CO with field nutrient and phytoplankton data and MERIS-derived surface chlorophyll a images, Journal of Marine Systems, 64 (2007), 66-88.
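As a concrete point of comparison, the simple greedy approach mentioned in the abstract can be sketched as follows. This is a minimal illustration under assumed ingredients (an RBF kernel, simple Kriging on a discretised region), not the stochastic search algorithm of the talk; all function names and parameter values are ours:

```python
import numpy as np

def rbf(A, B, ell=0.2):
    """Stationary squared-exponential covariance between point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def kriging_imse(X_obs, X_grid, kernel=rbf, noise=1e-6):
    """Predicted IMSE: grid average of the simple-Kriging prediction variance."""
    K = kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    k = kernel(X_obs, X_grid)                      # (n_obs, n_grid)
    prior_var = np.ones(len(X_grid))               # unit-variance kernel
    mse = prior_var - np.sum(k * np.linalg.solve(K, k), axis=0)
    return mse.mean()

def greedy_design(candidates, X_grid, n_points):
    """Greedily add the candidate point that most reduces the predicted IMSE."""
    chosen = []
    for _ in range(n_points):
        best = min((i for i in range(len(candidates)) if i not in chosen),
                   key=lambda i: kriging_imse(candidates[chosen + [i]], X_grid))
        chosen.append(best)
    return chosen
```

A path-constrained variant would restrict the candidate set at each step to points reachable within the remaining energy budget, which is where a plain greedy scheme starts to lose efficiency.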
  • Describing Multiple Normal Operating States in Continuous Chemical Processes

    Authors: Gustavo Matheus de Almeida (Federal Univ. of Sao Joao del-Rei), Cássia Regina Santos Nunes Almeida (Federal Univ. of Sao Joao del-Rei), Song Won Park (University of Sao Paulo)
    Primary area of focus / application: Process
    Keywords: Multiple normal operating states, Hidden Markov model, Process monitoring, False alarm rate, Fault detection, Continuous chemical process, Heat exchanger
    Submitted at 23-Jun-2014 13:07 by Gustavo Almeida
    Accepted
    Continuous chemical processes are characterized by multiple normal operating states. These arise because independent process variables may vary within a considerable operating range; together with the associated dependent variables, they define a set of operating states, all of which are normal. This poses an additional challenge when modelling continuous chemical processes: how to model this set of multiple normal operating states, as well as the frequent interchanges among them? This point is crucial, for example, for obtaining low false alarm rates when developing automatic and reliable monitoring systems, which in turn contributes to early fault detection. This work investigates the potential of the hidden Markov model technique to describe multiple normal operating states in continuous chemical processes. This data-driven approach is a statistical pattern recognition system for sequential data. The motivation for its use is the possibility of describing subsets of normal operating (physical) states by a set of model states, i.e. the Markov chain, governed by a state transition matrix. A simulated heat exchanger unit is used as a case study. The results are compared with classical techniques for chemical process monitoring. This work is in progress; a better performance of hidden Markov modelling with respect to false alarm rate and early fault detection is expected, based on previous work by some of the authors (Almeida and Park, 2012).

    Reference:

    Almeida, G.M., Park, S.W. Fault detection in continuous industrial chemical processes: A new approach using the hidden Markov modeling. Case study: A boiler from a Brazilian cellulose pulp mill. In: Yin, H. et al. (eds.), 13th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL), LNCS 7435, pp. 743-752, Springer, 2012.
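As a rough illustration of the idea (not the authors' trained models), the sketch below scores windows of a univariate measurement under a two-state Gaussian HMM, representing two normal operating states with frequent interchanges, and raises an alarm when the log-likelihood drops below a threshold. All parameter values are invented for the example; in practice they would be estimated from normal operating data, e.g. by Baum-Welch:

```python
import numpy as np

def forward_loglik(y, pi, A, means, stds):
    """Log-likelihood of a 1-D observation sequence under a Gaussian HMM
    (forward algorithm with per-step scaling)."""
    def emis(x):
        return np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    alpha = pi * emis(y[0])
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for x in y[1:]:
        alpha = (alpha @ A) * emis(x)
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

# two normal operating states with frequent interchanges (illustrative values)
pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05], [0.05, 0.95]])   # state transition matrix
means, stds = np.array([0.0, 3.0]), np.array([1.0, 1.0])

def monitor(window, threshold):
    """Flag a fault when the window's log-likelihood drops below threshold."""
    return forward_loglik(window, pi, A, means, stds) < threshold
```

Because both normal states are part of the model, interchanges between them keep the likelihood high and do not trigger false alarms; only data inconsistent with every normal state does.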
  • Using DoE and Tolerance Intervals to Verify Specifications

    Authors: Pat Whitcomb (Stat-Ease, Inc.)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: DoE, Tolerance intervals, Software, Specifications
    Submitted at 26-Jun-2014 17:22 by Pat Whitcomb
    Accepted (view paper)
    22-Sep-2014 16:20 Using DoE and Tolerance Intervals to Verify Specifications
    Design of experiments, followed by numeric and graphical optimization, is frequently used by engineers to optimize a process and to define a design space or operating window. It is critical to understand the boundaries of this operating window: if you choose a specific set of operating conditions close to the boundary, what is the likelihood that normal process variation will cause the resulting measurements to be out of specification? To ensure that critical quality characteristics meet specifications, uncertainty (variability) must be accounted for in defining the boundaries of the design space. Design-Expert® software accounts for this uncertainty via confidence intervals and tolerance intervals. These intervals allow engineers to present a more meaningful design space, one in which they have confidence that the results will be reliable. A “tableting” process is used to illustrate the method.
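The tolerance-interval idea can be sketched as follows, using Howe's classical approximation for a two-sided normal tolerance interval (this is a generic textbook construction, not Design-Expert's internal computation; all names are illustrative):

```python
import numpy as np
from scipy import stats

def tolerance_interval(x, coverage=0.99, confidence=0.95):
    """Two-sided normal tolerance interval (Howe's approximation): with the
    given confidence, at least `coverage` of the population lies inside."""
    n = len(x)
    nu = n - 1
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, nu)        # lower chi-square quantile
    k = np.sqrt(nu * (1 + 1 / n) * z**2 / chi2)      # k-factor > z
    m, s = x.mean(), x.std(ddof=1)
    return m - k * s, m + k * s
```

If the computed interval falls inside the specification limits, one can claim, with the stated confidence, that at least the stated fraction of production will be in spec; a plain plug-in quantile range ignores the estimation uncertainty and is too narrow.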
  • Comparison of Some Factorial Designs when the Variation around Nominal is Noise in Robust Design

    Authors: Magnus Arnér (Tetra Pak)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Robust design, Variation around nominal, Combined arrays, Central composite designs
    Submitted at 26-Jun-2014 21:45 by Magnus Arnér
    Accepted
    23-Sep-2014 15:00 Comparison of Some Factorial Designs when the Variation around Nominal is Noise in Robust Design
    The idea in robust design is to find settings or values of the control factors that make the response as insensitive as possible to the values taken by noise factors, i.e. for y=f(x,z) (where x are the control factors and z the noise) we search for values of x making df(x,z)/dz small. Factorial designs can be used to find these values of the control factors, and the key to robustness lies in the control-by-noise interactions.

    A common type of noise factor is deviation from a nominal value. This may be the case for, say, the Shore hardness of rubber bushings, the thickness of a washer, or any other case of mass production and part-to-part variation. Several factorial designs can then be used for robust design activities. One is to treat the noise and control as separate factors even though they represent the same physical attribute. In the experimental design, the control factors would then form a design cube, and the noise factors smaller cubes around each corner of the design cube of the control factors, so that the sensitivity to a small disturbance in the control factor cube is investigated. The key in the analysis is the control-by-noise interactions. However, since the aim is to make df(x,z)/dz small, which in this case could alternatively be expressed as df(x+z)/dz, another possible experimental design is a central composite design; one then looks for second-order effects rather than interaction effects. In this presentation, we look at the drawbacks and advantages of these two approaches.
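The first approach can be sketched with a toy first-order model: fit y = b0 + b1*x + b2*z + b12*x*z from a crossed design and pick the control setting that zeroes the noise sensitivity dy/dz = b2 + b12*x. This is a minimal illustration of ours, not the presentation's case study:

```python
import numpy as np

def robust_setting(x, z, y):
    """Fit y = b0 + b1*x + b2*z + b12*x*z by least squares and return the
    control setting x* = -b2/b12 zeroing the sensitivity dy/dz = b2 + b12*x."""
    X = np.column_stack([np.ones_like(x), x, z, x * z])
    b0, b1, b2, b12 = np.linalg.lstsq(X, y, rcond=None)[0]
    return -b2 / b12

# 2^2 crossed design in coded units: noise z varied around each control corner
x = np.array([-1.0, -1.0, 1.0, 1.0])
z = np.array([-1.0, 1.0, -1.0, 1.0])
y = 5 + 2 * x + z + 2 * x * z      # true sensitivity dy/dz = 1 + 2x
print(robust_setting(x, z, y))     # ≈ -0.5, zeroing 1 + 2x
```

The control-by-noise coefficient b12 is exactly what makes a robust choice of x possible; if b12 were zero, no setting of x could damp the noise.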
  • Practical Issues Related to SLP-Based Load Allocation in Liberalized Electricity and Gas Markets

    Authors: Christian Ritter (Université Catholique de Louvain), Anne De Frenne (Math-X)
    Primary area of focus / application: Modelling
    Keywords: Electricity load, SLP estimation, Load allocation, Belgian electricity market
    Submitted at 2-Jul-2014 17:39 by Christian Ritter
    Accepted
    23-Sep-2014 09:40 Practical Issues Related to SLP-Based Load Allocation in Liberalized Electricity and Gas Markets
    Theoretically, estimates of load patterns from statistical samples of clients equipped with special meters are efficient and accurate. The corresponding models are called SLPs (synthetic load profiles). They can be weighted by client portfolios and serve as a basis to arbitrate allocation residuals. In practice, it is hard to maintain such client samples. Usually, informed consent by the client is necessary, and some other technical prerequisites must also be fulfilled. In particular, the specialized meters have to be connected to the electricity distribution network and need mobile phone coverage. Moreover, clients change consumption patterns, and their providers do not always keep track of such changes. Therefore, actual client samples for use with SLP estimation are not necessarily representative. In addition, they often have systematically missing data. In this talk, we report on our observations with SLP estimation for the Belgian markets since 2002.
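The basic allocation step can be sketched as follows, under strong simplifications (clients enter only through per-class counts, and the residual between the metered total and the synthetic profiles is spread pro rata); actual Belgian settlement rules are more involved, and all names below are ours:

```python
import numpy as np

def allocate(total_load, profiles, portfolio):
    """Allocate a metered grid total to suppliers in proportion to their
    SLP-weighted portfolios; the allocation residual is absorbed pro rata.

    profiles:  (n_slp, n_hours) synthetic load profiles
    portfolio: (n_suppliers, n_slp) client counts per SLP class
    """
    synthetic = portfolio @ profiles             # (n_suppliers, n_hours)
    share = synthetic / synthetic.sum(axis=0)    # per-hour market shares
    return share * total_load                    # rows sum to the metered total
```

Because the shares are renormalised each hour, the difference between the metered total and the sum of the synthetic profiles (the allocation residual) is distributed across suppliers in proportion to their synthetic loads.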
  • Optimal Monitoring Design for Energy Transmission Systems

    Authors: Dirk Surmann (Technische Universität Dortmund), Uwe Ligges (Technische Universität Dortmund), Claus Weihs (Technische Universität Dortmund)
    Primary area of focus / application: Design and analysis of experiments
    Keywords: Energy transmission systems, Design of Experiments, D-optimal designs, Low frequency oscillation
    Submitted at 2-Jul-2014 17:43 by Dirk Surmann
    Accepted
    23-Sep-2014 09:00 Optimal Monitoring Design for Energy Transmission Systems
    The European electrical transmission system is operating close to its operational limits due to market integration, energy trading and the increasing feed-in from renewable energies. The system has therefore become more vulnerable to disturbances in different areas, for example energy permanently oscillating at a low frequency. Analysing this Low Frequency Oscillation (LFO) requires measurements of voltage angle and magnitude at different positions in the transmission system. Since the considered system consists of a large number of nodes, our aim is to derive a subset of nodes which contains sufficient information about the LFO. This subset is more manageable than interrogating all nodes.

    In order to achieve our aim we derive a parameter, based on the model of Low Frequency Oscillation, which characterises every single node. By analysing the behaviour of each node with respect to its neighbours, we construct a feasible linear metamodel over the whole transmission system. We apply convex design-of-experiments theory, especially the D-criterion, to the metamodel. This results in a subset of nodes which contains the most information about the European electrical transmission system. The talk will describe the quality of the D-optimal design by comparing it with differently selected subsets of nodes and with a uniform design over all nodes. Compared to the alternatives, the D-optimal design comprises a subset with the minimum number of nodes that still guarantees a sufficient amount of data.
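A forward greedy selection under the D-criterion can be sketched as follows. This is an illustrative sketch, not the convex design computation used in the talk; F stands for an assumed regressor matrix of the linear metamodel, one row per candidate node:

```python
import numpy as np

def greedy_d_optimal(F, n_select, ridge=1e-9):
    """Forward greedy D-optimal subset: at each step add the candidate row of
    F that maximises log det(X'X + ridge*I) of the resulting design matrix X."""
    chosen = []
    p = F.shape[1]
    for _ in range(n_select):
        def logdet(idx):
            X = F[chosen + [idx]]
            return np.linalg.slogdet(X.T @ X + ridge * np.eye(p))[1]
        best = max((i for i in range(len(F)) if i not in chosen), key=logdet)
        chosen.append(best)
    return chosen
```

The chosen rows greedily maximise the determinant of the information matrix X'X, i.e. minimise the volume of the confidence ellipsoid of the metamodel parameters; the small ridge term only keeps the first few iterations, where X'X is singular, well defined.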