ENBIS-17 in Naples

9 – 14 September 2017; Naples (Italy)
Abstract submission: 21 November 2016 – 10 May 2017

My abstracts

The following abstracts have been accepted for this event:

  • Big Data Analytics for Online Monitoring of Processes

    Authors: Flavia Dalia Frumosu (Technical University of Denmark), Murat Kulahci (Technical University of Denmark)
    Primary area of focus / application: Process
    Secondary area of focus / application: Mining
    Keywords: Big Data, Big Data analytics, Process monitoring, Machine Learning
    Submitted at 6-Apr-2017 16:00 by Flavia Dalia Frumosu
    Accepted (view paper)
    12-Sep-2017 16:00 Big Data Analytics for Online Monitoring of Processes
    The expanding availability of more complex data structures requires the development of new analysis methods for process understanding and monitoring. This complexity stems from readily available high-frequency and high-dimensional data. Considerable effort has gone into incorporating latent-structure-based methods in the context of complex data. Machine learning methods, on the other hand, have received less attention, as they have primarily been used for predictive purposes. In this paper we explore, through examples, the use of machine learning methods, hereafter referred to as big data analytics, in the pursuit of process monitoring and control.
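As a minimal, hypothetical illustration of what a machine-learning-based monitoring scheme can look like (this is a generic sketch, not the method of the paper; the data, dimensions, and thresholds are all made up), the snippet below flags a new observation when its average distance to its k nearest in-control reference points exceeds an empirical control limit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase I reference data: 500 in-control observations, 10 variables (made up)
reference = rng.normal(0.0, 1.0, size=(500, 10))

def knn_statistic(x, reference, k=5, skip=0):
    """Mean distance from x to its k nearest reference observations.
    skip=1 excludes the zero self-distance when x belongs to the reference set."""
    d = np.sort(np.linalg.norm(reference - x, axis=1))
    return d[skip:skip + k].mean()

# Empirical control limit: 99th percentile of the statistic over the reference set
stats = np.array([knn_statistic(x, reference, skip=1) for x in reference])
limit = np.quantile(stats, 0.99)

# An observation shifted by 3 standard deviations in every variable should alarm
shifted = np.full(10, 3.0)
print(knn_statistic(shifted, reference) > limit)
```

Unlike a latent-structure chart, this statistic makes no linearity assumption about the in-control region; the price is that the control limit must be calibrated empirically from the reference data.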
  • Data for Planning and Monitoring Smart and Sustainable Cities: Downscaling and Validation Perspectives

    Authors: Gilles Plessis (EIFER), Manon Pons (EIFER), Alberto Pasanisi (EIFER)
    Primary area of focus / application: Other: smart city
    Secondary area of focus / application: Modelling
    Keywords: City, Simulation, Data, Validation, Downscaling
    Submitted at 7-Apr-2017 09:13 by Gilles Plessis
    Accepted
    13-Sep-2017 09:00 Data for Planning and Monitoring Smart and Sustainable Cities: Downscaling and Validation Perspectives
    In a worldwide context of growing urbanisation, the planning and development of smart and sustainable cities are often seen as a way to rationalise energy consumption and reduce greenhouse gas emissions, with the aim of mitigating global warming.
    Moreover, the “open data” transition, facilitated by increased computing performance and the multiplication of connected IT devices, and encouraged by public support, is a reality today. In this context, free public data are increasingly available in standardised formats.
    This creates great opportunities to develop tools that support stakeholders of urban development in early planning phases, but also give them the possibility to monitor, ex post, the effects of particular measures (e.g. energy savings).
    This communication proposes a case study focusing on the residential sector. A simulation tool based on national public data and domain expertise, used to determine energy consumption and potential energy-efficiency measures, is presented. Results at the local scale are compared with real data and biases are identified.
  • Retail Merchandising Based on Analytics and Experts' Judgement

    Authors: Giuseppe Craparotta (Università di Torino), Roberta Sirovich (Università di Torino), Elena Marocco (Università di Torino)
    Primary area of focus / application: Business
    Secondary area of focus / application: Mining
    Keywords: Retail, Artificial intelligence, Forecasting, Innovation, Luxury, Fashion
    Submitted at 7-Apr-2017 22:49 by Giuseppe Craparotta
    Accepted (view paper)
    13-Sep-2017 10:10 Retail Merchandising Based on Analytics and Experts' Judgement
    Retail stock allocation is crucial but challenging for verticals whose sales are difficult to predict accurately. The authors developed an innovative solution, in the context of a high-end fashion application, based on collaboration between artificial intelligence and human intuition. Each week, stores are assigned a budget based on current stock levels versus potential sales, and a list of SKUs and quantities to order and release is recommended. Each store manager is given a time window to modify the proposal while respecting the budget constraints. The artificial intelligence then optimally allocates stock based on the requests and on the expected likelihood of sale minus logistics costs, plus other management-defined constraints. In a test, the pilot stores outperformed control stores that relied on traditional head-office-driven allocation. The retailer boosted sales, demand cover, and stock rotation, with an impact worth an estimated 1M EUR of margin per month. Moreover, the new system improved store performance and store managers' morale through empowerment driven by non-monetary incentives.
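A much-simplified, hypothetical sketch of score-driven allocation (illustrative only; not the retailer's actual system, and all store names, SKUs, budgets, and scores below are invented): each unit of each SKU goes to the store with the highest net score (expected sale likelihood minus logistics cost) that still has budget, i.e. a greedy allocation.

```python
def allocate(stock, stores):
    """Greedy stock allocation.
    stock:  {sku: units available}
    stores: {store: {"budget": max units, "score": {sku: net score}}}"""
    plan = {name: {} for name in stores}
    for sku, units in stock.items():
        for _ in range(units):
            best_store, best_score = None, 0.0
            for name, info in stores.items():
                spent = sum(plan[name].values())
                score = info["score"].get(sku, 0.0)
                # Only consider stores under budget with a positive net score
                if spent < info["budget"] and score > best_score:
                    best_store, best_score = name, score
            if best_store is None:
                break  # no store gains from another unit of this SKU
            plan[best_store][sku] = plan[best_store].get(sku, 0) + 1
    return plan

stock = {"bag-123": 3}
stores = {
    "Milan": {"budget": 2, "score": {"bag-123": 0.9}},
    "Rome":  {"budget": 5, "score": {"bag-123": 0.5}},
}
print(allocate(stock, stores))  # Milan fills its budget first, then Rome
```

In the system described in the abstract, store managers' requests and management-defined constraints would enter as additional terms and bounds; the greedy loop above only conveys the score-versus-budget trade-off.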
  • Phase I Performance of Control Charts under Temporally Random Contamination of Observations

    Authors: Murat Caner Testik (Hacettepe University), Christian H. Weiß (Helmut Schmidt University), Yesim Koca (Hacettepe University), Ozlem Muge Testik (Hacettepe University)
    Primary area of focus / application: Process
    Keywords: Statistical Process Control, Phase I study, Parameter estimation, Effects of estimation
    Submitted at 10-Apr-2017 21:50 by Murat Caner Testik
    Accepted
    13-Sep-2017 10:30 Phase I Performance of Control Charts under Temporally Random Contamination of Observations
    Several studies have investigated the effects of parameter estimation in a Phase I study of control charts. However, these studies mostly assumed that the reference sets of observations used for parameter estimation come from an in-control process, and accordingly studied the effects of sampling variability on the Phase II performance of control charts. In this study, a different approach is taken: it is assumed that the reference sets of observations used in Phase I studies may be contaminated. To detect the contaminated samples representing out-of-control process states and to estimate the process parameters, Shewhart control charts for sample averages and sample standard deviations were simulated for normally distributed observations. Control limit revisions were implemented iteratively, as recommended in textbooks. Contaminations were generated as random shifts that may affect none, one, or more of the observations in a sample. Performance metrics such as the number of iterations and the true- and false-alarm percentages are discussed under several scenarios that may be useful in practical settings.
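The iterative limit-revision procedure mentioned above can be sketched in its textbook form as follows (a simplified illustration, not the study's simulation code: here contamination shifts entire subgroups, and the c4 bias correction for the pooled standard deviation is omitted):

```python
import numpy as np

def phase1_revision(samples, L=3.0):
    """Iterative Phase I X-bar limit revision: estimate limits from the retained
    subgroups, drop subgroups whose means fall outside, and repeat until none
    are flagged. samples: (m, n) array of m subgroups of size n. Sigma is the
    mean of the subgroup standard deviations (c4 correction omitted)."""
    samples = np.asarray(samples, dtype=float)
    n = samples.shape[1]
    means = samples.mean(axis=1)
    keep = np.ones(len(samples), dtype=bool)
    iterations = 0
    while True:
        iterations += 1
        center = means[keep].mean()
        sigma = samples[keep].std(axis=1, ddof=1).mean()
        ucl = center + L * sigma / np.sqrt(n)
        lcl = center - L * sigma / np.sqrt(n)
        out = keep & ((means > ucl) | (means < lcl))
        if not out.any():
            return center, lcl, ucl, keep, iterations
        keep &= ~out

# 50 in-control subgroups of size 5, plus 3 contaminated ones shifted by 3 sigma
rng = np.random.default_rng(42)
data = rng.normal(0.0, 1.0, size=(53, 5))
data[50:] += 3.0
center, lcl, ucl, keep, iterations = phase1_revision(data)
print(keep[50:])  # the contaminated subgroups should all be discarded
```

The study's setting is harder than this sketch: when a shift affects only some observations within a subgroup, the subgroup mean moves less and the pooled sigma is inflated, which is exactly why the Phase I performance under such contamination is worth quantifying.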
  • Stochastic Delay Times in Failure Prediction

    Authors: Lennart Kann (Robert Bosch GmbH), Rainer Göb (University of Würzburg)
    Primary area of focus / application: Other: Sampling
    Keywords: Life data analysis, Prediction intervals, Risk analysis, Sampling
    Submitted at 12-Apr-2017 13:36 by Lennart Kann
    Accepted
    12-Sep-2017 10:30 Stochastic Delay Times in Failure Prediction
    In many industrial applications, the number of future failures in a specified time period has to be predicted from censored field data. To account for uncertainty, prediction intervals are often used for this purpose. For a parametric lifetime model, the data used to estimate the model parameters consist of the population size and a fixed censoring time. However, the reporting of field failures is not instantaneous, and reporting delays can vary significantly. These delay times therefore influence the parameter estimation and hence the resulting prediction interval. Using a continuous distribution to model the reporting delay, we study its effect on the resulting prediction interval and propose adjustments to account for delay times.
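The mechanism can be conveyed with a small Monte Carlo sketch (distributions and parameters are made up purely for intuition; this is not the adjustment proposed in the talk): a failure is visible in the field data only if its report arrives before the censoring date, so a naive estimate of the failure fraction is biased low.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
censor_time = 2.0

# Made-up lifetime and delay distributions, purely for illustration
failure = 3.0 * rng.weibull(2.0, n)   # true failure times
delay = rng.exponential(0.3, n)       # reporting delay of each failure

# A failure is only visible in the field data if its *report* arrives in time
reported = failure + delay <= censor_time

true_frac = np.mean(failure <= censor_time)   # failures that actually occurred
naive_frac = np.mean(reported)                # what the censored field data shows

print(f"true {true_frac:.3f} vs naive {naive_frac:.3f}")
```

Since a reported failure requires failure + delay ≤ censoring time, the reported set is a subset of the actual failures by that time, so the naive fraction always underestimates the true one; modelling the delay with a continuous distribution, as in the abstract, is what allows the estimation (and hence the prediction interval) to be corrected for this missingness.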
  • Recent Advances in the Use of Image Data in Quality Engineering: A Perspective, Updated Taxonomy and Some Guidelines for Future Work

    Authors: Fadel Megahed (Miami University)
    Primary area of focus / application: Other: Quality engineering applied to advanced manufacturing
    Secondary area of focus / application: Quality
    Keywords: Big Data, Control charts, Image-based monitoring, High dimensional data, Multivariate image analysis, Profile monitoring, Statistical Process Control
    Submitted at 12-Apr-2017 21:16 by Fadel Megahed
    Accepted
    11-Sep-2017 10:50 Recent Advances in the Use of Image Data in Quality Engineering: A Perspective, Updated Taxonomy and Some Guidelines for Future Work
    With the recent advances in machine vision systems and computing, there has been increasing interest in examining the efficacy of using "image data" in quality engineering applications. From a quality engineering perspective, images are a somewhat unique source of data, since they can capture both geometric (dimensions) and aesthetic (e.g., color and texture) features of a product. Currently, there are two main quality engineering applications and/or research streams for image data: (a) inspection, where an image is evaluated to determine whether the product (or quality characteristic) of interest is classified as "good" or "bad"; and (b) monitoring/surveillance, where control charts are applied to detect changes in the images over time.

    In this talk, we focus on the monitoring and surveillance applications. Specifically, this research has the following goals: (A) Providing a taxonomy of the statistical analyses used in the image-based control charting literature. The taxonomy captures the type of analysis (retrospective versus prospective), the quality engineering goals, the evaluation criteria used, etc. (B) Proposing a new standard for research in image-based quality engineering. The standard aims to ensure the repeatability of the analysis and to encourage and facilitate future research in this interesting domain. (C) Presenting a platform and repository for hosting image-based control charting applications. The purpose of this repository is to facilitate interaction between researchers and to provide industry with insights pertaining to the latest work.