ENBIS-15 in Prague

6 – 10 September 2015; Prague, Czech Republic
Abstract submission: 1 February – 3 July 2015

The following abstracts have been accepted for this event:

  • Newsvendor Model in Presence of Correlated Discrete Demand

    Authors: Christian Weiß (Department of Mathematics and Statistics, Helmut Schmidt University Hamburg), Layth C. Alwan (Sheldon B. Lubar School of Business, University of Wisconsin-Milwaukee)
    Primary area of focus / application: Business
    Secondary area of focus / application: Modelling
    Keywords: Newsvendor model, Discrete demand, INAR(1) model, Cost-optimal orders, Approximations
    Submitted at 19-Mar-2015 08:08 by Christian Weiß
    Accepted
    9-Sep-2015 09:00 Newsvendor Model in Presence of Correlated Discrete Demand
The classic newsvendor model was developed in the context of controlling inventory systems of "perishable" goods, that is, goods that cannot be carried from one period to the next. In most applications of the newsvendor model, demand is for discrete items, which implies that the observed demand process is a time series of count data.

    In this presentation, we propose the implementation of the predictive INAR(1) methodology for establishing the newsvendor order quantity for each forthcoming period. After briefly introducing the general INAR(1) model, we present a real case application based on blood demand data collected from a large regional hospital in southeastern Wisconsin. We consider the traditional newsvendor model and contrast it with the implementation based on the INAR(1) model. We conduct a comparative analysis to investigate the benefits of the newly proposed approach.
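As an illustration of the kind of computation involved (a minimal sketch, not the authors' implementation), the following code builds the one-step-ahead predictive distribution of a Poisson INAR(1) process and derives the cost-optimal newsvendor order from it; all parameter and cost values are hypothetical:

```python
import math

def binom_pmf(k, n, p):
    """Binomial pmf via exact combinatorics."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def inar1_predictive_pmf(x_last, alpha, lam, qmax=60):
    """One-step-ahead predictive pmf of a Poisson INAR(1) process
    X_t = alpha ∘ X_{t-1} + eps_t, eps_t ~ Poisson(lam):
    the convolution of Binomial(x_last, alpha) "survivors"
    and Poisson(lam) new arrivals."""
    pmf = []
    for d in range(qmax + 1):
        p = sum(binom_pmf(s, x_last, alpha) * poisson_pmf(d - s, lam)
                for s in range(min(d, x_last) + 1))
        pmf.append(p)
    total = sum(pmf)                       # renormalize the truncated tail
    return [p / total for p in pmf]

def newsvendor_order(pmf, c_under, c_over):
    """Smallest q with F(q) >= critical fractile c_under/(c_under+c_over)."""
    frac = c_under / (c_under + c_over)
    cdf = 0.0
    for q, p in enumerate(pmf):
        cdf += p
        if cdf >= frac:
            return q
    return len(pmf) - 1

# hypothetical example: last observed demand 7, underage cost 4, overage cost 1
pmf = inar1_predictive_pmf(x_last=7, alpha=0.5, lam=3.0)
q = newsvendor_order(pmf, c_under=4.0, c_over=1.0)
```

Because the predictive distribution conditions on the last observed count, the order quantity adapts period by period, which is the point of the proposed approach.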
  • How to Understand Complex Datasets Using Graphical Approaches

    Authors: Bertram Schäfer (Statcon), Sebastian Hoffmeister (Statcon)
    Primary area of focus / application: Education & Thinking
    Secondary area of focus / application: Modelling
    Keywords: Graphical modeling, Data exploration, Software requirements, Interactive graphs
    Submitted at 27-Mar-2015 13:52 by Bertram Schäfer
    Accepted
    8-Sep-2015 12:10 How to Understand Complex Datasets Using Graphical Approaches
    Graphical methods should be omnipresent, but they are not! Graphs are widely used in data exploration, for example to identify outliers, and they are frequently used to display the results of formal statistical analyses.
    What is learned from simple descriptive uni- or bivariate graphs, however, is often limited to exploration; such graphs are rarely used to build a tentative understanding of the data and thus to take a first step into modeling. Graphical methods are only rarely used as a tool for generating hypotheses. Interaction between different graphs, as well as between graphs and the data table, is a necessity in this respect.
  • A Comparison of the Predictive Power of Response Surface Designs and Neural Networks

    Authors: Matthew Dodson (University of Michigan), Rene Klerx (SKF Group Six Sigma), Mark Tooley (Siena Heights University)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Six Sigma
    Keywords: DOE, Response surface models, Neural networks, Blended approach
    Submitted at 27-Mar-2015 16:35 by Rene Klerx
    Accepted (view paper)
    Neural networks have proven to provide powerful predictive models for messy data. Response surface designs, specifically second-order polynomials, have proven to provide useful predictions in many situations, especially when the experimental data is controlled. We present a study comparing the predictive power of the two approaches, with recommendations on when each approach should be favoured, including situations in which a blended approach of the two methodologies is preferred. A series of engineering systems with interactions ranging from minor to strong, and with data in both structured and messy formats, is modelled with a response surface as well as with a neural network. A comparison of the residuals provides insight into when each approach should be favoured and how the two can be combined.
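As a rough sketch of the response-surface side of such a comparison (the study's actual systems and data are not shown here; all data and coefficients below are simulated for illustration), a full second-order polynomial can be fitted by least squares and judged on its residuals:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical two-factor experiment: a quadratic surface plus noise
x1 = rng.uniform(-1, 1, 60)
x2 = rng.uniform(-1, 1, 60)
y = 2.0 + 1.5 * x1 - 0.8 * x2 + 0.6 * x1 * x2 + 0.9 * x1**2 \
    + rng.normal(0, 0.1, 60)

# full second-order model matrix: 1, x1, x2, x1*x2, x1^2, x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# residual diagnostics: structure left in the residuals is what a
# neural network might pick up that the polynomial cannot
resid = y - X @ beta
rmse = float(np.sqrt(np.mean(resid**2)))
```

On smooth, well-controlled data like this, the second-order surface recovers the coefficients almost exactly; the interesting comparisons arise when the true response departs from a quadratic.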
  • A Simple Unimodal Approximation of a Sum of Independent Non-Identical Lognormal Random Variables for Financial and Other Applications

    Authors: Avi Messica (The College of Management (COMAS))
    Primary area of focus / application: Finance
    Secondary area of focus / application: Modelling
    Keywords: Sum, Lognormal, Random variables, Unimodality, Finance
    Submitted at 29-Mar-2015 10:01 by Avi Messica
    Accepted (view paper)
    8-Sep-2015 15:55 A Simple Unimodal Approximation of a Sum of Independent Non-Identical Lognormal Random Variables for Financial and Other Applications
    The distribution function of a sum of non-identical lognormal random variables (RVs) is required in many fields, in particular in finance for portfolio optimization and the valuation of exotic options (e.g. basket and Asian options). Unfortunately, it has no known closed form, so one has to resort to an approximation, especially when practical implementation is involved. Most approximations of the sum of lognormals are complicated, difficult to implement, and assume that the sum is unimodal, even though this might not be the case. Based on the central limit theorem, this paper presents a new, simple and easy-to-implement approximation method that can be used in finance, as well as in other fields, by both scholars and practitioners. In addition, using the Banach fixed point theorem, I derive a necessary condition for the unimodality of the sum of lognormal RVs. Using this condition it is also possible to approximate a multimodal distribution function of a sum of non-identical lognormal random variables. The accuracy of the method is compared against the results of Monte Carlo simulations.
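The paper's own CLT-based method is not reproduced here; as a baseline illustration of the general idea of matching a single distribution to the sum, the classic Fenton-Wilkinson moment-matching approximation is sketched below and checked against a small Monte Carlo run, with illustrative parameters:

```python
import math
import random

random.seed(42)

# (mu_i, sigma_i) of independent non-identical lognormals (illustrative values)
params = [(0.0, 0.5), (0.3, 0.4), (-0.2, 0.6)]

# exact first two moments of the sum (independence makes variances add)
mean = sum(math.exp(m + s * s / 2) for m, s in params)
var = sum((math.exp(s * s) - 1) * math.exp(2 * m + s * s) for m, s in params)

# Fenton-Wilkinson: a single lognormal(mu_s, sig2) matching mean and variance
sig2 = math.log(1 + var / mean**2)
mu_s = math.log(mean) - sig2 / 2

# Monte Carlo check of the mean of the sum
n = 200_000
mc = sum(sum(random.lognormvariate(m, s) for m, s in params)
         for _ in range(n)) / n
```

By construction the matched lognormal reproduces the exact mean and variance; like most simple approximations, it is unimodal, which is exactly the limitation the abstract's unimodality condition addresses.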
  • House of Security (HOS) for Preventing Human Errors

    Authors: Shuki Dror (ORT Braude College), Emil Bashkansky (ORT Braude College)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Quality
    Keywords: Human Errors, QFD, TRIZ, FMECA
    Submitted at 30-Mar-2015 08:53 by Shuki Dror
    Accepted
    8-Sep-2015 17:00 House of Security (HOS) for Preventing Human Errors
    We apply well-known quality engineering matrix techniques such as QFD, TRIZ, and FMECA to characterize, map and prevent human error (or, at least, to reduce the damage caused by errors). Human errors ("WHAT-s", in the language of QFD) are classified according to ten characteristics, while 20 typical types of protective layers, the "HOW-s" of quality assurance systems (QAS), are proposed to prevent, stop, or at least reduce the damage caused by an error. During the analysis of a specific system, every error is assessed according to its likelihood and severity, and every protective layer receives a score according to its effectiveness in preventing errors. Synergy or antagonism between protective layers may also be taken into account when calculating the effectiveness. The approach facilitates evaluation and comparison of the effectiveness of different quality assurance systems dealing with human errors. The authors emphasize the need to create a "recipe book" based on a historical database, which, after the potential human errors have been characterized according to the ten criteria mentioned above, will enable the application of optimal prevention efforts. The proposed approach is illustrated by an example of product delivery error analysis.
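A minimal sketch of the kind of scoring the approach implies (all error types, protective layers, and numeric values below are hypothetical, not taken from the paper): risk-weight each error FMECA-style by likelihood times severity, then score each protective layer by the risk it removes:

```python
# hypothetical product-delivery errors: (likelihood, severity) per error type
errors = {
    "wrong address": (0.30, 4),
    "missing item":  (0.50, 3),
    "late dispatch": (0.20, 2),
}

# assumed effectiveness (0..1) of each protective layer against each error
layers = {
    "barcode check":  {"wrong address": 0.9, "missing item": 0.7, "late dispatch": 0.1},
    "double packing": {"wrong address": 0.1, "missing item": 0.8, "late dispatch": 0.0},
    "dispatch alarm": {"wrong address": 0.0, "missing item": 0.1, "late dispatch": 0.9},
}

# risk weight of each error = likelihood * severity (FMECA-style criticality)
weight = {e: p * s for e, (p, s) in errors.items()}

# weighted effectiveness score per layer: how much risk it removes
score = {name: sum(weight[e] * eff for e, eff in effs.items())
         for name, effs in layers.items()}
best = max(score, key=score.get)
```

A real analysis would also adjust these scores for synergy or antagonism between layers, as the abstract notes.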
  • Mixture of Experts for Sequential PM10 Forecasting in Normandy (France)

    Authors: Jean-Michel Poggi (University of Paris Sud - Orsay), Benjamin Auder (University of Paris Sud - Orsay), Bruno Portier (Normandie Université, INSA Rouen)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Mining
    Keywords: Air quality, Forecasting, Mixture of experts, PM10, Sequential prediction, Environment
    Submitted at 30-Mar-2015 18:56 by Jean-Michel Poggi
    Accepted (view paper)
    7-Sep-2015 10:00 Mixture of Experts for Sequential PM10 Forecasting in Normandy (France)
    In Normandy (France), Air Normand and Air COM monitor air quality. In a recent research project between academia and the air quality agency, the statistical forecasting of PM10 was considered with the aim of improving warning procedures. The project led to the development of operational procedures for forecasting the daily average of PM10 for the current day and for the next day over various forecast horizons, integrating meteorological information and model output statistics.

    More generally, Air Normand has various operational tools for analysing episodes and interpreting measurements in view of decisions. However, these complementary tools, statistical or deterministic models, local or global, often supply different forecasts, notably because of the different spatial and temporal resolutions considered. In this paper, we evaluate the interest of using sequential aggregation, or mixing of experts, to develop decision-making tools for the forecasters of Air Normand.

    In the context of sequential prediction, experts make predictions at each time instant, and the forecaster must determine, step by step, the future values of an observed time series. To build the prediction, the forecaster has to combine, before each instant, the forecasts of a finite set of experts. To do so, adopting the deterministic and robust view of the literature on the prediction of individual sequences, three basic references can be highlighted: Clemen (1989), Cesa-Bianchi and Lugosi (2006) and Stoltz (2010).

    In the application framework at hand, empirical studies are particularly valuable, and several can be mentioned: climate modelling in Monteleoni et al. (2011), air quality in Mallet (2010) and Mallet et al. (2009), quantile prediction of the number of daily calls in a call center in Biau et al. (2011), and finally the prediction of electricity consumption in Devaine et al. (2013). These studies focus on rules for aggregating a set of experts and examine how to weight and combine these experts.

    Our study contributes in several ways. The first is its scope: the adaptation to the operational context of the pollution engineer forecaster. The main novelty, however, is that the set of experts contains:
    - Experts coming from statistical models constructed using different methods and different sets of predictors;
    - Experts defined by deterministic physicochemical models of pollution, weather and atmosphere; these models are of similar nature but of different spatial and temporal resolutions, with or without statistical adaptation;
    - And finally reference forecasts such as persistence, as usual.

    The aforementioned studies combine "homogeneous" methods: only statistical methods or only deterministic ones. Sequential prediction allows mixing several models built on very different assumptions in a unified approach that does not require any prior knowledge about how each expert generates its predictions. It is therefore particularly suitable for our application.
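As a simple illustration of sequential aggregation, here is one standard rule from this literature, the exponentially weighted average forecaster (as in Cesa-Bianchi and Lugosi, 2006), run on toy data; it is not necessarily the rule used in the study:

```python
import math

def ewa_mix(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster: predict a weighted mean of
    the experts, then down-weight each expert according to its squared loss."""
    n = len(expert_preds[0])           # number of experts
    w = [1.0 / n] * n                  # uniform initial weights
    forecasts = []
    for preds, y in zip(expert_preds, outcomes):
        forecasts.append(sum(wi * p for wi, p in zip(w, preds)))
        # multiplicative update on the just-revealed outcome, then renormalize
        w = [wi * math.exp(-eta * (p - y) ** 2) for wi, p in zip(w, preds)]
        z = sum(w)
        w = [wi / z for wi in w]
    return forecasts

# toy PM10-like series with two experts: one biased high, one nearly unbiased
outcomes = [20, 22, 25, 30, 28, 24]
expert_preds = [[y + 5, y - 1] for y in outcomes]
f = ewa_mix(expert_preds, outcomes)    # weights shift to the better expert
```

The update is agnostic about where each expert's forecast comes from, which is exactly why such rules can mix statistical models, deterministic models, and persistence in one scheme.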