ENBIS-19 in Budapest

2–4 September 2019, Eötvös Loránd University, Budapest
Abstract submission: 16 December 2018 – 20 May 2019

My abstracts


The following abstracts have been accepted for this event:

  • Identify Important Indicators for Corporate Social Responsibility (CSR) using QFD

    Authors: Shuki Dror (ORT Braude College), Natalia Zaitsev (ORT Braude College)
    Primary area of focus / application: Quality
    Secondary area of focus / application: Modelling
    Keywords: Corporate social responsibility, Business performance, CSR outcomes, Quality function deployment, Mean Square Error, Decision-making
    Submitted at 21-Dec-2018 10:54 by Shuki Dror
    Accepted
    Corporate social responsibility (CSR) strategies encourage a company to make a positive impact on the environment and on its stakeholders, including consumers, employees, investors, communities and others. To enrich the practice of CSR initiatives, we developed an approach, based on the quality function deployment (QFD) method, for examining the relationship between various CSR indicators (CSRIs) and outcomes for a specific enterprise. For a specific business case (data were collected from a manufacturing plant in the chemical engineering and energy industry), an adapted House of Quality (HOQ) matrix was created using combined input from senior and line managers. This matrix summarises the desired improvements in the CSR results and connects them to the relevant reportable CSRIs. Based on the HOQ matrix, the indicators and outcomes that maximise the desired results of the CSR policy were chosen using the mean square error criterion. We found that when utilising the QFD-based approach, the business could quantify strategic priorities regarding CSR initiatives. The applied approach offers a scientific/engineering method for identifying the subset of vital CSRIs necessary to achieve the best CSR outcomes for businesses of all types at all stages of their development.
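
    A minimal sketch of the selection step, assuming a conventional 1/3/9 relationship scale and a two-group (vital/trivial) reading of the mean square error criterion; the matrix, weights and split rule below are illustrative, not the plant data or the authors' exact procedure:

        import numpy as np

        # Hypothetical HOQ data: rows = desired CSR outcomes, columns = CSRIs.
        weights = np.array([5, 3, 4])            # desired improvement per outcome
        R = np.array([[9, 3, 1, 0],              # outcome-indicator relationships
                      [1, 9, 3, 3],
                      [3, 1, 9, 1]])

        importance = weights @ R                 # absolute importance per CSRI
        order = np.argsort(importance)[::-1]
        scores = importance[order].astype(float)

        def mse_split(k):
            # MSE of replacing the first k ('vital') sorted scores and the
            # remaining ('trivial') scores by their respective group means.
            v, t = scores[:k], scores[k:]
            return (((v - v.mean())**2).sum() + ((t - t.mean())**2).sum()) / len(scores)

        best_k = min(range(1, len(scores)), key=mse_split)
        print("vital CSRIs (column indices):", order[:best_k])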
  • Semi-Parametric Profile Monitoring Control Chart for Phase I via Mixed Residuals

    Authors: Abdel-Salam G. Abdel-Salam (Qatar University)
    Primary area of focus / application: Process
    Secondary area of focus / application: Quality
    Keywords: Model robust profile monitoring, Semiparametric residual model, Model misspecification, T2 Control Chart, Mixed logistic
    Submitted at 21-Dec-2018 11:24 by Abdel-Salam Gomaa
    Accepted
    In typical analyses of data better modelled by a nonlinear mixed model, an unusual observation within a cluster, or an entire unusual cluster (profile), can greatly distort parameter estimates and their standard errors, making inferences about the parameters misleading. This research introduces control charts for modelling autocorrelated profile data in Phase I: a nonparametric (NP) procedure and a novel semiparametric procedure that combines parametric and NP profile fits through their residuals. The semiparametric method is robust to model misspecification, yet it also performs well when compared with a correctly specified parametric mixed model. The proposed control charts showed excellent capability for detecting changes in Phase I data. An example using the mixed parametric logistic model and medical data compares the robust approaches to the non-robust one.
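
    A minimal sketch of the general idea, not the authors' estimator: fit each Phase I profile both parametrically and nonparametrically, combine the two residual vectors with a mixing weight, and monitor the combined residuals with a retrospective Hotelling T² chart. The logistic mean curve, the polynomial stand-in for the parametric model, the kernel bandwidth and the fixed weight lam = 0.5 are all illustrative assumptions:

        import numpy as np
        from scipy.stats import beta

        rng = np.random.default_rng(1)
        m, n, lam = 30, 8, 0.5                     # profiles, design points, weight
        x = np.linspace(0, 1, n)
        Y = 1 / (1 + np.exp(-6 * (x - 0.5))) + rng.normal(0, 0.05, (m, n))

        def mixed_residuals(y):
            e_par = y - np.polyval(np.polyfit(x, y, 3), x)   # parametric fit
            K = np.exp(-((x[:, None] - x[None, :]) / 0.15)**2)
            e_np = y - (K @ y) / K.sum(axis=1)               # kernel smoother fit
            return lam * e_par + (1 - lam) * e_np

        E = np.array([mixed_residuals(y) for y in Y])
        Ebar, Sinv = E.mean(axis=0), np.linalg.pinv(np.cov(E, rowvar=False))
        T2 = np.array([(e - Ebar) @ Sinv @ (e - Ebar) for e in E])

        # Retrospective (Phase I) limit via the Beta distribution of T².
        UCL = (m - 1)**2 / m * beta.ppf(0.9973, n / 2, (m - n - 1) / 2)
        print("flagged profiles:", np.where(T2 > UCL)[0])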
  • Theory and Model of Technological Hype Cycles

    Authors: Avi Messica (The College of Management (COMAS)), Asnat Greenstein-Messica (Ben-Gurion University of the Negev)
    Primary area of focus / application: Business
    Secondary area of focus / application: Modelling
    Keywords: Hype, Cycle, Technology, Theory, Model, Business, Google trends
    Submitted at 23-Dec-2018 16:53 by Avi Messica
    Accepted
    A new emerging technology, often viewed as disruptive, occasionally generates a wave of high media exposure in tandem with high and rising expectations about its potential applications. These expectations play an important part in the further development of the technology by attracting funding sources (e.g. venture capital, governmental funds and the like) as well as entrepreneurs and public interest. Such a wave of rising expectations is termed ‘hype’, and it appears in the business and science domains as well. The complex interplay of expectations on different levels (social trends, economic contribution, ethical consequences, etc.) strongly influences the disappointment that follows an unmaterialised emerging technology. Former studies focused mainly on case studies and relied on descriptive tools to explain specific dynamics in a specific context. Recently, an initial effort to analyse three cases of high-technology innovation in Japan was made via curve fitting of Gompertz, logistic and polynomial (degree 9) functions to a corpus of 4,772 newspaper articles.
    In this paper we present a mathematical model that is able to explain various patterns of hype cycles. Our contribution is twofold. Firstly, we use online-search data (via Google Trends) as a proxy for expectations. Secondly, we theorise expectations as an "infectious disease", where contracting the disease corresponds to viral online diffusion and the decline of expectations to "disease recovery".
    The results of this study make it possible to quantify technological hype cycles via new, well-defined metrics that will be presented and discussed.
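
    A toy sketch of the infectious-disease analogy, using the standard SIR equations; the rate constants below are illustrative, and in the paper's setting they would be fitted to a Google Trends series, which serves as the proxy for the "infected" (actively interested) population:

        import numpy as np
        from scipy.integrate import odeint

        # S: not yet exposed to the technology; I: actively interested
        # (proxied by search volume); R: lost interest ('recovered').
        def sir(y, t, b, g):
            S, I, R = y
            return [-b * S * I, b * S * I - g * I, g * I]

        t = np.linspace(0, 200, 400)
        S, I, R = odeint(sir, [0.999, 0.001, 0.0], t, args=(0.25, 0.05)).T

        # I(t) rises and falls like a single hype wave; its peak height and
        # timing are two simple metrics for comparing hype cycles.
        print("peak interest:", round(I.max(), 3), "at t =", t[np.argmax(I)])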
  • Sensitivity analysis, an introduction

    Authors: Andrea Saltelli (University of Bergen, Centre for the Study of the Sciences and the Humanities (SVT))
    Primary area of focus / application: Reliability
    Secondary area of focus / application: Quality
    Keywords: Sensitivity analysis, Uncertainty analysis, Uncertainty quantification, Model validation, Model verification, Sensitivity auditing
    Submitted at 8-Jan-2019 00:16 by Andrea Saltelli
    Accepted
    "Are the results from a particular model more sensitive to changes in the model and the methods used to estimate its parameters, or to changes in the data?"
    This remark by Giandomenico Majone goes to the heart of the problem setting of sensitivity analysis, a tool which modellers from all fields of application use to improve the quality of their inference. Sensitivity analysis is crucial in both the model construction and model interpretation phases, and is considered an important ingredient of model verification and validation.
    Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be apportioned to different sources of uncertainty in its inputs. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem (source: https://en.wikipedia.org/wiki/Sensitivity_analysis).
    The talk will review some principles of sensitivity analysis, good and bad practices, and some practitioners’ insight on when to use what; a small worked example follows the references below.

    Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., Tarantola, S., 2008, Global Sensitivity Analysis: The Primer, John Wiley & Sons.

    Saltelli, A., Annoni, P., 2010, How to avoid a perfunctory sensitivity analysis, Environmental Modelling and Software, 25, 1508–1517.

    Saltelli, A., Aleksankina, K., Becker, W., Fennell, P., Ferretti, F., Holst, N., Li, S., Wu, Q., 2018, Why So Many Published Sensitivity Analyses Are False: A Systematic Review of Sensitivity Analysis Practices, available on arXiv, revised for Environmental Modelling and Software.
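
    As a worked illustration of the variance-apportionment definition quoted above (not material from the talk itself), the following sketch estimates first-order and total-effect Sobol indices for the standard Ishigami test function, using the pick-and-freeze Monte Carlo estimators discussed in Saltelli and Annoni (2010):

        import numpy as np

        rng = np.random.default_rng(0)

        def ishigami(X, a=7.0, b=0.1):
            x1, x2, x3 = X.T
            return np.sin(x1) + a * np.sin(x2)**2 + b * x3**4 * np.sin(x1)

        N = 100_000
        A = rng.uniform(-np.pi, np.pi, (N, 3))     # two independent input samples
        B = rng.uniform(-np.pi, np.pi, (N, 3))
        yA, yB = ishigami(A), ishigami(B)
        varY = yA.var()

        for i in range(3):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                    # replace column i of A by B's
            yABi = ishigami(ABi)
            Si = np.mean(yB * (yABi - yA)) / varY          # first-order index
            STi = 0.5 * np.mean((yA - yABi)**2) / varY     # total-effect index
            print(f"x{i+1}: S = {Si:.2f}, ST = {STi:.2f}")

    For the Ishigami function the analytical first-order indices are about 0.31, 0.44 and 0, so the Monte Carlo estimates can be checked directly.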
  • A Framework with Indices to Evaluate Process Health

    Authors: Kevin White (Eastman), John Szarka (W.L. Gore & Associates, Inc.), Willis Jensen (W.L. Gore & Associates, Inc.)
    Primary area of focus / application: Quality
    Secondary area of focus / application: Other: Invited U.S. Session
    Keywords: Quality, Improvement, Capability, Stability, Variation, Measurement
    Submitted at 10-Jan-2019 16:11 by Kevin White
    Accepted
    Assessment of process health is an important aspect of any quality system. While control charts are often used in the day-to-day operation of processes, it is often helpful to take a retrospective look at past performance in the form of an overall process health assessment to identify specific opportunities for future improvement efforts. This presentation will provide an innovative framework to evaluate various aspects of past process performance, including actual performance compared to specifications, process stability, process centering, and potential process capability. Within this framework, it will be shown how a set of process performance indices, along with powerful tabular and graphical methods, can easily identify the most impactful improvement opportunities, whether evaluating a single process or numerous processes simultaneously. The cases of two-sided and one-sided specifications (with and without a defined target) will be covered. The framework will also explore the connection between the process and the measurement system, to understand when measurement variability is a limiting factor on process performance. Ultimately, the framework will enable users not only to identify the largest opportunities but also to determine the type of improvement effort needed.
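
    A minimal sketch of the kinds of indices such a framework draws on, using textbook definitions of potential capability (within-subgroup variation), overall performance, and a simple stability ratio; the data, subgroup structure and specification limits below are made up, and the index set is not necessarily the presenters' framework:

        import numpy as np

        rng = np.random.default_rng(7)
        data = rng.normal(10.2, 0.4, (25, 5))      # 25 subgroups of size 5
        LSL, USL = 9.0, 11.0                       # two-sided specification

        xbar = data.mean()
        sigma_within = data.std(axis=1, ddof=1).mean() / 0.9400   # c4 for n = 5
        sigma_overall = data.std(ddof=1)

        Cp  = (USL - LSL) / (6 * sigma_within)                  # potential capability
        Cpk = min(USL - xbar, xbar - LSL) / (3 * sigma_within)  # reflects centering
        Pp  = (USL - LSL) / (6 * sigma_overall)                 # actual performance
        Ppk = min(USL - xbar, xbar - LSL) / (3 * sigma_overall)
        stability = sigma_overall / sigma_within                # near 1 if stable

        print(f"Cp={Cp:.2f} Cpk={Cpk:.2f} Pp={Pp:.2f} Ppk={Ppk:.2f} SR={stability:.2f}")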
  • From Data Points to Data Dan: Combining Log Analysis, Survey Analysis and Interviews to Segment Google Analytics Customers

    Authors: Sundar Dorai-Raj (Google)
    Primary area of focus / application: Business
    Secondary area of focus / application: Mining
    Keywords: Clustering, Text mining, Survey analysis, User experience, Logs analysis
    Submitted at 27-Jan-2019 17:29 by Sundar Dorai-Raj
    Accepted
    Google Analytics has a wide user base, from hobbyist bloggers to employees of Fortune 100 corporations. In order to better understand our users, and to get more precision around the proportion of each user type that makes up our customer base, we embarked on a customer segmentation project. This long-term research project used both qualitative and quantitative methods to scope and define customer “use cases,” the particular tasks that direct the front-end interactions of a user’s session. Our quantitative approach consisted of collecting all front-end user interactions and performing Latent Dirichlet Allocation (LDA) to arrive at a grouping of 25 use cases, as well as conducting a survey to investigate how users’ backgrounds impact their usage. In parallel, our qualitative approach included over 50 subject interviews to understand which use cases were important from the user’s perspective. We used this research, along with input from product subject matter experts, to assign labels to each of our use case parameter groupings. Using the labeled LDA topics, we measured each user’s engagement across them and performed k-means clustering on individual users to arrive at 12 user segments. The qualitative interpretation of these clusters through 40 interviews led to a set of personas, which will provide further inspiration for product development.
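
    A compact sketch of the two quantitative steps (LDA on interaction counts, then k-means on per-user topic weights), run on synthetic data; the event vocabulary and all settings other than the 25 topics and 12 clusters named in the abstract are illustrative:

        import numpy as np
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = rng.poisson(0.3, (1000, 200))    # users x interaction-event counts

        lda = LatentDirichletAllocation(n_components=25, random_state=0)
        use_case_mix = lda.fit_transform(X)  # per-user use-case engagement

        km = KMeans(n_clusters=12, n_init=10, random_state=0)
        segments = km.fit_predict(use_case_mix)
        print(np.bincount(segments))         # users per segment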