ENBIS-20 Online Conference

28 September – 1 October 2020; Online

George Box Medal Session

SEASONAL WARRANTY PREDICTION BASED ON RECURRENT EVENT DATA

William Q. Meeker, PhD, Professor of Statistics and Distinguished Professor of Liberal Arts and Sciences at Iowa State University, USA; past Editor of Technometrics; winner of the ASQ Shewhart Medal and the ASA's Deming Lecture Award

 

Wednesday, 30 September 2020, 15:00-16:00 CEST

Berlin, Paris: 15:00 / London: 14:00 / New York: 9:00 am / Los Angeles: 6:00 am / Ciudad de México, Lima: 8:00 am / São Paulo: 10:00 am / Beijing: 21:00

Abstract:

Warranty return data from repairable systems, such as home appliances, lawn mowers, computers, and automobiles, result in recurrent event data. The non-homogeneous Poisson process (NHPP) model is widely used to describe such data. Seasonality in the repair frequencies and other sources of variability, however, complicate the modeling of recurrent event data. Little work has been done to address seasonality, and this paper provides a general approach for applying NHPP models with dynamic covariates to predict seasonal warranty returns. The methods presented here, however, can also be applied in other settings that produce seasonal recurrent event data. A hierarchical clustering method is used to stratify the population into groups that are more homogeneous than the overall population. The stratification facilitates modeling the recurrent event data with both time-varying and time-constant covariates. We demonstrate and validate the models using warranty claims data for two different types of products. The results show that our approach substantially improves the prediction of monthly events compared with models that do not take seasonality and covariates into account. This talk is based on joint work with Qianqian Shan (Amazon) and Yili Hong (Virginia Tech).
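As a toy illustration of the kind of model the abstract describes (not the authors' actual specification), an NHPP with a log-linear intensity driven by a yearly harmonic can be sketched in a few lines. The parameter values, the harmonic covariate, and the month-midpoint approximation of the integrated intensity are all assumptions made here for illustration.

```python
import math

import numpy as np


def seasonal_intensity(t, b0=0.5, b1=0.8, b2=0.3):
    """Log-linear NHPP intensity (events per month) at time t in months.

    b1 and b2 weight a yearly harmonic; all parameter values here are
    illustrative, not estimates from the talk's warranty data.
    """
    omega = 2 * math.pi * t / 12.0
    return math.exp(b0 + b1 * math.cos(omega) + b2 * math.sin(omega))


# Expected monthly counts over two years, approximating the integral of the
# intensity over each month by its value at the month midpoint.
expected = np.array([seasonal_intensity(m + 0.5) for m in range(24)])

# One simulated realisation of the monthly warranty-claim counts.
rng = np.random.default_rng(42)
simulated = rng.poisson(expected)
```

With these (invented) coefficients, the expected count repeats with a 12-month period and peaks in the winter months, which is the qualitative pattern a seasonal covariate is meant to capture.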

Biography:

Dr. William Q. Meeker is Professor of Statistics and Distinguished Professor of Liberal Arts and Sciences at Iowa State University. He has more than 40 years of experience in the application of statistical methods to engineering problems, including reliability and nondestructive evaluation. He has done research and consulted extensively on problems in reliability data analysis, warranty analysis, experimental design, accelerated testing, nondestructive evaluation, and statistical computing. His practical experience includes numerous long-term visits to AT&T Bell Laboratories, General Electric Global Research, and Los Alamos National Laboratory. He is a Fellow of the American Statistical Association (ASA), the American Society for Quality (ASQ), and the American Association for the Advancement of Science, and a past Editor of Technometrics. He is co-author of the books Statistical Methods for Reliability Data (with Luis Escobar, 1998) and the second edition of Statistical Intervals (with Luis Escobar and Gerald Hahn, 2017), as well as 14 book chapters and many publications in the engineering and statistical literature. He has won numerous awards for his research and contributions to the statistical and engineering professions, including the ASQ Shewhart Medal and the ASA's Deming Lecture Award.


Best Manager Award Session

NEW HABITS OF STATISTICAL THINKING IN INDUSTRY FOR A NEW ERA OF DATA COLLECTION

Lourdes Pozueta Fernández, PhD, Professor at the Industrial Engineering School at UPC, Barcelona, Project Leader at the Technological Centre in Spain, TECNALIA, and CEO of AVANCEX.

 

Tuesday, 29 September 2020, 15:00-15:30 CEST

Berlin, Paris: 15:00 / London: 14:00 / New York: 9:00 am / Los Angeles: 6:00 am / Ciudad de México, Lima: 8:00 am / São Paulo: 10:00 am / Beijing: 21:00

Abstract:

The ease of data collection in industry is a great business opportunity: data can be used to reduce the costs of inefficiencies. Having the opportunity to collect data, however, does not mean that value is obtained from its treatment. Many elements of organizational culture weaken people's ability to exploit the value of data. In particular, the current habit of looking at data the way people look at Business Intelligence topics is negatively influencing the problem-solving environment: aggregated data hides the origin of the variability, and the opportunity lies in the detail. Expert collaboration in each area is necessary to integrate knowledge of processes, knowledge of data capture and exploitation, and complex problem-solving skills, so that businesses can profit from the information found in the detail of each process's data. I will present examples of how "the standard way of thinking based on means" and "the standard way of looking at aggregated visualizations" prevent us from finding the value in the data.
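A toy numeric illustration (with data invented here, not taken from the talk) of the abstract's central point that an aggregate mean can hide the origin of variability:

```python
import statistics

# Invented defect counts per shift for two machines: machine B is clearly
# the source of the problem, but it disappears in the plant-level average.
defects = {
    "machine_A": [1, 2, 1, 2, 1],
    "machine_B": [9, 8, 10, 9, 9],
}

# The aggregated view: one number for the whole plant.
overall_mean = statistics.mean(v for vals in defects.values() for v in vals)

# The detailed view: one number per machine.
per_machine_mean = {name: statistics.mean(vals) for name, vals in defects.items()}

print(overall_mean)      # 5.2 -- the aggregate looks unremarkable
print(per_machine_mean)  # machine_A: 1.4, machine_B: 9 -- the detail points at B
```

The plant-level mean of 5.2 gives no hint that one machine runs at a defect rate more than six times the other's; only the disaggregated view exposes where to act.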

Biography:

After a Master of Science in Statistics (University of Wisconsin–Madison, USA, 1991), a PhD in Industrial Statistics (Polytechnic University of Catalunya, ETSEIB Barcelona, 2001), and a Master Black Belt (Polytechnic University of Catalunya, Barcelona, Spain, 2004), Lourdes has dedicated her professional life to the practice and dissemination of Statistics and Statistical Engineering in a broad environment that includes university education, technology centres, industry, and society. She also volunteers in support of STEAM education. She worked for 11 years as a professor at the Industrial Engineering School at UPC, Barcelona, and as a Project Leader at TECNALIA, a technological centre in Spain. She founded her own enterprise, AVANCEX, an SME located in the Basque Country dedicated to consulting and training in Statistical Engineering, and has been its CEO for the last 13 years.


Young Statistician Award Session

KERNEL-BASED APPROACHES COMBINED WITH PSEUDO-SAMPLE PROJECTION FOR INDUSTRIAL APPLICATIONS: BATCH PROCESS MONITORING AND ANALYSIS OF MIXTURE DESIGNS OF EXPERIMENTS

Raffaele Vitale, PhD, Postdoctoral Associate at KU Leuven, Belgium

 

Tuesday, 29 September 2020, 15:30-16:00 CEST

Berlin, Paris: 15:30 / London: 14:30 / New York: 9:30 am / Los Angeles: 6:30 am / Ciudad de México, Lima: 8:30 am / São Paulo: 10:30 am / Beijing: 21:30

Abstract:

Although Principal Component Analysis (PCA) and Partial Least Squares regression (PLS) are currently recognised as among the most powerful approaches for the analysis and interpretation of multivariate data, especially in the field of industrial processes, strong non-linear relationships among objects and/or variables can be difficult to model with these methods. In such cases, a good alternative is offered by the so-called kernel-based techniques, which have already been used widely in, e.g., chemistry and biology. Although kernel-based approaches cope easily with strong non-linearities in data, their main disadvantage is that information about the importance of the original variables is lost in the final models. Recently, the principles of non-linear biplots and of so-called pseudo-sample projection, originally described by Gower and Harding in 1988, have been extended to overcome this limitation. Here, they will be adapted and exploited to enable kernel model interpretation. More specifically, this work focuses on evaluating the power of kernel-based methodologies coupled with pseudo-sample projection in two scenarios of paramount importance for manufacturing industries: batch process monitoring and the analysis of mixture designs of experiments. The case studies presented will highlight how this combination can be particularly useful in contexts where huge amounts of complex information are routinely collected (as in modern manufacturing) and can easily be exploited for a wide range of applications. Particular attention will be paid to some new, intuitive graphical tools, based on the concept of pseudo-sample projection, implemented to support users in the complicated task of kernel model assessment, thus facilitating and accelerating decision making and troubleshooting. This provides a striking advantage over classical machine-learning techniques, which still suffer from being fully black-box methodologies. This talk is based on joint work with Daniel Palací-López, Onno de Noord and Alberto Ferrer.
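A minimal sketch of the pseudo-sample idea the abstract refers to, using scikit-learn's KernelPCA as a stand-in kernel model: the data, the kernel choice, and the gamma value are all illustrative assumptions made here, not the talk's implementation. Pseudo-samples vary one original variable over its observed range while holding the others at their mean; projecting them into the kernel score space traces a trajectory that reveals that variable's (possibly non-linear) influence.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Toy data with a deliberate non-linearity: the third variable is
# (roughly) the square of the first.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 2] = X[:, 0] ** 2 + 0.1 * rng.normal(size=100)

# Fit a kernel PCA model; the RBF kernel and gamma are illustrative choices.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5).fit(X)


def pseudo_sample_trajectory(model, X, var, n_points=11):
    """Project pseudo-samples that vary one variable, others held at the mean.

    Returns an (n_points, n_components) trajectory in the kernel score
    space; its curvature reflects the variable's non-linear influence.
    """
    grid = np.linspace(X[:, var].min(), X[:, var].max(), n_points)
    pseudo = np.tile(X.mean(axis=0), (n_points, 1))
    pseudo[:, var] = grid
    return model.transform(pseudo)


traj = pseudo_sample_trajectory(kpca, X, var=0)  # trajectory for variable 0
```

Plotting each variable's trajectory on the score plot recovers a biplot-like reading of a kernel model, which is the interpretability gap the talk addresses.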

Biography:

Dr. Raffaele Vitale graduated in Analytical Chemistry in 2011 (Università di Roma “La Sapienza”, Italy) and obtained his Ph.D. in Statistics and Optimization in 2017 (Universitat Politècnica de València, Spain) with a thesis entitled Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation. He is currently working as a Postdoctoral Associate at KU Leuven in Belgium within the framework of the ADGut project, which investigates the developmental mechanisms of Alzheimer’s disease. Raffaele is the author of 23 peer-reviewed publications and has been awarded the Best Italian Master’s Thesis in Analytical Chemistry prize in 2012, the International Association for Spectral Imaging student prize in 2016, the V Siemens Process Analytics Prize for Young Scientists in 2017, and the III Jean-Pierre Huvenne Award for the Best Ph.D. Thesis in Chemometrics in 2019.