ENBIS: European Network for Business and Industrial Statistics
ENBIS14 in Linz
21 – 25 September 2014; Johannes Kepler University, Linz, Austria
Abstract submission: 23 January – 22 June 2014
The following abstracts have been accepted for this event:

Statistical Model Checking for Non-Life Insurance Claims Reserving Models
Authors: Jürg Schelldorfer (AXA Winterthur)
Primary area of focus / application: Finance
Secondary area of focus / application: Modelling
Keywords: Stochastic claims reserving, Chain ladder, Solvency, Run-off triangles
Submitted at 25-Apr-2014 20:22 by Jürg Schelldorfer
Accepted

Smart Power Semiconductor Reliability Evaluation Using a Gaussian Process Based Norris-Landzberg Model
Authors: Kathrin Plankensteiner (KAI Kompetenzzentrum Automobil- und Industrieelektronik GmbH), Olivia Bluder (KAI Kompetenzzentrum Automobil- und Industrieelektronik GmbH), Jürgen Pilz (Alpen-Adria-Universität Klagenfurt)
Primary area of focus / application: Reliability
Secondary area of focus / application: Modelling
Keywords: Semiconductor reliability, Survival data, Bayesian inference, Gaussian process
Previous investigations [1, 4] clearly showed that frequently applied simple acceleration models like Arrhenius or Coffin-Manson cannot capture the behavior of the observed data. For interpolation, more complex methods like a Bayesian Mixtures-of-Experts Norris-Landzberg model or Bayesian networks give accurate lifetime predictions, but for the main purpose, the extrapolation to other test conditions or designs, neither of them is sufficiently precise. It is therefore hypothesized that, for the observed data, ordinary linear regression models in general cannot be applied for a reliable lifetime prediction.
To solve this problem, we propose the application of a Gaussian-process-based [5] Norris-Landzberg relationship, which offers a high degree of flexibility by exploiting sums or products of appropriate covariance functions, e.g. linear or exponential. Bayesian parameter learning is performed in MATLAB using a combination of an elliptical slice sampler [3] for the latent variables and a surrogate slice sampler [2] for parameter estimation. Since the data are censored and log-normally distributed, the statistical toolbox GPstuff [6] has been extended for this purpose. To provide a direct comparison between the investigated models, the two mixture components are first modeled independently of each other and then mixed by estimated mixture weights, which can be modeled by a cumulative Beta distribution function.
The results show that the sum of a linear and a constant covariance function gives the best fit for both components. Moreover, based on the BIC evaluation, the Gaussian process model fits the observed data better than the currently applied Bayesian Mixtures-of-Experts model; similarly to that model, however, the investigated method still leaves room for improvement in the case of extrapolation.
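As an illustrative sketch only (the authors' implementation uses GPstuff in MATLAB with censored log-normal data and slice sampling; the toy data and hyperparameter values below are assumptions), a Gaussian process posterior mean under the sum of a linear and a constant covariance function can be written in a few lines:

```python
import numpy as np

def linear_cov(x1, x2, sigma_l=1.0):
    # Linear (dot-product) covariance: k(x, x') = sigma_l^2 * x * x'
    return sigma_l**2 * np.outer(x1, x2)

def constant_cov(x1, x2, sigma_c=1.0):
    # Constant covariance: k(x, x') = sigma_c^2 for every pair of inputs
    return sigma_c**2 * np.ones((len(x1), len(x2)))

def composite_cov(x1, x2):
    # A sum of valid covariance functions is again a valid covariance function
    return linear_cov(x1, x2) + constant_cov(x1, x2)

def gp_posterior_mean(x_train, y_train, x_test, noise=0.1):
    # Standard GP regression posterior mean with Gaussian observation noise
    K = composite_cov(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_star = composite_cov(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Toy data: a noisy linear stress/log-lifetime relationship
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 0.5 + 0.05 * rng.standard_normal(20)
y_hat = gp_posterior_mean(x, y, x)
```

With this kernel the GP prior spans exactly the affine functions, so the posterior mean behaves like a lightly regularized line fit, which matches the reported finding that a linear-plus-constant covariance works well.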
Bibliography:
[1] O. Bluder et al. (2012): A Bayesian Mixture Coffin-Manson Approach to Predict Semiconductor Lifetime. In: Proceedings of Stochastic Modeling Techniques and Data Analysis, pp. 45-52.
[2] I. Murray and R.P. Adams (2010): Slice Sampling Covariance Hyperparameters of Latent Gaussian Models. In: Proceedings of Advances in Neural Information Processing Systems 23, pp. 1732-1741.
[3] I. Murray et al. (2010): Elliptical Slice Sampling. In: Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pp. 541-548.
[4] K. Plankensteiner et al. (2013): Application of Bayesian Networks to Predict Smart Power Semiconductor Lifetime. In: Proceedings of the 9th Conference on Ph.D. Research in Microelectronics and Electronics, pp. 281-284.
[5] J.Q. Shi and T. Choi (2011): Gaussian Process Regression Analysis for Functional Data. Chapman and Hall/CRC, Boca Raton.
[6] J. Vanhatalo et al. (2011): Bayesian Modeling with Gaussian Processes using MATLAB Toolbox GPstuff. Submitted, http://becs.aalto.fi/en/research/bayes/gpstuff/GPstuffDoc31.pdf. 
Area Scaling of Early Life Failure Probabilities with Multiple Reference Products in Semiconductor Manufacturing
Authors: Daniel Kurz (Department of Statistics, Alpen-Adria University of Klagenfurt), Horst Lewitschnig (Infineon Technologies Austria AG, Villach), Jürgen Pilz (Department of Statistics, Alpen-Adria University of Klagenfurt)
Primary area of focus / application: Reliability
Keywords: Area scaling, Binomial distribution, Burn-in, Failure probability, System reliability
Submitted at 28-Apr-2014 08:53 by Daniel Kurz
Accepted
With the aim of reducing the failure rate of semiconductor devices before delivery, the devices' early life is simulated by putting the final chips under accelerated voltage and temperature stress for a certain period of time. We refer to this as burn-in (BI). However, full BI testing (i.e. putting the whole population of a product under BI stress) requires considerable effort in terms of cost, time and engineering resources. The aim is therefore to release products from 100% BI testing by proving a target failure probability of the produced devices in a BI study, in which a sample of the stressed items is investigated for early failures.
However, each product of a certain technology has to be verified to meet the target failure probability level. Since products from the same technology typically differ only in their chip sizes, the classical approach is to assess the technology's failure probability level on a selected reference product and scale the obtained failure probability to follower products with different chip areas.
In practice, however, there are often multiple reference products with different chip sizes for which BI studies are performed. We propose an estimation model which makes use of the BI studies of all reference products to determine the failure probability levels of the follower products. This ensures an efficient handling of BI studies also in the case of multiple reference products.
We discuss the model from both a combinatorial and a Bayesian point of view. From the combinatorial point of view, the idea is to consider BI studies of multiple reference products as samples from a population of equally sized chip areas with the same failure probability, which can then be estimated based on the information of all BI studies. In contrast, the Bayesian option aims at generating a prior distribution from BI studies of comparable products.
The benefit of the proposed model is that it efficiently uses all of the available BI information to estimate the failure probability levels of the follower products. In this way, full BI testing can be skipped for follower products with sufficiently small chip sizes. Moreover, the proposed model enables us to efficiently determine the required number of additional inspections in the BI studies of the reference products in case of larger follower products.
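The combinatorial idea can be sketched as follows (a hedged illustration only: the study counts, areas and the small-probability pooling estimate are assumptions for the sketch, not the authors' exact estimation model). Each reference chip is viewed as a bundle of equally sized area units with a common per-unit failure probability, so BI studies of differently sized reference products can be pooled on an equal-area basis:

```python
def pooled_unit_failure_prob(studies, unit_area=1.0):
    """Pool BI studies of reference products with different chip areas.

    Each study is (n_chips, n_failures, chip_area). Viewing each chip as
    chip_area / unit_area equally sized area units with a common per-unit
    failure probability p0, a simple small-probability estimate of p0
    divides the total failures by the total number of area units.
    """
    total_units = sum(n * area / unit_area for n, _, area in studies)
    total_failures = sum(k for _, k, _ in studies)
    return total_failures / total_units

def scale_to_follower(p_unit, follower_area, unit_area=1.0):
    # Classical area scaling: a follower chip fails as soon as any of its
    # follower_area / unit_area area units fails.
    return 1.0 - (1.0 - p_unit) ** (follower_area / unit_area)

# Two hypothetical BI studies: (chips tested, early failures, chip area)
studies = [(100_000, 2, 25.0), (50_000, 1, 16.0)]
p_unit = pooled_unit_failure_prob(studies)
p_follower = scale_to_follower(p_unit, follower_area=10.0)
```

For small probabilities the scaled value is approximately proportional to the follower's area, which is why sufficiently small follower chips can inherit the target level without their own full BI study.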
Acknowledgment:
The work has been performed in the project EPT300, co-funded by grants from Austria, Germany, Italy, The Netherlands and the ENIAC Joint Undertaking. This project is co-funded within the programme "Forschung, Innovation und Technologie für Informationstechnologie" by the Austrian Ministry for Transport, Innovation and Technology.
A Statistical Point of View on the ISO 10360 Standard for Coordinate Measuring System Verification
Authors: Stefano Petrò (Politecnico di Milano), Giovanni Moroni (Politecnico di Milano)
Primary area of focus / application: Metrology & measurement systems analysis
Keywords: Coordinate Measuring Systems, Metrological performance, Performance test, Statistical analysis, Operating characteristic curve
If the uncertainty changes as the measurement task changes, however, the uncertainty is no longer adequate to state the performance of the CMS. It is no longer a parameter on which a ranking of CMSs can be based, nor is it possible to evaluate it once and for all by means of a calibration procedure. It is therefore often hard for the CMS buyer to identify the CMS fitting his requirements. Furthermore, it is difficult to check, by means of an uncertainty evaluation alone, whether the CMS is performing as expected or not.
Therefore, tests have been introduced for the “acceptance and reverification” of CMS, completely defined in national and international standards. Probably the most widespread standard of this kind is ISO 10360, which at present consists of nine published parts with a few more under development. In this standard, a series of tests has been developed which aims at verifying whether a CMS performs as stated by either the manufacturer (acceptance test) or the user (reverification test). These procedures act as a “go/no-go” gauge: the test is either passed or not passed; no intermediate possibility is considered. This limits the usefulness of the test for understanding what is going on with the machine, and where the machine itself can be improved.
In this paper we propose a statistical point of view on the tests of the ISO 10360 standard. The aim is to evaluate the probability that the tests state the CMS is behaving according to the stated performance, which is usually described by the “operating characteristic” (OC) curve of the test. This helps in understanding the test error probabilities, i.e. the probability of stating that the CMS is misbehaving when it is behaving correctly, or vice versa. This is useful both for the CMS manufacturer, helping him to state the performance of his machine correctly (e.g. when declaring it in the system brochure), and for the user when planning the testing of his machine. In particular, this preliminary work focuses on the performance of the classical “probing error” test, which is based on sampling 25 points on a reference sphere.
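One point of such an OC curve can be estimated by simulation. The sketch below is a hedged illustration, not the paper's analysis: it assumes i.i.d. normal probing deviations and an arbitrary maximum permissible error (MPE), and treats the probing error as the range of the 25 deviations:

```python
import numpy as np

def pass_probability(sigma, mpe, n_points=25, n_sim=20_000, seed=0):
    """Monte Carlo estimate of one point of the OC curve: the probability
    that the range of n_points probing deviations, assumed i.i.d.
    N(0, sigma^2), stays within the maximum permissible error."""
    rng = np.random.default_rng(seed)
    devs = rng.normal(0.0, sigma, size=(n_sim, n_points))
    ranges = devs.max(axis=1) - devs.min(axis=1)
    return float(np.mean(ranges <= mpe))

# OC curve: pass probability as a function of the true probing dispersion
# (MPE value of 4.0 is hypothetical, in the same units as sigma)
oc = {s: pass_probability(sigma=s, mpe=4.0) for s in (0.5, 1.0, 1.5)}
```

Sweeping sigma traces the full OC curve, from which the two error probabilities (rejecting a conforming CMS, accepting a non-conforming one) can be read off directly.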
Monitoring Infections for Blood Donor Management
Authors: Mart Janssen (University Medical Center Utrecht), Rik Hopmans (Eindhoven University of Technology), Alessandro Di Bucchianico (Eindhoven University of Technology)
Primary area of focus / application: Process
Secondary area of focus / application: Reliability
Keywords: Process monitoring, Infectious diseases, Blood transfusion, Risk assessment
We developed various methods for detecting outliers and trends. In addition to the traditional approach of considering infection rates per time unit we also developed monitoring strategies based on the number of donations between infections. In order to compare the performance of various strategies we created a simulation model. This model mimics the whole Dutch blood supply organisation: individual donor centers, donor types, donation frequencies and infection rates. All model parameters are based on the nationwide donation and infection database from the Dutch blood supply foundation.
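The donations-between-infections idea can be sketched as a simple control rule (a minimal illustration with a hypothetical in-control rate and false-alarm level, not the Dutch figures or the authors' full monitoring strategy): under an in-control infection rate p0, the number of donations until the next infection is geometrically distributed, so unusually short gaps signal a raised rate.

```python
import math

def geometric_lcl(p0, alpha=0.005):
    """Largest gap g with P(G <= g) <= alpha when G ~ Geometric(p0),
    using P(G <= g) = 1 - (1 - p0)^g. Gaps at or below this lower
    control limit are flagged as suspiciously short."""
    return math.floor(math.log(1.0 - alpha) / math.log(1.0 - p0))

def monitor(gaps, p0, alpha=0.005):
    # Flag each observed donations-between-infections gap against the LCL
    lcl = geometric_lcl(p0, alpha)
    return [g <= lcl for g in gaps]

# Hypothetical in-control rate: 1 infection per 10,000 donations
signals = monitor([30, 2_000, 45], p0=1e-4)
```

Unlike counting infections per time unit, this rule reacts as soon as a short gap occurs, which is attractive for low infection rates where time-binned counts are mostly zero.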
We will present the results comparing both the traditional and new monitoring strategies and highlight the advantages of the proposed method. 
Forecasting the French Electrical Consumption using Sparse Models and Aggregation
Authors: Mathilde Mougeot (University Paris Diderot, LPMA)
Primary area of focus / application: Modelling
Secondary area of focus / application: Mining
Keywords: Intra-day load curves, Sparse functional regression, Forecast, Aggregation
Submitted at 28-Apr-2014 16:13 by Mathilde Mougeot
Accepted
Managing and developing the electricity transport network is essential to provide quality electricity on a continuous basis to all consumers.
We investigate here sparse functional regression models to forecast electricity consumption.
The consumption time series is analyzed through intraday load curves
of 24 hours sampled each 30mn. Using a non parametric model, we first show that each curve can be approximated by a sparse linear combination of functions of a dictionary composed of both specific well elaborated endogenous functions and exogenous functions provided by weather conditions \cite{MPT2012, AADA2013}.
The forecasting strategy begins with an information retrieval task. Several sparse prediction models are provided by different 'experts'. Each expert computes a model based on a dedicated strategy for choosing the most accurate selection of dictionary variables and estimating the linear combination. The final forecast is computed by aggregating these different forecasters with exponential weights [WIPFOR2014].
We elaborate and test this method in the setting of predicting the national French intra-day load curves over a period of 7 years, on a large database including daily French electrical consumption as well as many meteorological inputs, calendar information and functional dictionaries.
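Aggregation with exponential weights can be sketched as follows (a toy illustration, not the WIPFOR implementation: the learning rate, squared loss and the two synthetic experts are assumptions). Each expert's weight decays exponentially with its cumulative loss, so the aggregate gradually concentrates on the best forecasters:

```python
import numpy as np

def aggregate_exponential_weights(expert_preds, y, eta=1.0):
    """Sequential exponentially weighted average forecaster.
    expert_preds: (T, K) forecasts from K experts; y: (T,) outcomes.
    At each step the forecast is a weighted mean of the experts, with
    weights exp(-eta * cumulative squared loss), renormalized."""
    T, K = expert_preds.shape
    cum_loss = np.zeros(K)
    forecasts = np.empty(T)
    for t in range(T):
        w = np.exp(-eta * cum_loss)
        w /= w.sum()                      # normalize to a probability vector
        forecasts[t] = w @ expert_preds[t]
        cum_loss += (expert_preds[t] - y[t]) ** 2
    return forecasts

# Toy example: one accurate expert, one systematically biased expert
t = np.linspace(0.0, 3.0, 50)
y = np.sin(t)
experts = np.column_stack([y, y + 1.0])   # expert 0 exact, expert 1 biased
agg = aggregate_exponential_weights(experts, y)
```

The aggregate starts at the plain average of the experts and converges to the accurate one as the biased expert accumulates loss, which is the behavior exploited when mixing the sparse models of the different experts.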
The results on the national French intra-day load curve clearly show the benefits of using a sparse functional model to forecast electricity consumption.