Submitted abstracts

For the ENBIS3 / ISIS3 conference

More information on this conference can be found on the events page.

 


Index by number

1. 2. 3. 4. 5. 6. 7. 9. 10. 12. 13. 14. 15. 16. 17. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30. 31. 32. 33. 34. 35. 36. 38. 40. 41. 42. 43. 44. 46. 47. 49. 50. 51. 52. 53. 54. 56. 58. 59. 60. 61. 63. 64. 65. 66. 67. 68. 69. 70. 71. 72. 73. 74. 75. 76. 77. 78. 79. 81. 84. 87. 88. 89. 90. 92. 93. 94. 96. 97. 98. 99. 100. 101. 102. 103. 104. 105. 106. 108. 109. 110. 111. 112. 113. 114. 115. 116. 118. 119. 121. 122. 123. 124. 125. 126.

Index by author

Abbas
Ahlemeyer-Stubbe
AlMutairi
Arvidsson
Banens
Barceló
Barone
Bates
Battisti
Berni
Bjerke
Brasini
Carot Sierra
Caulcutt
Chakhunashvili
Chernyak
Chornous
Christensen (26)
Christensen (27)
Cossari
de León (114)
de León (115)
de Mast
Di Bucchianico
Dingstad
Does (50)
Does (51)
Donev
Dorta-Guerra
Ennio Davide
Evandt (93)
Evandt (121)
Fountain
Giacalone
giudici
Göbel
GONZÀLEZ MORA (73)
GONZÀLEZ MORA (90)
Goos
Govaerts
Halevy
Henkenjohann
Herrmann
Hirotsu
Iwasaki
Jednorog
Kaplan
Kenett
Kho
Kraus
Kugiumtzis
kulahci
Kunert
Langsrud
Lee
Linsley
Logsdon
LOPEZ
Marco
Martínez Gómez
Maza
McCollin
Nicolson (124)
Nicolson (125)
Noda
Ograjensek
Palumbo
Park
Peng
Penim
Pettersson
Pievatolo
Pottel
Pozueta
Ramachandran
Ramalhoto
RATNAPARKHI
Reis
riba
Rodero
Ruggeri
Ruggoo
Schleppe
Sheng-Tsaing
Shper
Stewardson (23)
Stewardson (24)
Su (47)
Su (63)
Suzuki
Theis
Trip
Tyssedal
van Wieringen
Vermaat
Vicario
Volf
Vuchkov
WADA
Yañez
Yang
YASUI
Zarzo (33)
Zarzo (34)
Zempléni

 


3. Asymptotic Validity of Test Procedures for Linear Hypotheses in a General Linear Model and Its Application to MANOVA
Authors: Kazuo Noda (ISI) and Ono, Hideo
Keywords: UMP invariants, test sizes, powers of tests, one-way and two-way models, repeated but correlated observations
Format: presentation (Statistical modelling)
Contact: nodak@ge.meisei-u.ac.jp

We consider a general linear model of an observable random vector Y having unknown split mean vectors μ and τ. Here the error vectors are iid with unknown covariance matrix Σ. In Noda and Ono (2001), test procedures for linear hypotheses on μ and τ are given and proved to be UMP invariant under a known structure of Σ if the underlying distribution of each error vector is assumed to be multivariate normal. In this presentation, we first show the asymptotic validity of test procedures having the same forms as those in Noda and Ono (2001) in the model mentioned above. That is, we show that the sizes of the tests under the null hypotheses are asymptotically unaffected by the underlying distribution and hence the same as if the error terms were multivariate normally distributed. Moreover, the powers of these tests are proved to be asymptotically the same as those of the UMP invariant tests. We then apply these results to a multivariate analysis of variance (MANOVA) of one-way and two-way models in which observations are repeated but correlated. In these MANOVA models, the conditions of the asymptotic theory mentioned above are shown to be satisfied.



1. The Sampling Strategy for a Banking Survey and Estimation of Market Potential in Ukraine
Authors: Oleksandr Chernyak (Kyiv National Taras Shevchenko University) and Galyna Chornous (Kyiv Taras Shevchenko University)
Keywords: survey sampling, stratified sampling, market potential
Format: presentation (Business and economics)
Contact: chernyak@univ.kiev.ua

The mean and total capital of the commercial banks of Ukraine in 2002 were estimated using different survey sampling methods. There are 163 commercial banks in Ukraine. Simple random sampling (mean per bank, ratio estimate, regression estimate) and stratified random sampling (mean per bank, separate and combined ratio estimates, separate and combined regression estimates) were used. The best result was given by stratified sampling (mean per bank). In addition, marketing research on the Ukrainian tobacco market was carried out using stratified random sampling, and as a result the market potential, both as a whole and for individual trademarks, was estimated.
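
As an illustration of the stratified mean-per-bank estimator that performed best in the study, here is a minimal sketch in Python; the strata, sample values and units are invented for illustration and are not the Ukrainian survey data.

import numpy as np

# Hypothetical strata of commercial banks (population sizes and sampled capital, arbitrary units);
# the real survey covers 163 Ukrainian banks, whose figures are not reproduced here.
strata = {
    "large":  {"N": 13,  "sample_capital": np.array([950., 870., 1010., 990.])},
    "medium": {"N": 50,  "sample_capital": np.array([210., 190., 260., 240., 205.])},
    "small":  {"N": 100, "sample_capital": np.array([45., 60., 38., 52., 49., 41.])},
}

# Stratified "mean per bank" estimator: total = sum over strata of N_h * mean of sampled capital
total_estimate = sum(s["N"] * s["sample_capital"].mean() for s in strata.values())
mean_estimate = total_estimate / sum(s["N"] for s in strata.values())
print(f"estimated total capital: {total_estimate:.1f}, mean per bank: {mean_estimate:.1f}")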



4. A methodological comparison of strategies for quality improvement
Author: Jeroen de Mast (IBIS UvA)
Keywords: Quality improvement; strategy; Taguchi; Shainin; Six Sigma
Format: presentation (Six Sigma and quality improvement)
Contact: jdemast@science.uva.nl

Quality improvement is understood by Juran to be the systematic pursuit of improvement opportunities in production processes. Several methodologies have been proposed in the literature for quality improvement projects. Three of these methodologies – Taguchi’s methods, the Shainin System and Six Sigma's Breakthrough Cookbook – are compared. The comparison is facilitated by a methodological framework for quality improvement. The methodological weaknesses and strong points of each strategy are highlighted. The analysis shows that the Shainin System focuses mainly on the identification of the root cause of problems. Both Taguchi’s methods and Six Sigma's Breakthrough Cookbook exploit statistical modelling techniques. The Six Sigma programme is the most complete strategy of the three.



5. Internet Subscriber Study: A Data Source for Policy and Market Intelligence Systems
Authors: Ramasamy Ramachandran (MIMOS Berhad / National Information Technology Council) and Asha Ratina Pandi
Keywords: Internet subscriber;
Format: presentation (Business and economics)
Contact: ramachan@mimos.my

This paper investigates the viability of generating statistical information on a regular basis as a by-product of the Internet Service Provision (ISP) system. Being an administrative record, like the vital events system for the registration of births and deaths, the ISP system could provide regular profile statistics about new and registered Internet applicants. The information that can be collected includes demographic, social, economic and Information and Communication Technology (ICT) usage characteristics. Currently, there are six licensed ISPs in the country. However, the study is planned to cover only JARING, as this ISP was the first and remains one of the leading commercial service providers in the country. Nowadays, policy formulators and development practitioners are gravely concerned about Internet growth. In particular, the Internet is phenomenal, ubiquitous and pervasive in all spheres of life. It brings about an unprecedented proliferation in access to global information, communication, knowledge, entertainment and networking (ICKEN). As a result, social, economic and governance structures and processes are undergoing change. The ways in which individuals, families, communities, societies and organizations communicate and interact socially, perform and deliver business and public-service transactions, network and learn are changing, in some instances drastically. Consequently, firm-level productivity, national competitiveness, the efficiency and effectiveness of service delivery, social cohesiveness and human relationships are affected, posing challenges to sustainable development and to enhancing quality of life. Recognizing these impacts and effects, the government of Malaysia, in its National Information Technology Agenda (NITA), has explicitly raised the need for new statistics. Towards this end, as an initial step, MIMOS Berhad initiated the Internet Subscriber Study (ISS) at JARING counters nationwide to collect and collate profiles of new Internet applicants.



6. A DFSS (Design for Six Sigma) procedure and its applications
Authors: Sung Park (Seoul National University) and Sung H. Park (Seoul National University)
Keywords: Six Sigma, Design for Six Sigma
Format: presentation (Six Sigma and quality improvement)
Contact: parksh@plaza.snu.ac.kr

The Six Sigma approach for R&D is called DFSS (Design for Six Sigma). A desirable approach to DFSS is suggested and its applications are illustrated. Many industrial statistical methods, such as DOE, SPC, robust design and reliability, are used in DFSS, and the power of DFSS is studied.



7. The Balance Between Strict Rules and Judgmental Discretion in Organizations
Authors: Avner Halevy (University of Haifa) and Eitan Naveh
Keywords: Quality management, organizational - procedures, discretion, improvisation
Format: presentation (Business and economics)
Contact: ahalevy@univ.haifa.ac.il

We address the question of the appropriate balance between strict hierarchical structure and rules, and flexibility and judgmental discretion by managers and workers in an organization. The notions of flexibility, loose structure, popular judgment, improvisation and discretion will be discussed, using examples from different organizational environments. The discussion will lead to the argument that the term "improvisation" has not been properly and consistently discussed in the literature in the organizational context. We will suggest a definition of judgmental discretion in an organization and raise several questions: finding the correct dosage of structure and flexibility desirable for a given organization; the ability to maintain the desirable balance; the problem of successfully empowering personnel and endowing them with the knowledge necessary, so that discretion and judgment are effectively implemented in their course of work.



9. A Discussion of the Interpretation of Dispersion Effects
Authors: Martin Arvidsson (Dep of Quality Sciences, Chalmers University of Technology) and Ida Gremyr (Dep Quality Sciences, Chalmers)
Keywords: dispersion effects, robust design methodology, control factors, noise factors
Format: presentation (Design of experiments)
Contact: mararv@mot.chalmers.se

Design of experiments is often used within robust design methodology to identify interactions between control and noise factors. Interactions between control factors and noise factors that can be controlled and varied systematically in the experiment can be modelled to find solutions to robust design problems. Such solutions use interactions to reduce the influence of noise factors on certain performance characteristics. However, it is not possible to control all noise factors in an experiment, either due to their nature or due to the simple fact that they are unknown. Consequently, when modelling experiments the response error can be split into an experimental error and an error term representing the effect of random noise. In the identification of dispersion effects this latter term is of interest rather than the experimental error. The purpose of this paper is to discuss the interpretation of dispersion effects. The main argument is to view dispersion effects as manifested interactions between control factors and noise factors not controlled in the experiment. This interpretation has consequences for the modelling of the responses; it makes it sensible to model the response variance using the same assumptions as when modelling the response mean. A special case considered is that of split-plot experiments, as this type of experiment is suitable in robust design methodology. Viewing dispersion effects as interactions makes the models traditionally used for split-plot experiments insufficient. Instead a new way of modelling split-plot experiments is suggested, one that takes into account the view of dispersion effects as interactions. This new and better-suited model should be useful within robust design methodology.
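
To make the interpretation concrete, the following sketch treats a dispersion effect as it appears in practice: the spread of replicated observations is modelled on the control factors in the same way as the mean. The 2^3 design, effect sizes and noise level are invented for illustration and are not taken from the paper.

import numpy as np

# Hypothetical 2^3 factorial in coded units with four replicates per run
X = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)], dtype=float)
rng = np.random.default_rng(1)
# Factor A shifts the mean and also inflates the spread (a dispersion effect)
reps = np.array([rng.normal(10 + 2*x[0] + x[1], 1.0 + 0.6*x[0], size=4) for x in X])

design = np.column_stack([np.ones(len(X)), X])                       # intercept + main effects
mean_coef, *_ = np.linalg.lstsq(design, reps.mean(axis=1), rcond=None)
disp_coef, *_ = np.linalg.lstsq(design, np.log(reps.var(axis=1, ddof=1)), rcond=None)
print("location effects:", np.round(mean_coef, 2))
print("log-variance (dispersion) effects:", np.round(disp_coef, 2))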



10. The design of mixture experiments in blocks
Authors: Peter Goos (Katholieke Universiteit Leuven) and Alexander N. Donev
Keywords: fixed and random blocks, minimum support design, mixture experiment, orthogonal blocking, qualitative variables
Format: presentation (Design of experiments)
Contact: peter.goos@econ.kuleuven.ac.be

So far, the optimal design of blocked experiments involving mixture components has received scant attention. In this talk, an easy method to construct efficient minimum support mixture experiments in the presence of fixed and/or random blocks is presented. The method extends Donev's (1989) design construction method and can be used when qualitative variables are involved in the mixture experiment as well. It is also shown that orthogonally blocked mixture experiments (see e.g. Draper et al. 1993, Prescott et al. 1993, 1997 and Prescott 2000) are highly inefficient compared to D-optimal designs.



13. Design optimality in the presence of errors in factor levels
Author: Alexander Donev (University of Sheffield)
Keywords: D-optimality, design region, errors in variables, lost observations, random experiments
Format: presentation (Design of experiments)
Contact: a.n.donev@sheffield.ac.uk

Setting the factor levels prescribed by an experimental design is often done with errors. These are usually ignored, as they are considered negligible, but also because there is little available advice on how to choose an experimental design that is robust against such errors. We consider the case when ignoring the errors would not be appropriate and when the actual design can be recorded exactly, hence there is no errors-in-variables estimation issue. We employ Bayesian generalized criteria of D-optimality (Atkinson and Donev, 1992, page 214) to search for designs and argue that the criterion used by Pronzato (1998, 2002), which is based on maximizing the expectation of the determinant of the information matrix, could be misleading. Instead, maximizing the determinant of the covariance matrix should be used. As the criteria of optimality are difficult to evaluate analytically, we illustrate how prior information about the distribution of the errors in the factor levels can be used to obtain the empirical distribution of both the designs of interest and the corresponding criteria of optimality. Standard designs such as the two-level and three-level factorial designs are used as examples because of their extensive use in practice. We also discuss how the design region should be chosen when errors in setting the factor levels might lead to loss of observations. In this case a compromise is sought between using a large design region and the loss in efficiency of the estimates of the model parameters due to a possible loss of observations.
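
A minimal sketch of the simulation idea described above: perturb the intended factor levels with an assumed error distribution and look at the empirical distribution of a D-type criterion. The 2^2 design, the first-order-plus-interaction model and the error standard deviation are illustrative assumptions, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)
target = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)   # intended 2^2 design
sigma_e = 0.1   # assumed SD of the errors made when setting the factor levels

def log_det_info(design):
    # D-criterion for a first-order model with interaction: log det(X'X)
    X = np.column_stack([np.ones(len(design)), design, design[:, 0] * design[:, 1]])
    return np.linalg.slogdet(X.T @ X)[1]

# Empirical distribution of the criterion for the designs actually realised
realised = [log_det_info(target + rng.normal(0, sigma_e, target.shape)) for _ in range(5000)]
print("nominal log|X'X|:", round(log_det_info(target), 3))
print("mean / 5th percentile under level errors:",
      round(np.mean(realised), 3), "/", round(np.percentile(realised, 5), 3))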



12. Modelling and Optimisation of Quality and Costs in Hearth Bread Production
Authors: Gunvor Dingstad (MATFORSK) and Bjørn-Helge Mevik and Ellen Mosleth Færgestad
Keywords: hearth bread, wheat quality, protein quality, non-linear optimisation, least cost formulations
Format: presentation (Statistical modelling)
Contact: gunvor.dingstad@matforsk.no

The influence of flour properties and process parameters on the final loaf characteristics of hearth bread is not yet fully understood. By adjusting the process settings according to the wheat flour qualities, hearth bread within acceptable limits can be made from very different flours. In addition, flour and production costs may be taken into account to find least-cost recipes giving high-quality loaves. In the present study hearth bread was made by varying three wheat flour qualities and two baking process parameters. A mixture–process design was constructed and 99 hearth bread batches were made. Quality and cost models were built and optimised with a non-linear optimisation program.



14. Management by fact
Author: Roland Caulcutt (Caulcutt Associates)
Keywords: Management, Decisions, Data, Variation
Format: workshop (Six Sigma and quality improvement)
Contact: rolandcaulcutt@compuserve.com

Some organisations claim to have a management-by-fact culture. One assumes that, within these companies, management decisions are based on data and the logic of these decisions is clear for all to see. The people who work in such a culture will enjoy the openness and trust that this way of working must promote. Those who work in other companies may wonder whether such claims can be justified, how such a culture can be created and what it is like to work in a management-by-fact environment. This highly participative, one-hour workshop will offer you the opportunity to discover how effectively you can manage by fact. It may demonstrate that management-by-fact is much more difficult than many people suppose.



15. Set-up of a Statistical Process Control plan to monitor a solar cell production process in the photovoltaic industry
Authors: Bernadette Govaerts (Institut de Statistique, Université Catholique de Louvain) and Christophe Allebé (IMEC, Leuven, Belgium)
Keywords: Statistical process control, control charts, ARL, solar cells
Format: presentation (Process modelling and control)
Contact: govaerts@stat.ucl.ac.be

Photovoltaics is a growing industrial area and, like everywhere else, it is increasingly concerned with the daily monitoring of its production stability and the quality of its final product. This talk presents the methodology used to set up a statistical process control (SPC) plan to follow a semi-industrial solar cell process developed at IMEC in Belgium. A typical industrial solar cell process contains several steps: texturisation, POCl3 diffusion, parasitic junction removal, SiNx:H deposition, screen printing of the contacts and firing. For each of these steps, a representative parameter can be chosen to check whether the related part of the process has been well conducted (e.g. sheet resistivity for POCl3 diffusion). A particularity of this process and the resulting data is, first, that the crystalline silicon wafers are processed in batches of 15 pieces and, second, that measurements may be taken at several positions on a selected wafer. The SPC plan proposes to follow each variable of interest with three simultaneous charts: (1) a mean chart, to follow the variability between batches, (2) a standardised mean chart, to follow the variability between wafers in each batch, and (3) a range chart, to monitor the differences between positions within each wafer. Adapted formulae have been developed to derive adequate control limits for these three charts on the basis of the different variance components of the process. Moreover, an important question is to decide the number of measurements needed to ensure a good sensitivity of the chart to possible process shifts. A sampling plan is defined by the number of wafers monitored in each batch and the number of positions measured on each wafer. Several analytical calculations and computer simulations were done to derive tables linking the average number of batches needed to detect a process drift (ARL) to the amplitude and type of drift encountered. These tables show clearly the link between the type of drift and the chart most likely to detect it. This SPC plan is now in the process of being set up as an automatic computerized system for the IMEC process.
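
As a rough illustration of how the batch-mean chart limits can be derived from the variance components, here is a sketch using the standard formula for the variance of a batch mean under a nested batch/wafer/position model; the variance components, target value and sampling plan are made up, and the formula is the textbook one rather than the adapted formulae developed at IMEC.

import numpy as np

# Assumed variance components (between batch, between wafer within batch, between position
# within wafer) and sampling plan; these numbers are illustrative only.
var_batch, var_wafer, var_pos = 0.40, 0.15, 0.05
n_wafers, n_positions = 5, 3          # wafers measured per batch, positions per wafer
process_mean = 50.0                   # e.g. a target sheet resistivity

# Standard deviation of a batch mean under this sampling plan
sd_batch_mean = np.sqrt(var_batch + var_wafer / n_wafers + var_pos / (n_wafers * n_positions))
ucl, lcl = process_mean + 3 * sd_batch_mean, process_mean - 3 * sd_batch_mean
print(f"batch-mean chart limits: LCL={lcl:.2f}, UCL={ucl:.2f}")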



16. THE EFFECTS OF IMPRECISE MEASUREMENT ON CONTROLLING TWO DEPENDENT PROCESSES
Authors: Su-Fen Yang (National Chengchi University) and Han-Wei Ho
Keywords: Imprecise measurement; dependent processes; ARL; cause-selecting control chart.
Format: presentation (Process modelling and control)
Contact: yang@nccu.edu.tw

The presence of imprecise measurement may seriously affect the efficiency of process control and production cost. For two dependent processes, the quality variable in the first process may influence the quality variable in the second process. To effectively monitor and distinguish the state of the two dependent processes, an EWMA control chart and a cause-selecting control chart are proposed. The EWMA control chart, based on the original observations of the first process, is constructed to monitor small shifts in the first-process mean. The cause-selecting control chart, or the EWMA control chart based on the residuals of the second process, is constructed to monitor small shifts in the second-process mean. The effects of imprecise measurement on the performance of the two proposed control charts are examined for the case where the mean of each process may be changed by an assignable cause. Application of the proposed control charts is illustrated through an example. Numerical examples show the effects of imprecise measurement on the performance of the proposed control charts, measured by the average run length. They show that imprecise measurement may seriously affect the ability of the proposed control charts to detect process disturbances quickly, compared with the proposed control charts without measurement errors and with a Hotelling T2 control chart, respectively.
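
A minimal sketch of the cause-selecting idea (not the authors' model or their measurement-error study): regress the second-process variable on the first, then monitor an EWMA of the residuals so that shifts in the second process are separated from variation transmitted by the first. All data are simulated and the chart constants are conventional choices.

import numpy as np

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(10, 1, n)                     # quality variable of the first process
x2 = 5 + 0.8 * x1 + rng.normal(0, 0.5, n)     # second process depends on the first
x2[150:] += 0.7                               # small shift in the second-process mean

# Cause-selecting step: remove the influence of the first process using a Phase I regression
b1, b0 = np.polyfit(x1[:100], x2[:100], 1)
residuals = x2 - (b0 + b1 * x1)

# EWMA of the residuals (lambda = 0.2); limits from the asymptotic EWMA variance
lam, sigma = 0.2, residuals[:100].std(ddof=1)
ewma = np.zeros(n)
for t in range(n):
    ewma[t] = lam * residuals[t] + (1 - lam) * (ewma[t - 1] if t else 0.0)
limit = 3 * sigma * np.sqrt(lam / (2 - lam))
print("first signal at observation:", int(np.argmax(np.abs(ewma) > limit)))  # 0 = no signal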



17. Statistical Structuring, the dominant competence in Statistical Consultancy
Author: Paul Banens (CQM)
Keywords: Statistical Consultancy
Format: presentation (Statistical consulting)
Contact: banens@cqm.nl

Statistical consultancy is, by nature, based on the two fundamental disciplines of applied statistics: ‘data analysis’ and ‘probability calculus’. But too often a statistical consultant, especially in industry, is puzzled when reviewing a finished project: although his or her contribution is recognized as breakthrough added value, he or she probably used only simple statistical methods. What made the contribution so valuable for the customer? I believe the real added value of a statistical consultant is his or her power to structure a problem (reality) in a specific way. It distinguishes our view of problems from that of all other disciplines. I would call it ‘Statistical Structuring’. In this talk I would like to explain what I mean by ‘Statistical Structuring’, using some examples.



123. PERCEPTIONS OF SERVICE QUALITY FORMED BY MEMBERS OF A RETAILER LOYALTY PROGRAMME
Authors: Irena Ograjensek (University of Ljubljana, Faculty of Economics) and Vesna Zabkar (Faculty of Economics)
Keywords: service quality, loyalty programme, clustering
Format: presentation (Business and economics)
Contact: irena.ograjensek@uni-lj.si

Ever since the advent of smart loyalty cards, loyalty programmes have been transcending their traditional role as creators of exit barriers by transforming themselves into facilitators of customer data collection. Apart from demographic and socio-economic data, behavioural (transaction) as well as psychographic (survey) data are being collected for known entities (individual customers enrolled in a loyalty programme). An analysis based on a combination of these three types of data can be of immense value for service providers striving to improve service quality for different customer segments. The starting point for such endeavours can be the SERVQUAL model developed by Parasuraman et al. (1985, 1988, 1994), which defines five dimensions of service quality. Although adopted as the standard by many researchers in the field of service quality, the model has been subject to constant re-examination and criticism, due to the fact that the five-factor solution cannot be generalised (the number of distinct service quality dimensions found in replication studies conducted in different service industries varies from one to nine). Our study is not another attempt to prove SERVQUAL's generalisability. Using this model as a starting point, the study primarily focuses on the segmentation of loyalty programme members from the viewpoint of their perceptions of service quality. A sample of 201 members of a retailer's loyalty programme is used in the analysis. Three distinct service quality dimensions (personnel appearance, empathy and assurance) serve as input to the clustering process. For cluster profiling, selected demographic, socio-economic and transaction variables are used. Apart from methodological issues, managerial implications of the findings are discussed in detail.



20. Statistical properties of internet-based market research surveys
Authors: Oren Kaplan (The College of Management, Israel; KPA Ltd.) and Joseph Raanan, Ron Kenett (All from KPA Ltd. and The College of Management, Israel)
Keywords: Surveys, Internet, Sampling, Market-Research
Format: presentation (Business and economics)
Contact: oren@statistica.co.il

The Internet has become widely available all over the world in a very short period of time. The properties of traditional survey methods (face-to-face, mail and telephone interviews) are well known. Those methods are widely used for academic and commercial research purposes. However, the situation is very different for Internet-based surveys: there is a lack of knowledge about their statistical properties and their reliability. The presentation will review the survey methods based on the new technologies, including their advantages and disadvantages, as well as the statistical challenges that market researchers face when they wish to use these methods.



21. Optimal Variable EWMA Controller
Author: Tseng Sheng-Tsaing (National Tsing-Hua University)
Keywords: Run by run control; Variable EWMA controller
Format: presentation (Process modelling and control)
Contact: sttseng@stat.nthu.edu.tw

The exponentially weighted moving average (EWMA) feedback controller (with a fixed discount factor) is a popular run-by-run control scheme which primarily uses data from past process runs to adjust settings for the next run. Although the EWMA controller with a small discount factor can guarantee long-term stability (under fairly regular conditions), it usually requires a moderately large number of runs to bring the output of a process to its target. This is impractical for processes with small batches, because the output deviations are usually very large during the first few runs and, as a result, the output may be outside the process specifications. In order to reduce a possibly high rework rate, we propose a variable discount factor to tackle the problem. We state the main results, in which the stability conditions and the optimal variable discount factor of the proposed EWMA controller are derived. An example is given to demonstrate the performance. Moreover, a heuristic is proposed to simplify the computation of the variable discount factor. It is seen that the proposed method is easy to implement and provides a good approximation to the optimal variable discount factor.
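
For readers unfamiliar with run-by-run control, the sketch below implements a plain EWMA controller with the discount factor exposed per run, so that a run-dependent (variable) schedule can be plugged in; the process gain, offset, noise level and the simple 0.8/t schedule are illustrative assumptions, not the optimal schedule derived in the talk.

import numpy as np

rng = np.random.default_rng(3)
target, beta, offset = 0.0, 1.0, 2.0       # assumed process gain and (unknown to the controller) offset
n_runs = 25

def run_process(u):
    # true process: y = offset + beta*u + noise
    return offset + beta * u + rng.normal(0, 0.2)

a_hat, u = 0.0, 0.0                        # EWMA estimate of the offset, current recipe
for t in range(1, n_runs + 1):
    y = run_process(u)
    w = max(0.8 / t, 0.2)                  # illustrative variable discount factor: large early, smaller later
    a_hat = w * (y - beta * u) + (1 - w) * a_hat
    u = (target - a_hat) / beta            # recipe for the next run
    if t in (1, 2, 5, 25):
        print(f"run {t:2d}: output {y:+.2f}, next recipe {u:+.2f}")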



22. A Review of the Intensity Function of the Non-Homogeneous Poisson Process
Author: Christopher McCollin (The Nottingham Trent University)
Keywords: NHPP, intensity function, hazard rate
Format: presentation (Statistical modelling)
Contact: christopher.mccollin@ntu.ac.uk

The Non-Homogeneous Poisson Process (NHPP) is reviewed and some results regarding the intensity function are presented. The well-known intensity functions are listed together with some less well-known ones, to bring the literature into line with that for non-repairable items. The review also introduces some non-repairable distributions based on the intensity functions. A discussion follows which provides some simple graphical and estimation procedures for the models.
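
As a small worked example for the best-known member of this family, the power-law intensity λ(t) = λβt^(β−1), the sketch below computes the standard time-truncated maximum-likelihood estimates; the failure times are invented and the formulas are the usual textbook (Crow/AMSAA) ones, not results from the review.

import numpy as np

# Hypothetical failure times (hours) of a repairable item, observed up to T = 1000 h
t = np.array([55., 166., 205., 341., 488., 567., 731., 860., 914.])
T, n = 1000.0, len(t)

# Time-truncated ML estimates for the power-law intensity lambda*beta*t**(beta-1)
beta_hat = n / np.sum(np.log(T / t))
lambda_hat = n / T ** beta_hat
intensity_at_T = lambda_hat * beta_hat * T ** (beta_hat - 1)
print(f"beta = {beta_hat:.2f}, lambda = {lambda_hat:.4g}, intensity at T = {intensity_at_T:.4f} per hour")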



23. Aspects of Working with SMEs: Contacts, Funding and Working methods.
Authors: Dave Stewardson (ISRU) and Shirley Coleman, Matt Linsley, Fiona Gray
Keywords: SMEs, Subsidies, Approaches, Team Working, Graphics
Format: presentation (Statistical consulting)
Contact: d.j.stewardson@ncl.ac.uk

This paper outlines some of the aspects and issues of working with smaller manufacturing companies. The findings are based on the work of a large unit based in the North East of England over several years, much of which concentrated on helping SMEs (Small to Medium Enterprises) to improve their processes. We examine how best to make contact with prospective clients, how to obtain funding to support this work, aspects of team and group working, and how best to present results. Three typical cases are followed, warts and all, by way of illustration. It is hoped that this may help statistical practitioner colleagues to better target effort when dealing with smaller companies and the people who work within them.



24. A case study in multiple-response designed experiments for developing a new plastics manufacturing process
Authors: Dave Stewardson (ISRU) and Shirley Coleman, Matt Linsley
Keywords: DoE, Multiple responses, Contradiction, Sequential experimentation
Format: presentation (Design of experiments)
Contact: d.j.stewardson@ncl.ac.uk

This paper covers a series of designed experiments conducted to help develop a new plastics manufacturing process, which involved a number of sequential process steps designed to make a plastics product with pre-defined specifications for use in the health industry. There were eleven responses, six dimensional and five aesthetic, some of which showed conflicting results over important control factors. The experimental work was sequential, with 14 control variables and a number of other uncontrolled factors that could be monitored. The case shows how, in the early stages of the process development, required changes to the process were identified and made, and how it was established that some uncontrolled factors were particularly important. A simple spreadsheet tool was used to identify combined optimal responses under changing input factors. The case shows some important practical issues that are often encountered in DoE work, and how even the most complex of problems can be handled by using these powerful techniques.



25. Statistical Flaws in Excel
Author: Hans Pottel (Innogenetics)
Keywords: Statistics, Microsoft Excel, flaws
Format: presentation (Reliability and safety)
Contact: Hans_Pottel@innogenetics.com

The “Commercially Off-The-Shelf” (COTS) software package Microsoft Excel is very widespread, for various reasons: its integration within the Microsoft Office suite; the large range of intrinsic functions available; the convenience of its graphical user interface; and its general usability, enabling results to be generated quickly. It is accepted that spreadsheets are a useful and popular tool for processing and presenting data. In fact, Microsoft Excel spreadsheets have become something of a standard for data storage, at least for smaller data sets. This, along with the previously mentioned advantages and the fact that the program is often packaged with new computers, which increases its availability, naturally encourages its use for statistical analysis. Many statisticians find this unfortunate, since Excel is clearly not a statistical package. There is no doubt about that, and Excel has never claimed to be one. But one should face the fact that, due to its easy availability, many people, including professional statisticians, use Excel, even on a daily basis, for quick and easy statistical calculations. Therefore, it is important to know the flaws in Excel. This presentation gives an overview of known statistical flaws in Excel, based on the literature, the internet and my own experience.



26. Statistics as a mediator between production and marketing: A case study
Author: Antje Christensen (Novo Nordisk A/S)
Keywords: drug stability, regression analysis, capability
Format: presentation (Statistical modelling)
Contact: antc@novonordisk.com

Pharmaceuticals degrade over their shelf life at the pharmacy and at the patient's. To account for this, separate specification limits are set that a drug must meet at all times during its shelf life and at the time of release to the market. The drug NovoSeven® is stored cool to prevent, among other things, the formation of dimers. Marketing wished to extend the allowed storage conditions by several weeks at room temperature. Statistical analysis of stability studies made it possible to predict how the product would react to the changed conditions. Based on this prediction, a maximal release limit was calculated for each release parameter that allowed the product to stay within the registered shelf life limits throughout its shelf life when subject to the changed conditions. Calculations were made for several scenarios with different storage times at room temperature. The maximal release limits were evaluated in a capability analysis of production data. The result of the analysis formed the basis for the business decision about introducing the extended storage conditions to the market.



27. Tom Lang's puzzle
Author: Antje Christensen (Novo Nordisk A/S)
Keywords: communication barriers
Format: workshop (Statistical consulting)
Contact: antc@novonordisk.com

Conducting a valid statistical analysis is one thing, communicating the results to the client is quite another. Communication skills and knowledge of communication barriers are therefore important for a statistical consultant. Medical writer and communication teacher Tom Lang has devised a laboratory example of the communication process in technical writing. It involves a puzzle and requires participation from the audience. The example will be presented and the outcome discussed. The points addressed include tacit assumptions, anticipation of the reader’s needs for information, and design for ease of communication. preferred workshop duration: 60 min



28. Spanish Customer Satisfaction Indices for Automobile models: A marketing Application
Authors: Cristina Lopez (University of Economics, Basque Country) and Karmele Fernandez (etpfeagk@bs.hu.es, University of the Basque Country) and Petr Mariel (etpmachp@bs.ehu.es, University of the Basque Country)
Keywords: Structural Equation Modelling, Factor Analysis and Satisfaction Indices
Format: presentation (Statistical modelling)
Contact: edplocac@bs.ehu.es

In this paper, we present a new theoretical representation of the Consumer Satisfaction Index (CSI) based on structural equation modelling (SEM). We propose an econometric model based on a set of causal equations and apply factor analysis (FA). We use panel data collected by an automotive magazine to apply our approach and assess its applicability in the field of marketing by formulating a competitive strategy for the Spanish automobile industry. The basic structure of the CSI is based upon well-established theories of and approaches to customer satisfaction. The structure based upon these theories consists of a number of latent factors, each of which is operationalised by multiple measures. We will argue that the CSI is a global evaluation constructed from the evaluations of its particular components. Apart from that, using panel data, this work tries to correct for the bias produced by the particular method of calculation employed by the magazine. The statistical source is a database published in the magazine Autopista. The magazine gathered information about satisfaction with the 130 cars covered and drew up an index for 25 variables which indicates the partial satisfaction of all the customers with each model of car. The magazine published the responses in twelve consecutive periods, which are processed in a cumulative way: the indices for each period are calculated as an average of the previous indices and the present one. Hence, the method used by the magazine produces a bias. The idea behind the model built in our study is that there may be interdependence between the satisfaction produced by the attributes of different car models. This allows us to represent a vertical structure with a hierarchy of attributes as a First-Order Factor Analysis (FOFA) model with a unique factor and as a Second-Order Factor Analysis (SOFA) model. The SOFA model provides more explicit information about the existing substructures. Thus, the SOFA model should achieve higher convergent validity than the FOFA model.



29. Two-stage model-robust and model-sensitive designs
Authors: Arvind Ruggoo (Katholieke Universiteit Leuven) and Martina Vandebroek (Katholieke Universiteit Leuven)
Keywords: Bayesian two-stage procedures, prior probabilities, posterior probabilities, bias, lack of fit
Format: presentation (Design of experiments)
Contact: arvind.ruggoo@econ.kuleuven.ac.be

D-optimality has been criticized for being too dependent on the assumed model, for making no provision for fitting higher-order terms when model inadequacy is diagnosed, and for not reducing the bias error when departures from the assumed model occur. To address this problem we propose, within the Bayesian paradigm, a two-stage design strategy. We assume that the true model is composed of a primary model – the one that will eventually be estimated – plus some potential terms. In the first stage a design criterion that facilitates improvement of the proposed model by detecting lack of fit is used. The design in the second stage attempts to minimize bias with respect to the potential terms and is obtained using a weighted criterion, the weights in the second stage being posterior model probabilities computed from first-stage data. The two-stage approach produces designs with significantly smaller bias errors than standard single-stage designs. They also improve coverage of the factor space and still have very good variance properties for the primary model.



30. Measurement system analysis for ordinal data
Authors: Wessel van Wieringen (IBIS UvA) and Jeroen de Mast
Keywords: Intraclass correlation coefficient; Gauge R&R study; Categorical data; Kappa method.
Format: presentation (Statistical modelling)
Contact: wwiering@science.uva.nl

The precision of a measurement system is its consistency across multiple measurements of the same object. The evaluation of the precision of measurement systems that measure on a bounded ordinal scale is studied. A bounded ordinal scale consists of a finite number of categories that have a specific order. Based on an inventory of methods for the evaluation of precision for other types of measurement scales, alternative approaches are proposed: among others, an approach based on a latent variable model, which is a variant of the intraclass correlation method, and a nonparametric one. The approaches are illustrated by means of examples.
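
For comparison with the alternatives proposed in the talk, here is the classical one-way intraclass correlation computed from repeated ratings, treating the ordinal scores as numeric (precisely the simplification the talk questions); the objects, raters and scores are simulated.

import numpy as np

rng = np.random.default_rng(4)
n_objects, n_raters = 20, 3
true_quality = rng.normal(0, 1, n_objects)
# Ordinal scores 1..5 obtained by thresholding a noisy latent judgement (simulated data)
latent = true_quality[:, None] + rng.normal(0, 0.5, (n_objects, n_raters))
scores = (np.digitize(latent, [-1.5, -0.5, 0.5, 1.5]) + 1).astype(float)

# One-way random-effects ICC(1): objects are "groups", repeated ratings are "replicates"
grand = scores.mean()
ms_between = n_raters * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_objects - 1)
ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n_objects * (n_raters - 1))
icc = (ms_between - ms_within) / (ms_between + (n_raters - 1) * ms_within)
print(f"ICC(1) = {icc:.2f}")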



31. DESIGN AND ANALYSIS OF EXPERIMENTS FOR PROCESSES DEPENDING ON TIME
Authors: Ivan Vuchkov (University of Chemical Technology and Metallurgy) and V. Tzotchev
Keywords: Design, models, parameter-dependent process
Format: presentation (Statistical modelling)
Contact: vuchkov@uctm.edu

Technological processes often depend on several variables, one of which is time. For example, a performance characteristic in a chemical reaction depends on the initial concentrations of the reagents. A question that is frequently asked is how the development of the process in time depends on the initial conditions of the reaction. In this paper we present models of process dynamics with coefficients depending on some external factors. Design of experiments and parameter estimation are considered for these models. Examples are given. One of them presents a model of the rheological properties of a rubber mixture used for the production of truck tires, which depend on the proportions of the mixture components. A D-optimal design is used and a model is given which presents the viscosity of the rubber mixture as a function of time and the proportions of the mixture components. An optimization procedure is used to find the optimal component proportions that provide a desired rheological curve. This makes it possible to shorten the vulcanization time while keeping the properties of the rubber mixture within specifications. A second example presents a model of a chemical reaction depending on the initial conditions of the reagents. An autoregressive model with coefficients depending on the initial conditions of the reaction is used as a model of the process.



32. DESIGN AND ANALYSIS OF STORING EXPERIMENTS. A CASE STUDY
Authors: Frøydis Bjerke (MATFORSK) and Are Halvor Aastveit (Agricultural University of Norway); Walter W. Stroup (University of Nebraska); Bente Kirkhus (Mills DA); Tormod Næs (MATFORSK)
Keywords: empirical modelling, response surface methods, linear mixed models, repeated measures, robustness, product development, viscosity, mayonnaise production
Format: presentation (Statistical modelling)
Contact: froydis.bjerke@matforsk.no

In order to achieve robust and stable food products of desired quality and characteristics, all stages of the food production process, including storage conditions, should be considered during product development projects. The paper describes a multi-stage production development project on low-fat mayonnaise, where experimental design was used to set up a pilot plant study involving ingredient factors, processing factors and storage factors, and their effect on mayonnaise viscosity. The paper discusses three alternative empirical modelling approaches to analyse the data – namely a simple graphics approach, a robustness approach and a mixed models approach – considering their multi-stratum (split-plot) structure, and repeated measurements of each sub-sample. In the case study, all information relevant for business decisions was achieved through the combination of graphical analysis and the robustness approach. This information could also be extracted by a practitioner, while the mixed model analysis clearly requires a graduate statistician. In order to obtain valid and useful information for the practitioner in an efficient way, the authors believe that, usually, the first two approaches would be sufficient. The more complex mixed model strategy might be advisable, if a deeper understanding is required or desired.



33. Quality improvement of a dehydration batch process with multivariate statistical methods: PLS versus Hierarchical PCR
Authors: Manuel Zarzo (Polytechnic University of Valencia - Dept. Applied Statistics) and Alberto Ferrer (Polytechnic University of Valencia - Dept. Applied Statistics)
Keywords: MSPC, batch, PLS, PCR
Format: presentation (Data mining)
Contact: mazarcas@eio.upv.es

Batch processes are difficult to control, because the duration of the different stages is usually not constant from batch to batch, and this affects the final product quality in a way that is not always well established. The industrial dehydration of a polymer has been studied. It is a vacuum evaporation that consists of three batch stages with unequal time duration from batch to batch. To improve the quality of the product, the residual water content should be minimised, but in 15% of the batches this content exceeds the tolerance limit and the causes are unknown. To diagnose the critical stages that require more accurate control, multivariate predictive models can be built between the residual water content, measured off line, and the process variables acquired on line by means of 10 electronic sensors. The temporal trajectories of these process variables have been arranged into a matrix of 2700 aligned variables by 18 batches. By using PLS (Partial Least Squares) on this matrix, one significant component has been obtained. Another approach used is hierarchical PCR, which consists of two stages: first, a principal component analysis has been carried out on the temporal trajectory from every sensor and every stage; afterwards, a multiple linear stepwise regression has been applied. Both methods have identified the variables most correlated with the final water content. The most important is the duration of the first stage of the dehydration. A predictive model has been proposed in order to detect, on line, batches outside normal operating conditions.
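
A minimal sketch of the PLS step on a batch-wise unfolded trajectory matrix, using scikit-learn; the 18 x 2700 matrix is filled with random numbers standing in for the aligned sensor trajectories, and the residual-water-content response is simulated, so only the mechanics are illustrated, not the plant data or conclusions.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n_batches, n_aligned_vars = 18, 2700          # batch-wise unfolded trajectories (simulated here)
X = rng.normal(size=(n_batches, n_aligned_vars))
y = 0.5 * X[:, 0] + rng.normal(0, 0.2, n_batches)   # stand-in for residual water content

pls = PLSRegression(n_components=1, scale=True)
pls.fit(X, y)
print("R^2 of the one-component PLS model:", round(pls.score(X, y), 2))
# Variables with the largest absolute weights point to the most influential trajectory segments
top = np.argsort(np.abs(pls.x_weights_[:, 0]))[-5:]
print("most influential aligned variables:", top)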



34. Statistical analysis of the final quality parameters in a chemical batch process as a first step for quality improvement
Authors: Manuel Zarzo (Polytechnic University of Valencia - Dept. Applied Statistics) and Alberto Ferrer (Polytechnic University of Valencia - Dept. Applied Statistics)
Keywords: repeatability, control charts
Format: poster (Six Sigma and quality improvement)
Contact: mazarcas@eio.upv.es

Batch chemical processes are difficult to control, and the critical points that require better control to avoid poor final product quality are often unknown. Data from the industrial production of the polymer PPOX (polypropylene oxide) have been supplied by a chemical company that wants to avoid batches out of specification. This batch process, which lasts about 24 hours, consists of 4 stages, and 52 process variables are controlled on line. At the end of the process, two quality parameters are analysed off line in the laboratory: the hydroxyl index and the residual water content. A repeatability study has shown that the uncertainty due to the analytical determination of both parameters is acceptable compared with the specification limits. A statistical study has been carried out on these parameters for 86 batches in order to analyse quality variability. The hydroxyl index follows a normal distribution, and two populations with different means have been detected. By using quality control charts (Shewhart and CUSUM), several changes of trend have been observed, revealing that this parameter is statistically out of control. The residual water content follows a log-normal distribution and is statistically under control, as no trends have appeared. The information obtained from the quality parameters is the starting point for subsequent analysis of the process variables, in order to identify the key process variables affecting the variability of the quality parameters and those responsible for the trends detected in the hydroxyl index.
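
The sketch below shows the two chart types mentioned above (an individuals Shewhart chart and a tabular CUSUM) applied to a simulated hydroxyl-index-like series; the values and the chart constants are conventional illustrations, not the company's data.

import numpy as np

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(56.0, 0.4, 50), rng.normal(56.3, 0.4, 36)])  # simulated hydroxyl index

# Individuals (Shewhart) chart: limits from the average moving range of the first 50 batches
mr_bar = np.mean(np.abs(np.diff(x[:50])))
centre, sigma = x[:50].mean(), mr_bar / 1.128
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

# Tabular CUSUM with reference value k = 0.5*sigma and decision interval h = 5*sigma
k, h = 0.5 * sigma, 5 * sigma
cplus = np.zeros(len(x))
for t in range(1, len(x)):
    cplus[t] = max(0.0, x[t] - (centre + k) + cplus[t - 1])
print("Shewhart signals at batches:", np.where((x > ucl) | (x < lcl))[0])
print("first CUSUM signals at batches:", np.where(cplus > h)[0][:5])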



35. Statistics, Dynamics and Quality – Improving BTA deep-hole drilling
Authors: Winfried Theis (SFB 475, University of Dortmund) and Claus Weihs (Dep. Statistics, University Dortmund), Oliver Webber (SFB 475, ISF, University Dortmund)
Keywords: statistical modeling, Quality improvement, process control, design of experiments
Format: presentation (Process modelling and control)
Contact: theis@statistik.uni-dortmund.de

In this talk we will present how statistical experimental design, time series analysis and non-linear dynamic models have been applied to gain a deeper insight into the BTA deep-hole drilling process. BTA deep-hole drilling is used to produce long holes with a length-to-diameter ratio larger than 5. This process normally produces holes of high quality with regard to straightness, smoothness of the boring walls and roundness. However, two dynamic disturbances, chatter and spiralling, are sometimes observed in this process. While chatter results mainly in increased wear of the cutting edges of the tool but may also damage the boring walls, spiralling damages the workpiece severely. In our study we applied experimental design to gain insight into the connection between process parameters and quality measures like roughness and roundness, and observed several time series during these experiments. This was a very helpful approach because we observed all kinds of different dynamic disturbances. In this talk we will focus on chatter, which is dominated by a few frequencies. Two models are proposed to describe the variation of the amplitudes of these frequencies: on the one hand a stochastic differential equation, and on the other hand a descriptive model based on piecewise periodograms. In the latter model, knowledge of the underlying experimental design was used to distinguish between effects of the assembly of the machine and effects of the dynamics and stochastic influences. Because this model is completely data-driven, it can be used as a starting point for the development of a suitable dynamic model of the process.
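
The piecewise-periodogram idea can be illustrated with a short-time spectrogram that tracks the power at a chatter-like frequency over the drilling time; the sampling rate, the 230 Hz component and its growing amplitude are invented for the example and do not describe the authors' measurements.

import numpy as np
from scipy.signal import spectrogram

fs = 2000.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic drilling signal: a chatter-like frequency whose amplitude grows over time, plus noise
chatter = (0.1 + 0.3 * t / t.max()) * np.sin(2 * np.pi * 230 * t)
signal = chatter + 0.2 * np.random.default_rng(7).normal(size=t.size)

f, seg_times, Sxx = spectrogram(signal, fs=fs, nperseg=1024)   # piecewise periodograms
chatter_bin = np.argmin(np.abs(f - 230))
print("power at ~230 Hz in the first and last segment:",
      round(float(Sxx[chatter_bin, 0]), 4), round(float(Sxx[chatter_bin, -1]), 4))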



36. STATISTICAL THINKING WITHOUT STATISTICS: WHY WE NEED IT AND WHAT WE SHOULD DO?
Authors: Vladimir Shper (Russian Electrotechnical State Institute) and Yu. Adler
Keywords: Statistics, thinking, education, consulting
Format: presentation (Statistical consulting)
Contact: shper@vei.ru

This paper continues our reflections on Statistical Thinking (ST) started last year at the 2nd ENBIS conference. One of our conclusions at that time was formulated as follows: "Mankind needs statistical thinking without statistics". This time our attention is focused on the following questions:
• How is ST related to other types of "thinking"?
• Why do people have so much trouble with ST?
• What should we do in order to change this situation?
• What is necessary to implement these changes?
• What is the role of statisticians in these changes?
• What can our statistical community do to facilitate this process?
The main conclusion of this paper is as follows: we need to revise all systems of education and learning, from early childhood to adult retraining, and at all stages of this process we must teach people to understand ST and to use this understanding to improve our lives.



38. A statistical texture analysis method for material defects detection
Authors: Petr Volf (Institute of Information Theory and Automation) and Ales Linka (Technical University in Liberec)
Keywords: statistics, quality control, material defects detection, texture analysis, signal analysis
Format: presentation (Statistical modelling)
Contact: volf@utia.cas.cz

The present contribution proposes a method for material surface defect detection. The approach combines statistical procedures of pattern analysis with statistical techniques of signal processing. The first stage of the method consists of an off-line (learning) analysis and uses the principles of image segmentation or thresholding, searching simultaneously for the optimal selection of levels as well as for their optimal number. Markov random field models and an appropriate technique for their optimization (e.g. MCMC) are used. Simultaneously, both the original image (after basic preprocessing) and the segmented image are characterized by a set of parameters (e.g. of a local Markov field model, local statistics). The series of these parameters obtained by sequential analysis of the material surface (examined in a moving-window manner) indicates the regions of actual changes in the model characteristics, that is, the regions of potential defects. Such an analysis can be performed rather quickly, in an on-line manner. On the other hand, each type of defect needs its proper combination of texture analysis and statistical series analysis procedures. Hence, a catalogue of typical defect patterns is developed and used as a learning database. The application deals with different types of textile materials and their defects.
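
A minimal sketch of the moving-window stage only (the Markov random field segmentation is not reproduced): compute a local statistic over the surface image and flag windows whose statistic deviates from the values learned on defect-free material; the image, window size and threshold are invented for illustration.

import numpy as np

rng = np.random.default_rng(8)
image = rng.normal(0.5, 0.05, (120, 120))        # simulated defect-free texture
image[60:70, 40:55] += 0.3                       # an artificial surface defect

win = 10
rows, cols = image.shape[0] // win, image.shape[1] // win
local_mean = image[:rows * win, :cols * win].reshape(rows, win, cols, win).mean(axis=(1, 3))

# Learn in-control behaviour from the first rows of windows, then flag deviating windows
ref_mu, ref_sd = local_mean[:3].mean(), local_mean[:3].std(ddof=1)
flags = np.abs(local_mean - ref_mu) > 4 * ref_sd
print("windows flagged as potential defects (row, col):", list(zip(*np.where(flags))))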



124. Statistics in manufacturing problem solving
Authors: Robin Nicolson (GlaxoSmithKline) and Nicola Fletcher, James Thomson, Yousuf Rafique
Keywords: Design of Experiments, Statistical Investigation, Manufacturing process data
Format: poster (Six Sigma and quality improvement)
Contact: rn83772@gsk.com

Often the role of the statistician is to give focus to data analysis in a crisis environment. One such example is presented, illustrating the use of SPC and DoE tools to bring a substantial benefit to the business. The process was taken from non-robust to robust and capable in a period of six months. The benefit of the statistical input to this project has been recognised at both site and European level. The role of the statistics team on site is also discussed.



125. The Statistician in a Manufacturing Environment - what keeps us successful!
Author: Robin Nicolson (GlaxoSmithKline)
Keywords: Six Sigma, Culture, Consultancy
Format: presentation (Six Sigma and quality improvement)
Contact: rn83772@gsk.com

Quality and compliance are keywords in the manufacturing environment, in both regulated and non-regulated industries. The statistician's role is to aid in the analysis and understanding of process data so that we can increase the odds of making the correct decision. Large quantities of data are collated from the manufacturing process and laboratory testing (and non-manufacturing areas). Statistical tools are utilised in a number of fields such as investigations, optimisation and process modelling. The success of "Six Sigma" across the industrial landscape has changed the way the statistician is viewed, or has it? The changing role of industrial statistics will be discussed, along with areas of application and what we need to remain successful and add value to our business.



40. Control charts in semiconductor manufacturing
Authors: Ramun Kho (Philips Semiconductors) and Joop Verwijst
Keywords: SPC, large scale application, sources of variation
Format: presentation (Process modelling and control)
Contact: ramun.kho@philips.com

Integrated circuits (ICs) consist of thousands of transistors connected to each other by metal lines with a width of less than one micron. It should be clear that control of variation is essential. Moreover, the whole process flow takes several weeks, so timely detection of nonconformance is important to prevent scrap production. Therefore, control charts are used extensively at Philips Semiconductors. Semiconductor manufacturing involves hundreds of process steps. For each step a specialized team of engineers, technicians and operators is responsible for output, maintenance and stable performance. They are also responsible for the design and maintenance of the control charts they use. Several ICs (100 - 10000) are made on a so-called wafer, and several wafers (25 - 50) are grouped in one lot. The way ICs are made results in at least three sources of variation: (1) the between-lot variation, (2) the between-wafer (within-lot) variation and (3) the between-position (within-wafer) variation. The lot effect is always stochastic. The wafer and position effects may be fixed or stochastic, depending on the process step. Given the various steps, teams and circumstances, the applied (consulting) statistician needs to weigh simplicity, standardization and requirements from quality systems on the one hand against thoroughness and precision on the other. The presentation will discuss the use of (a) standard Shewhart control charts, (b) control charts based on modelling and estimation of the variance components and (c) the approach of the Industrial Statistics group of Philips Semiconductors Nijmegen.
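
As an illustration of approach (b), the sketch below estimates the three variance components from a balanced nested sample of simulated measurements using standard nested-ANOVA mean squares; the component sizes and the lot/wafer/position sample sizes are assumptions, not Philips data.

import numpy as np

rng = np.random.default_rng(9)
n_lots, n_wafers, n_pos = 30, 5, 4
lot_eff = rng.normal(0, np.sqrt(0.8), (n_lots, 1, 1))
wafer_eff = rng.normal(0, np.sqrt(0.3), (n_lots, n_wafers, 1))
pos_eff = rng.normal(0, np.sqrt(0.1), (n_lots, n_wafers, n_pos))
y = 100 + lot_eff + wafer_eff + pos_eff            # simulated measurements, nested lot/wafer/position

wafer_means = y.mean(axis=2)
lot_means = y.mean(axis=(1, 2))
ms_pos = ((y - wafer_means[:, :, None]) ** 2).sum() / (n_lots * n_wafers * (n_pos - 1))
ms_wafer = n_pos * ((wafer_means - lot_means[:, None]) ** 2).sum() / (n_lots * (n_wafers - 1))
ms_lot = n_wafers * n_pos * ((lot_means - y.mean()) ** 2).sum() / (n_lots - 1)

# Method-of-moments estimates from the expected mean squares of the nested random model
var_pos = ms_pos
var_wafer = (ms_wafer - ms_pos) / n_pos
var_lot = (ms_lot - ms_wafer) / (n_wafers * n_pos)
print(f"between lot {var_lot:.2f}, between wafer {var_wafer:.2f}, between position {var_pos:.2f}")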



41. An Entropy Guided Approach to One Run at a Time Experimentation
Author: John Tyssedal (Department of Mathematical Sciences)
Keywords: Efficient designs; entropy; two-level designs
Format: presentation (Design of experiments)
Contact: tyssedal@stat.ntnu.no

This paper takes a closer look at the possibility of doing one-run-at-a-time experimentation, where for each run the information from the previous ones is taken into account in designing the next. Such a procedure necessarily abandons the possibility of blocking and randomization. These actions, performed in order to reduce error and validate inferential procedures, are, however, not equally important or even desirable in all experimental situations. For example, in computer experimentation with a deterministic outcome given the input, as is normally the case, blocking and randomization are actually superfluous. A patient receiving several drugs may feel some discomfort: in order to arrive at a medication with fewer or no side-effects, experiments concerning drugs and their doses need to be carried out, and it could be rather fatal to do this in a randomized manner and not evaluate the result after each run. In other cases, knowing that the disturbances are low and the cost of experimentation is high, it is desirable to use as few runs as possible. In these and other situations there is a need for collecting information in a different way from the traditional one, and also for a systematic way to perform such experimentation. A route to experimentation is put forward in which one first tries to decide which factors are active, and thereafter estimates their effects. In the context of communication, or the transmission of messages, entropy is used as a measure of the expected information, Shannon (1948). By analogy, the entropy of a run given the previous ones will be the guideline for designing the next experimental run.



42. Statistical Process Control with application on Churn Management
Author: Magnus Pettersson (Statistikkonsulterna AB)
Keywords: Statistical process control, Churn management, SPC, Quality control, Early warning, Change point detection, Control chart
Format: presentation (Business and economics)
Contact: magnus.pettersson@statistikkonsulterna.se

The event that a customer replaces one provider of a service or merchandise with another is called churn. Especially in competitive business environments, such as telecommunications, insurance, banking, hotels and mail order, customers can easily leave one company – and they really do. Since the costs of recruiting new customers are higher than the costs of retaining them, it is crucial for companies in these trades to monitor their customer population in order to keep churn rates low. Statistical process control (SPC) methods were developed to cover the needs of monitoring industrial processes and intensive care patients. They are based on procedures where data are analysed automatically and on line. When results indicate that the process is out of control, an alarm is raised to alert an engineer or physician, who can take corrective action to get the process back in control. Several examples of successful applications beyond the original ones can be given. This paper discusses the use of SPC methods as a means to enhance the precision of detecting increasing churn rates. We show that SPC methods can give market analysts a powerful tool for tracking customer movements and churn. An early warning system (EWS), based on the same ideas used in the process industries, will give the foresightful company a longer time to react against churn and hence an advantage over its competitors. In the examples discussed in this paper we monitor usage, in order to detect decreasing volumes that indicate churn, and complaint records, in order to detect quality problems leading to churn. The data can be extracted from internal databases, and analysed and reported on line.
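
One simple concrete instance of such an early warning chart is a p-chart on the weekly churn rate with limits computed from a stable reference period; the customer counts and churn rates below are invented, and the paper's own examples (usage volumes and complaint records) would be monitored analogously.

import numpy as np

rng = np.random.default_rng(10)
customers = np.full(40, 20000)                              # customers at risk each week (assumed)
churners = rng.binomial(customers, 0.010)                   # stable weeks
churners[30:] = rng.binomial(customers[30:], 0.013)         # churn rate starts creeping up

p_bar = churners[:30].sum() / customers[:30].sum()          # reference (in-control) churn rate
p = churners / customers
ucl = p_bar + 3 * np.sqrt(p_bar * (1 - p_bar) / customers)  # p-chart upper limit per week
alarm_weeks = np.where(p > ucl)[0]
print(f"reference churn rate {p_bar:.3%}; alarms in weeks {alarm_weeks}")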



43. New moves in industrial statistics
Author: John Logsdon (Quantex Research Ltd)
Keywords: Industry Internet Clustering Collaboration
Format: presentation (Statistical consulting)
Contact: j.logsdon@quantex-research.com

Statistics should have a powerful – and ever more important – rôle to play as technology becomes more complex, but statisticians are not welcomed with open arms: industry is not queuing up to ask for help and does not adopt the full power of statistical techniques. While other approaches gain in popularity, should we sit idly by and watch with amusement as the river of progress casts our discipline onto the banks? Should we oppose such movements or try to be inclusive? We investigate the reasons for this communications failure and demonstrate the point by considering various economies and industrial sectors. We must answer at least the following questions: Why should I use a statistician? Where can I find a statistician? What do I expect from a statistician? How can we work together? What tools will we need? What professional standards are required? What can s/he do for my business? We detail an inclusive approach to bring together the resources needed – over the internet as necessary – so that industry can use the right tool for the job. Only when potential clients are persuaded of the case – and the bill and delivery are acceptable – will we as statisticians gain a substantial entry into this most fruitful area of work. The approach will enable all members of ENBIS to become even more involved. While it is aimed most particularly at the Statistical Consulting group, it has applications throughout industrial and other branches of statistics.



44. Six Sigma Perspectives on Stochastics for the Quality Movement
Authors: John Shade () and Ron Kenett and Ramalhoto
Keywords: Six Sigma, Consulting Patterns, SQM Library
Format: presentation (Statistical consulting)
Contact: jshade@theaitgroup.com

Large gains in business performance are possible using various methodologies that have been developed over the last 100 years or so. Methodology leaders in business and industry, such as Master Black Belts and Black Belts in six sigma programs, should benefit from a clear and comprehensive assessment of best practices in order to better design improvement efforts and handle unexpected barriers to progress in six sigma projects. In this paper we expand on the idea of the Stochastics for the Quality Movement (SQM) Library to handle such issues. The concept of “statistical consulting pattern” presented in Kenett, Ramalhoto and Shade (2003) is expanded, with specific implications to six sigma.



72. Tools for analysing designed multiresponse experiments
Authors: Øyvind Langsrud (MATFORSK) and Kjetil Jørgensen (TINE BA); Janne Haugdal (TINE BA)
Keywords: Experimental design; Multivariate analysis; Principal components; Unbalanced factorial design; MANOVA
Format: presentation (Design of experiments)
Contact: oyvind.langsrud@matforsk.no

In industrial and scientific experimentation, multiple and highly correlated responses are common (multi-channel instruments). In such cases it is useful to combine design-of-experiments methodology and multivariate techniques. In this area new methodology has been developed at MATFORSK and a related Windows program has been made available (www.matforsk.no/ola/program.htm). This presentation aims to illustrate an approach for analysing multiresponse experiments, and the new software will be demonstrated. The design variables can be continuous and/or categorical and the design need not be perfectly balanced. In the univariate special case some modifications of the standard general-linear-models analysis are recommended. Proper Type II sums of squares (SS) are more efficient and reliable than Type III SS. Instead of reporting the SS's directly, they are divided by the total SS to obtain a sort of explained variance measure. Instead of focusing on parameter estimates, significant effects are illustrated by (adjusted) mean values and predictions at various level combinations. In the multivariate case the explained variance measure is based on the univariate SS's summed over all responses. Significance tests are performed using the new tools, 50-50 MANOVA and rotation testing (adjusted p-values). Multivariate mean values and predictions can be illustrated in a principal component score plot or directly as curves. In this talk, the demonstration and discussion will be based on examples from production of cheese and fish-meat loaves with multiple responses from spectroscopy, rheology and sensory evaluation. Analysis of data generated by the process simulation game from Greenfield Research will also be included.
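
In the univariate case the explained-variance measure described here is the effect sum of squares divided by the total sum of squares, and in the multivariate case the univariate sums of squares are first summed over the q responses (the notation below is mine, not the authors'):

    R^2_A = \frac{SS_A}{SS_{\mathrm{total}}}
    \qquad\text{and, multivariately,}\qquad
    R^2_A = \frac{\sum_{j=1}^{q} SS_{A,j}}{\sum_{j=1}^{q} SS_{\mathrm{total},j}}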



46. Variation Mode and Effect Analysis
Authors: Alexander Chakhunashvili (Chalmers University of Technology) and Per Johansson (Volvo/Chalmers University of Technology), Bo Bergman (Chalmers University of Technology)
Keywords: Sources of variation, key product characteristics, FMEA, robust design
Format: presentation (Reliability and safety)
Contact: alecha@mot.chalmers.se

In industry, unwanted variation is a serious problem. This was realized already by Shewhart in the early 1930s, but it is still a reality, as reflected in the savings made through numerous variation reduction initiatives, often run under the heading of Six Sigma. Although traditionally the major focus of these initiatives has been on variation reduction in manufacturing, in recent years a growing interest in managing variation in the early phases of product development has been observed. This growing interest is also indicated by surveys conducted in Sweden and in the U.S. These surveys also make clear that only a few systematic techniques, such as the P-diagram, orthogonal arrays, signal-to-noise ratios, and key characteristic flowdown, are used in industry. The limited use of these techniques indicates that some elements for successful application are missing. At the same time, Failure Mode and Effect Analysis (FMEA) has been established in industry as a useful method for identifying possible failure modes and assessing their effects. However, even the FMEA has shown its limitations. Namely, the FMEA procedure is discrete in nature and it takes into account only one source of variation, i.e. the inner variation caused by a failure of a part or function of the product. However, over a product's lifecycle, its characteristics may be exposed to numerous sources of variation, including environmental factors, deterioration, and manufacturing variation. Thus, from a robustness and reliability perspective, a systematic method for managing a wide range of sources of variation is needed. In this paper, we introduce an engineering method, Variation Mode and Effect Analysis (VMEA), developed to systematically identify sources of variation and assess their effects on key product characteristics (KPCs). While FMEA is a failure-oriented approach, VMEA places a stronger emphasis on variability. Conducted on a systematic basis, the goal of VMEA is to identify and prioritize those sources of variation that can significantly contribute to the variability of KPCs and might yield unwanted consequences with respect to safety, compliance with governmental regulations and functional requirements. As a result of the analysis, a Variation Risk Priority Number (VRPN) is calculated, quantifying the effect of sources of variation on KPCs and indicating the order in which variation-managing actions must be carried out. The VRPN directs attention to the areas where excessive variation might be detrimental. The presented method is complemented with an illustrative example from Volvo Powertrain.



47. Design of a Fuzzy V-Mask Model and an Application - A Case of Price Wave Detection of Funds
Authors: Jaw-Sin Su (Chinese Culture University) and Chang-Chin Wu
Keywords: Cusum Chart, V-mask, Membership Function, Open-End Funds
Format: presentation (Process modelling and control)
Contact: jawsin@faculty.pccu.edu.tw

The V-mask CUSUM chart is developed so as to obtain a type II membership function of a fuzzy set; the result is called a fuzzy V-mask model. When the lead distance and the angle of the mask are given, the grades of the membership function and its elements can be obtained. To demonstrate the use of the fuzzy V-mask model in a practical field, we choose the price movements of foreign-securities open-end funds in Taiwan as an example. The type II membership function is the fundamental case used to detect price wave motion; if a rising price is a persistent change, a type S function is used to detect the price wave motion, and if a falling price occurs, a type Z function is used. The conclusion is that the fuzzy V-mask model can be used to detect the price waves of open-end funds at the current time and can show the grade of movement from previous times up to the current time. The model can be extended to other practical fields.
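
For reference, a minimal (non-fuzzy) sketch of the underlying V-mask CUSUM: with scale factor A, lead distance d and half-angle theta, the mask is equivalent to a tabular CUSUM with reference value k = A*tan(theta) and decision interval h = A*d*tan(theta); the target mu0, the parameter values and the simulated prices are assumptions, and the fuzzy membership grades of the paper are not reproduced.

    import numpy as np

    mu0, A, d, theta = 100.0, 1.0, 8.0, np.radians(20)
    k, h = A * np.tan(theta), A * d * np.tan(theta)          # equivalent tabular-CUSUM constants

    def cusum_signals(x):
        c_plus = c_minus = 0.0
        for i, xi in enumerate(x, start=1):
            c_plus = max(0.0, c_plus + (xi - mu0) - k)       # detects upward (rising-price) shifts
            c_minus = max(0.0, c_minus + (mu0 - xi) - k)     # detects downward (falling-price) shifts
            if c_plus > h or c_minus > h:
                yield i

    prices = 100 + np.cumsum(np.random.default_rng(0).normal(0.3, 1.0, 40))   # illustrative series
    print("signals at observations:", list(cusum_signals(prices)))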



49. Industry academia interaction - examples and ways of improvement
Authors: Maria Fernanda Ramalhoto (Instituto Superior Tecnico) and C. McCollins (Nottingham Trent University), O. Evandt (ImPRO Oslo), R. Göb (Universität Würzburg), A. Pievatolo (CNR-IAMI), D. Stewardson (University of Newcastle)
Keywords: Industry academia interaction, statistical consulting
Format: presentation (Statistical consulting)
Contact: framalhoto@math.ist.utl.pt

Interaction of industry and academia is a prerequisite for the success of the European quality movement. Stimulating, structuring and managing such interaction is a basic objective of ENBIS and the related EU Thematic Network Pro-ENBIS. The "industrial visit" is a key concept of these activities. A structured scheme for industrial visits was introduced by the authors in an earlier paper in 2003. In the present paper, the scheme is illustrated by case examples. Ways of achieving continuous improvement are considered.



50. The implementation of Six Sigma in a hospital
Authors: Ronald Does (IBIS UvA) and Jaap van den Heuvel (Red Cross Hospital)
Keywords: Six Sigma, Health Care, Green Belts, Champion
Format: presentation (Six Sigma and quality improvement)
Contact: rjmmdoes@science.uva.nl

The Red Cross Hospital in Beverwijk, the Netherlands, is a 384-bed hospital that offers a wide array of patient services. It also has a free-standing burn injuries center. The hospital has approximately 900 employees. The Red Cross Hospital was the first hospital in the Netherlands with a quality system based on ISO 9000. The hospital began implementing the Six Sigma methodology in 2002. The process began with an Executive Training for management and a Green Belt (GB) training for 15 middle managers and other staff. Seven GB projects were initially started, in the areas of accounts receivable days, logistics, invoicing, medication management, temporary workers and hospital stay. In February 2003 the end review was done and the savings were three times higher than targeted. The second wave of GB training is under way and the third is scheduled for the second half of 2003. In this presentation we will discuss the way the Red Cross Hospital implements Six Sigma and the lessons learned.



51. Research Projects in Industrial Statistics at IBIS UvA
Authors: Ronald Does (IBIS UvA) and Jeroen de Mast (IBIS UvA)
Keywords: statistical consultancy, research
Format: workshop (Statistical consulting)
Contact: rjmmdoes@science.uva.nl

The research work in industrial statistics at the University of Amsterdam is co-ordinated by the Institute for Business and Industrial Statistics (IBIS UvA). This institute combines scientific research with consultancy activities. Role of research at IBIS UvA: IBIS UvA aims to make valuable contributions to the scientific development of industrial statistics. On the one hand, the staff members of the institute spend about 40% of their time doing research; on the other hand, IBIS UvA supervises and sponsors several PhD research projects each year. IBIS UvA has set itself the target of producing several publications per year. The background of the research activities is that the combination of scientific research and its application in consultancy results in interesting interactions: many topics for research have been inspired by problems encountered in practice, whereas clients in industry have immediate benefits from the results of research. Developments of research in the last two years: In the last decade attention in quality control - the application area of industrial statistics - has shifted from statistical process control (SPC) to the Six Sigma programme, a methodology for conducting improvement projects based on statistical investigation. As a result, the research done at IBIS UvA - which used to be focused on SPC and control charts - has evolved in the direction of topics which play a role in the Six Sigma methodology. A second development has been that IBIS UvA has invested in establishing itself in a central position in an international network for industrial statistics. This has led to several interesting collaborations. IBIS UvA has had a leading role in the establishment of the European Network for Business and Industrial Statistics (ENBIS; see www.enbis.org) and participates in Pro-ENBIS, an EU-funded thematic network. Topics of research: control charts and SPC; measurement system analysis; quality improvement strategies; statistical tools for exploratory studies. Other: IBIS UvA considers publications in the popular scientific literature important and aims to have several articles published each year. The institute is pleased that it can welcome Prof. Søren Bisgaard for several weeks per year to co-operate in his research.



52. A PRACTICAL APPROACH TO MULTI-LEVEL ANALYSIS WITH A SPARSE BINARY OUTCOME
Author: Jayne Fountain (University of Leeds, UK)
Keywords: Multi-level modelling, sparse binary outcome, random, mixed effects
Format: presentation (Statistical modelling)
Contact: medjmf@leeds.ac.uk

A phase III randomised clinical trial was undertaken to assess the relative value of laparoscopically assisted hysterectomy compared to conventional methods, i.e. abdominal and vaginal hysterectomy. 1,380 patients were recruited into the study (876 in the abdominal trial, 504 in the vaginal trial) from 43 surgeons recruiting between 1 and 115 patients each. The primary endpoint of the study was the percentage of patients having at least one major complication. Only 9.6% of patients had a major complication. The main analysis compared the difference between procedures within each trial using a Pearson’s chi-squared test. There was a highly significant difference within the abdominal trial but not within the vaginal trial. Since patients are nested within ‘surgeon’, further mixed modelling was undertaken to investigate a potential surgeon effect in the comparison of procedures, taking into account patient-level covariates. The approach to this analysis will be presented, including the model building process using both SAS PROC GENMOD and PROC NLMIXED, and how the problems with a sparse binary outcome were overcome. The analysis did not identify a significant surgeon effect.
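
As a hedged illustration of the nesting structure described (a generic random-intercept logistic model, not necessarily the exact parameterisation fitted in SAS), with patient i treated by surgeon j, treatment indicator t_ij and patient-level covariates x_ij:

    \operatorname{logit}\Pr(y_{ij}=1) \;=\; \beta_0 + \gamma\, t_{ij} + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j,
    \qquad u_j \sim N(0, \sigma_u^2),

where u_j is the surgeon random effect whose variance sigma_u^2 measures the surgeon effect of interest.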



53. An Application of Spatial Regression Models to the Surface Grinding Process
Author: Nadine Henkenjohann (Universität Dortmund)
Keywords: spatial regression model, process optimization, complex relationships
Format: presentation (Design of experiments)
Contact: henkenjo@statistik.uni-dortmund.de

The peripheral longitudinal surface grinding process with conventional corundum grinding wheels is often used to manufacture high precision surfaces. Thus, it is of interest to optimize this process with respect to the surface quality, which is measured by the surface roughness. Although the classical optimization approach relies on second-order polynomials, experience with other metal cutting processes indicates that it may be more suitable to choose models that allow for more complex relationships. One such class of models is given by the spatial regression models (O'Connell and Wolfinger, 1997). Unlike second-order polynomials, spatial regression models (SRMs) are capable of approximating curved ridges. These models provide smooth, data-faithful approximations of the unknown response surface and permit realistic predictions of the response in the entire design space. The model considers sets of k explanatory variables as k-dimensional data points, which are assumed to depend on each other through their spatial distance. Thus, spatial heterogeneity is modeled not only by the large-scale dependence (or trend) but also by the correlation structure of the k-dimensional data points. In the case of the second-order approach, the central composite design provides a good distribution of sampling points and possesses several desirable properties. However, when fitting SRMs, space-filling designs like the maximin distance design are more suitable. In an experiment that was planned to optimize the grinding process, both designs, the central composite design and the maximin design, were conducted. Subsequently, the second-order polynomial and the SRM were fitted and compared with respect to their predictions. The results show that predictions from the SRM are better than those from the second-order polynomial. Therefore, for our experiment the application of SRMs is preferable to the classical approach.
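
A generic form of such a spatial regression model (my notation, consistent with the description above rather than copied from the paper) writes the response at a point x in the k-dimensional factor space as a trend plus a spatially correlated deviation:

    y(\mathbf{x}) = \mathbf{f}(\mathbf{x})^{\top}\boldsymbol{\beta} + Z(\mathbf{x}) + \varepsilon,
    \qquad
    \operatorname{Cov}\bigl(Z(\mathbf{x}_i), Z(\mathbf{x}_j)\bigr) = \sigma^2\,\rho\bigl(\lVert\mathbf{x}_i-\mathbf{x}_j\rVert;\boldsymbol{\theta}\bigr),

where f(x) is a low-order trend and rho is a correlation function (e.g. Gaussian or exponential) that decays with the distance between design points.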



54. Hotelling's T2 Chart based on Robust Estimators of Parameters
Authors: Sergio Yañez (Universidad Nacional de Colombia-Medellín) and Nelfi González, Alberto Vargas
Keywords: Multivariate Process Control, Hotelling's T2 control chart, robust estimator, outliers, masking
Format: presentation (Process modelling and control)
Contact: syc@geo.net.co

In Phase I, Stage 1 of multivariate process control, the estimation of the process parameters and control limits of a Hotelling's T2 control chart with n individual observations can be severely affected by the presence of outliers. We propose procedures to construct robust estimators of the mean vector and covariance matrix based upon the MVE (Minimum Volume Ellipsoid) and the biweight S estimators, for the case p=2 (bivariate process). We run simulations with different levels of contamination in both the mean vector and the variance-covariance matrix. Our results show the good performance of these estimators in the presence of several outliers, in contrast to the usual estimates of the mean vector and the covariance matrix, which suffer from the masking effect.
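
For reference, the charted statistic with robust plug-in estimates takes the usual quadratic form (notation mine); the MVE- or S-based location and scatter estimates simply replace the sample mean and covariance:

    T_i^2 = (\mathbf{x}_i - \hat{\boldsymbol{\mu}}_R)^{\top}\,\hat{\boldsymbol{\Sigma}}_R^{-1}\,(\mathbf{x}_i - \hat{\boldsymbol{\mu}}_R),

where the robust estimates are far less distorted by multiple outliers than the classical ones, which avoids the masking effect mentioned above.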



56. Application of Six Sigma Methodology in Gamesa Aeronáutica
Authors: Lourdes Pozueta (Universitat Politècnica de Catalunya) and José M. Lario (Gamesa Aeronáutica)
Keywords: Six Sigma, design of experiments, quality improvement, wings, quality in rivets
Format: presentation (Six Sigma and quality improvement)
Contact: lourdes.pozueta@upc.es

Gamesa Aeronáutica is a company that makes wings for airplanes. Two people from the company participated in a Six Sigma programme for Black Belts offered in 2002 jointly by the Universitat Politècnica de Catalunya and the Universidad de Navarra at the San Sebastián campus. One of the requisites for becoming a BB is to present, at the end of the 100-hour course, a successful project applying the DMAIC methodology. People attending the course are assigned a tutor from the university in case they need help in applying the methodology or the statistical tools. We would like to present here the main facts of one of the projects. One of the most common operations in the plant is riveting, since a wing has nearly 7,500 rivets. These rivets join parts, and fluids must not leak through them. Poor quality in rivets affects internal and external clients and causes an important cost to the company. During 4-5 months a BB led the project and obtained important results. The main statistical tool used in the project was design of experiments: we ran a 2^4 design with two responses, the number of rivets with looseness and the number of high aerodynamic rivets. As a consequence we changed the working conditions in the shop and drastically reduced the number of non-conformities, hours of repair, etc. We started at a 3.3 sigma level and reached a 5 sigma level, saving 95,000 euros in less than 6 months.



104. Time Series Model of the Number of Claims in Korean Automobile Insurance
Authors: Kang Sup Lee (Dankook University) and Young Ja Kim (Dankook University)
Keywords: ARIMA(p,d,q)x(P,D,Q)s, SACF, SPACF, number of claims, automobile insurance
Format: presentation (Statistical modelling)
Contact: leeks@dankook.ac.kr

In this article, a model for the number of reported claims in Korean automobile insurance is studied. Using the Box-Jenkins method, we found that an ARIMA(3,1,0)x(0,1,0)_12 model is appropriate for the number of reported claims. In addition, a truncated normal distribution is applied to solve the problem of negative numbers of claims appearing in the IBNR (incurred but not reported) estimation.
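
A minimal sketch, assuming a monthly series of claim counts is available in a file, of fitting the ARIMA(3,1,0)x(0,1,0)_12 model mentioned above with statsmodels (the file and column names are assumptions):

    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    claims = pd.read_csv("claims.csv", index_col=0, parse_dates=True)["n_claims"]  # hypothetical file
    model = SARIMAX(claims, order=(3, 1, 0), seasonal_order=(0, 1, 0, 12))
    result = model.fit(disp=False)
    print(result.summary())
    forecast = result.get_forecast(steps=12).predicted_mean   # negative forecasts would be truncated at zero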



58. A MULTIFACTOR EXPERIMENT ON A MANUFACTURING PROCESS
Authors: Grazia Vicario (Mathematics Department - Politecnico di Torino) and Pier Marco TESTA, Raffaello LEVI (Politecnico of Turin)
Keywords: Fractional Factorial Design, Interactions, Range, Pattern
Format: presentation (Design of experiments)
Contact: grazia.vicario@polito.it

An experimental investigation was performed at a production plant of a tier-one supplier to the motor car industry, manufacturing subassemblies ready for the final assembly line. Metal and glass components are bonded for life with a proprietary process; bond quality is assessed in terms of resistance to creep under static loading at elevated temperature, and strength at room temperature, taking the aging effect into account. Specimens taken from the production line are routinely tested to destruction, to demonstrate product properties exceeding given specification thresholds, as required for batch acceptance. The production process being stable and altogether satisfactory, some additional information was deemed useful on single and combined effects of several parameters, singled out from a comprehensive list initially proposed. The manufacturer's requests and some specific constraints led to replicating a fractional factorial design, the drawbacks of confounding being more than offset by specific information on scatter which could not be obtained without replication. Test pieces were manufactured on the production line and tested on standard testing facilities, close relevance of results to actual production being sought. Strong interactions, often exceeding effects of single factors, were observed on both mean and range of the main response, the scatter pattern exhibiting a connection with failure modes. Some mechanisms affecting product properties were clarified in the light of the experimental results, and process parameter combinations were identified catering for a cycle time substantially shorter than current settings, yet consistent with full capability of meeting product specifications.



59. Bivariate negative binomial models for environmental count data with constantly correlated covariance structure
Authors: Masakazu Iwasaki (The University of Tsukuba, Graduate School of Business Sciences) and Hiroe Tsubaki (The University of Tsukuba, Graduate School of Business Sciences)
Keywords: Bivariate Negative Binomial Models, Bivariate Generalized Linear Models
Format: presentation (Statistical modelling)
Contact: iwskmail@sea.plala.or.jp

We propose a new bivariate negative binomial model with a constant correlation structure, derived from a contagious bivariate distribution of two independent Poisson mass functions by mixing with our proposed bivariate gamma-type density with constantly correlated covariance structure (Iwasaki and Tsubaki, 2001), which satisfies the integrability condition of McCullagh and Nelder (1989, p.334). The underlying bivariate gamma-type density comes from the natural exponential family (Iwasaki and Tsubaki, 2002). At this meeting we discuss our bivariate discrete distribution within the framework of bivariate generalized linear models and illustrate an environmental application to clarify the idea. We use yearly time series data on counts of occurrences of abnormally high oxidant concentrations in Tokyo and the adjacent Chiba area, and fit our proposed bivariate negative binomial generalized linear models to these data. We first estimate the model parameters by fitting univariate GLMs with a negative binomial distribution separately, ignoring the correlation structure between the two adjacent areas. Secondly, we apply our bivariate negative binomial models to the data simultaneously and compare the estimated parameters and other statistics. Our tentative examination suggests that the standard errors of the estimated parameters of the bivariate negative binomial models are smaller than those of the separately estimated univariate models. This result means that taking the correlation structure into account when estimating the model parameters makes it possible to reflect the area-specific characteristics in the models more sensitively. Our proposed model therefore offers insight towards a new statistical procedure.
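
A minimal sketch of the first, univariate step described above - fitting separate negative binomial GLMs to the yearly counts in the two areas while ignoring their correlation; the data file, the column names, the single covariate and the dispersion parameter alpha are assumptions:

    import pandas as pd
    import statsmodels.api as sm

    data = pd.read_csv("oxidant_counts.csv")                 # hypothetical yearly counts per area
    X = sm.add_constant(data[["year"]])                      # assumed trend covariate
    for area in ["tokyo", "chiba"]:
        model = sm.GLM(data[area], X, family=sm.families.NegativeBinomial(alpha=1.0))
        print(area, model.fit().params.values)               # compare later with the bivariate fit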



60. A Semi-Bayesian Method for Shewhart Individuals Control Charts
Authors: Thijs Vermaat (IBIS UvA) and Ronald Does (IBIS UvA)
Keywords: Bernstein approximation, control charts, non-parametrics, statistical process control, semi-Bayesian
Format: presentation (Process modelling and control)
Contact: tvermaat@science.uva.nl

Statistical process control (SPC) is used to control the quality of processes. The basic tool of SPC is the control chart, whose purpose is to detect assignable causes of variation. The classical method to estimate the control limits of the control chart for individual observations is based on the average of the moving ranges. This method assumes that the data are normally distributed, whereas in practice the outcomes of a process often deviate from normality. In this presentation a semi-Bayesian density estimation method is used to establish the control limits of a Shewhart individuals control chart. The method makes use of an initial guess of the distribution of the characteristic under study; based on this initial guess and the observed data a density function f can be derived. The performance of the control chart is studied by means of simulation, and the results of the new method are compared with the control chart based on the average of the moving ranges.
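
For reference, a minimal sketch of the classical benchmark mentioned above - an individuals chart with limits set from the average moving range as xbar +/- 2.66*MRbar (2.66 = 3/d2 with d2 = 1.128); the skewed sample data are illustrative, and the semi-Bayesian Bernstein-based limits of the paper are not reproduced:

    import numpy as np

    x = np.random.default_rng(1).gamma(shape=4.0, scale=2.0, size=50)   # skewed, non-normal process
    mr_bar = np.mean(np.abs(np.diff(x)))                                 # average moving range
    center = x.mean()
    lcl, ucl = center - 2.66 * mr_bar, center + 2.66 * mr_bar
    print(f"LCL={lcl:.2f}  CL={center:.2f}  UCL={ucl:.2f}")
    print("out-of-control points:", np.where((x < lcl) | (x > ucl))[0] + 1)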



103. Utilisation of methods and tools for quality improvement in Polish Industry
Authors: Adam Jednorog (Center for Advanced Manufacturing Technologies (CAMT)) and Kamil Torczewski, Monika Olejnik, Shirley Coleman
Keywords: quality improvement, survey
Format: presentation (Six Sigma and quality improvement)
Contact: A.Jednorog@camt.pl

Many Polish organizations are on their way to improving the quality of their products and processes. Mostly their efforts are limited to the implementation of quality management systems according to the ISO 9000 standards; implementation of anything beyond this standard level is not very common. This paper presents the results of a survey conducted among Polish companies. The degree of knowledge and the degree of use of quality improvement tools and techniques were investigated in relation to organization size, branch, ownership and the share of foreign capital. The influence of these methods, including statistical ones, on decisions, actions and achieved results connected with quality was studied. The ways in which employees get to know about these tools and techniques, along with their perception of their usefulness, were also investigated. Finally, the readiness and willingness of Polish organizations to implement a Six Sigma strategy were assessed.



61. Statistical models for operational risk management
Authors: Paolo Giudici (University of Pavia) and Gabriele Stinco, Monte dei Paschi di Siena
Keywords: Bayesian Networks, Operational risk measurement, Value at Risk
Format: presentation (Data mining)
Contact: giudici@unipv.it

The exposure of banks and financial institutions to operational risk has increased in recent years. The Basel Committee on Banking Supervision has established a capital charge to cover operational risks, distinct from credit and market risk. In line with the advanced methods defined in “The New Basel Capital Accord” to quantify the capital charge, in the talk we shall present a causal Advanced Measurement Approach, based on a Bayesian network, that estimates an internal measure of risk for the bank. One of the principal problems in measuring operational risk is the scarcity of loss data. The methodology we have applied addresses this critical point because it allows a coherent integration of different sources of information, such as internal and external data and the opinions of experts (process owners) about the frequency and the severity of each loss event. The model corrects the loss distribution by considering the possible relations between different nodes of the network, which represent the losses of each combination of business line/event type/bank/process and the effectiveness of the corresponding internal and external controls. The operational risk capital charge is quantified by multiplying the VaR per event, a percentile of the loss distribution, by an estimate of the number of loss events. The methodology we shall present in this document is the result of a close cooperation between the University of Pavia and one of the most important Italian banking groups, Monte dei Paschi di Siena (MPS).
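
As a purely illustrative sketch of the last step described (not the bank's model): the capital charge is the product of a per-event VaR, taken as a high percentile of the loss-severity distribution, and an estimate of the number of loss events; the lognormal severity, the 99.9% level and the event count below are assumptions.

    import numpy as np

    rng = np.random.default_rng(42)
    severity = rng.lognormal(mean=8.0, sigma=1.5, size=100_000)   # simulated per-event losses
    var_per_event = np.quantile(severity, 0.999)                   # high percentile of the loss distribution
    expected_events = 35                                           # assumed yearly number of loss events
    capital_charge = var_per_event * expected_events
    print(f"VaR per event: {var_per_event:,.0f}  capital charge: {capital_charge:,.0f}")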



101. Working as a statistical Consultant - A Praxis Report
Authors: Anja Schleppe (Autoliv GmbH) and Andrea Ahlemeyer-Stubbe
Keywords: Statistical Consultant, Project Management
Format: presentation (Statistical consulting)
Contact: anja.schleppe@autoliv.com

Statistical consultants usually work in two different roles, as an internal or as an external consultant. The talk describes the main differences between the two situations. We would like to point out that working as a statistical consultant is much more than just doing data preparation and data analysis. Ideally, statistical consultants are involved in all business processes connected to their work. We show and explain the hard and soft skills that are needed to be successful as a statistical consultant. Additionally, we explain how project management is related to the consulting work and how it changes the job details. This talk should give statisticians a guideline for their everyday consulting work, and it helps them to figure out which level of skills they have already reached and what their next steps might be. We would like to use many practical examples based on our more than 10 years of experience in consulting in several industries.



63. A Design Messages Detection System and Application
Authors: Jaw-Sin (Hurdy) Su (Chinese Culture University) and Yi-Kuang Lin, Chi Yuan Chuang
Keywords: Messages Detection System (MDS), Visual Community Station (VCS), Hash function, Keyless-in-context
Format: presentation (Web mining and analysis)
Contact: s250139@ms12.hinet.net

We construct a message detection system (MDS) to keep true messages and delete fraudulent ones, for use by a visual community station (VCS). The MDS comprises three steps. First, a keyword search technique is used to delete messages not related to the topic. Second, a message authentication code based on a hash function is used to determine whether a message is original data. Third, the keyless-in-context technique is used to combine messages when more than one comes from the same sender. After the messages have been sorted by the MDS, they remain true and free of fraud. To demonstrate practical use, different kinds of messages are shown as examples. The conclusions are that the MDS is an efficient message-sorting system and that, after sorting by the MDS, the messages are more useful to the user.
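
A minimal sketch of the second step as described - checking, via a hash-based message authentication code, that a stored message has not been altered; the shared key and the example messages are assumptions, not details from the paper:

    import hmac, hashlib

    KEY = b"shared-station-key"                                   # hypothetical shared secret

    def tag(message: str) -> str:
        return hmac.new(KEY, message.encode("utf-8"), hashlib.sha256).hexdigest()

    def is_original(message: str, stored_tag: str) -> bool:
        return hmac.compare_digest(tag(message), stored_tag)

    t = tag("meeting moved to 15:00")
    print(is_original("meeting moved to 15:00", t))   # True: message is original data
    print(is_original("meeting moved to 16:00", t))   # False: message was altered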



64. Statistical evaluation of the density of a compressed polyurethane item
Author: Antonio Pievatolo (IMATI-CNR)
Keywords: litigation, compression delay, experimental plan
Format: presentation (Statistical consulting)
Contact: marco@mi.imati.cnr.it

A consultancy was given to the suing party in a patent litigation concerning a machine for the production of compressed polyurethane items. The European patent protects the production process of delaying the compression of the liquid polyurethane mixture after it has filled the appropriate cavity. An experimental plan with different delays and an appropriate number of replicates was set up; the results were analysed using an analysis-of-variance model with fixed effects.



65. An experiment to compare the combined array and the product array for robust parameter design
Authors: Joachim Kunert (Universitaet Dortmund) and C. Auer (Department of Statistics, University of Dortmund), M. Erdbrügge (Department of Statistics, University of Dortmund), R. Göbel (Chair of Forming Technology (LFU), University of Dortmund)
Keywords: Robust parameter design; noise factor; design factor; signal to noise ratio; product array; combined array; interaction plot
Format: presentation (Design of experiments)
Contact: joachim.kunert@udo.edu

In robust parameter design, it is widely advocated to use the combined array approach instead of Taguchi's product array for experiments with noise factors. Even when a product array is carried out, statisticians recommend replacing the traditional analysis that uses signal-to-noise ratios by an interaction analysis. The arguments are convincing; however, they are based on theoretical considerations or the reanalysis of past experiments. We did not find published examples where the predictions from both approaches are checked by confirmation experiments. The present paper reports the data from an experiment where we used both kinds of designs simultaneously. The experiment was done as part of a project dealing with statistical design to optimise the process of sheet metal spinning. As expected, we found that the results of the combined array allowed clearer identification of location factors than the product array. However, the product array allowed us to find a control factor which has an influence on the variance of the response; there were no indications of this effect in the combined array. A confirmation experiment supports the finding that there is such a control factor. It seems that the effect on the variance is due to three-factor interactions that were not considered in the combined array. In the authors' opinion this shows that the traditional approach with product arrays is more robust to imprecise model assumptions.



66. Observational data and optimal experimental design discriminating between more than two models: the definition of weights
Author: Rossella Berni (Department of statistics- University of Florence)
Keywords: T-optimal design; observational data
Format: presentation (Design of experiments)
Contact: berni@ds.unifi.it

The paper aims at improving the procedure, shown in a previous work, pointing out the relevance of using large data sets collected in industry to build an experimental design, mainly directed at improving the quality of the product or of the production process. The starting point is the use of observational data to implement an experimental design without additional runs, for an efficient use of these data, exploiting the theory of optimal designs. In particular, we suggested a multi-step procedure in which each step points towards optimality of the selected trials, and the final set of trials constitutes an optimal experimental design built through sequential plans. In this work we suggest a further improvement of this approach, discriminating between more than two models. In this case the suggested algorithm must be revised in order to discriminate between several equally close models by assigning a weight to each one. The problem is the specification of the weights, either taking account of the observations available at each iteration or considering only weights defined a priori. An empirical example on real data is shown.



67. Linking the network economy: A case study of how manufacturing SMEs grow by linking into the global network
Author: Jiahui Peng ()
Keywords: SME, linking, global network
Format: presentation (Business and economics)
Contact: jhpeng@mail.cyut.edu.tw

Small and medium enterprises (SMEs) have become significant forces driving economic growth in most countries, and they play major roles in the global economy as well. However, very few studies examine how SMEs grow by linking to global networks. After decades of research, the theoretical construction and models of network architecture are still poorly understood in terms of network inter-relationships, interactions, and the adjustments and adaptations in strategy, systems, organization and human behaviour. This case study describes how SMEs link to global networks through a three-dimensional model: micro, macro and intra-organizational. The micro level provides some clues to understanding how the network is constructed and operated, while the macro level, defined as the local and dependent network, illustrates new ways for SMEs to grow and sustain themselves. Finally, a series of management adjustments in interaction skills, IT implementation, organizational adjustment and behavioural adaptation is drawn for the new economy. However, the new models are still evolving and firms switching between networks are emerging; the challenges for management are endless. The paper also concludes that further empirical research is required to test the models and management paradigms.



68. The Two-Stage Shrinkage Estimation of Entropy of a Normal Distribution
Authors: MAKARAND RATNAPARKHI (WRIGHT STATE UNIVERSITY) and Vasant B. Waikar, Miami University, Oxford, Ohio, USA
Keywords: Entropy, estimation, two-stage shrinkage, bootstrap, efficiency, SPC, SQC
Format: presentation (Process modelling and control)
Contact: makarand.ratnaparkhi@wright.edu

The normal distribution and its variance are used in many quality control and process control applications. In this paper, we suggest the use of entropy instead of variance for such applications. For the estimation of entropy, we consider the two-stage bootstrap shrinkage procedure described below. At the first stage of estimation, a hypothesis regarding the variance is tested. If the hypothesis is accepted, a shrinkage estimator of the variance, based on the value of the variance under the null hypothesis, is defined with a shrinkage factor, say k. The factor k is a function of the test statistic and the level of significance. For the refinement (smoothing) of the shrinkage factor k, we use bootstrap sampling to generate a series of values of k. The final value of k, denoted by k*, is a function of these values, for example their average. If the hypothesis is rejected, we proceed to the second stage. At this stage, a second sample is selected and a pooled estimator of the variance based on the two samples is obtained. Then we define the final combined estimator of the variance. This estimator is shown by simulation to be more efficient than the usual estimators. Finally, we express the estimator of entropy using this two-stage shrinkage estimator and present related applications in SPC and SQC.
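
For reference (my notation, and the exact form of the shrinkage estimator is an assumption consistent with the description above), the entropy of a N(mu, sigma^2) distribution and a typical shrinkage estimate of the variance towards the null value sigma_0^2 are

    H(\sigma^2) = \tfrac{1}{2}\,\log\bigl(2\pi e\,\sigma^2\bigr),
    \qquad
    \tilde{\sigma}^2 = k^{*}\,\sigma_0^{2} + (1 - k^{*})\,s^{2},

so that the entropy estimate is obtained by plugging the shrunken variance into H.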



69. Process Optimization in Sheet Metal Spinning - Efficient Application of Statistical Methods in Low Volume Production
Author: Roland Göbel (University of Dortmund, Chair of Forming Technology)
Keywords: Design of Experiments, Efficient Application, Metal Spinning
Format: presentation (Design of experiments)
Contact: goebel@lfu.mb.uni-dortmund.de

Methods of statistical design of experiments have successfully been used in many industrial applications. However, for highly flexible but very complex manufacturing processes in low volume production their efficiency is limited. One example of such a process is metal spinning. Due to frequently changing conditions and very small stable regions in the parameter space, a successful application of statistical methods is very difficult. Problems arise as early as the phase in which proper factor levels are to be defined, and manifest themselves in an intolerably high number of preliminary experiments. Because of this, up to now ‘one-factor-at-a-time’ experiments and intuitive engineering methods have been used. In this contribution an efficient application of statistical design of experiments in sheet metal spinning is presented and discussed. Within this approach, first of all the integration of existing knowledge beyond the use of well-known methods like flow charts or cause-and-effect diagrams has been tackled. Only the combination of general, workpiece-unspecific knowledge, knowledge of similar geometries and a basic physical description of the process proved to be successful. The experiments, which aimed at the optimization of the workpiece properties, were, in addition to the difficulties addressed above, characterized by the existence of categorical responses and the indispensable need to perform a multivariate optimization. For this, methods like proportional-odds models or the newly developed joint optimization plots have been used. The results of this statistical approach have been compared with the traditional optimization strategy. This comparison showed that with the conventional approach it was possible to optimize some of the major quality characteristics, while at the same time other characteristics were negatively influenced. In contrast, with the statistics-based approach all characteristics could be improved simultaneously. Beyond that, the number of experiments could be significantly reduced.



70. A Quantile Estimation with Local Smoothing for Obtaining Critical Values
Authors: Seiichi YASUI (Tokyo University of Science) and Yoshikazu OJIMA (Tokyo University of Science); Tomomichi SUZUKI (Tokyo University of Science)
Keywords: Quantile Estimation, Local Smoothing, Empirical Distribution
Format: presentation (Design of experiments)
Contact: j7403702@ed.noda.tus.ac.jp

To introduce a new hypothesis testing procedure, it is necessary to obtain quantiles of the test statistic for a table of critical values. When the distribution of the test statistic is highly complex, the critical values are usually obtained from the empirical distribution using the Monte Carlo method. However, precise critical values are required when the tests are executed sequentially, because errors in the determination of the critical values at early stages affect the critical values at later stages. The calculation of such precise critical values is needed in some of the analysis methods for orthogonal array experiments (fractional factorial experiments). This paper proposes an improved quantile estimator obtained by locally smoothing and interpolating the empirical distribution in the neighbourhood of the quantile. The smoothers are linear and quadratic polynomials and a natural spline. We then select the fitted curve by AIC and estimate the quantile of interest from this curve. The accuracy of the quantile estimates obtained by the proposed method is evaluated for some known distributions, and the method is then applied to obtain the critical values of an existing procedure; the resulting quantiles are more accurate.
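
A minimal sketch of this idea (the neighbourhood width, the use of plotting positions i/(n+1) and the restriction to linear and quadratic fits are assumptions; the natural-spline smoother of the paper is omitted):

    import numpy as np

    def smoothed_quantile(sample, p, window=25):
        x = np.sort(sample)
        n = len(x)
        probs = np.arange(1, n + 1) / (n + 1)                  # plotting positions of the order statistics
        centre = int(round(p * (n + 1))) - 1
        lo, hi = max(0, centre - window), min(n, centre + window + 1)
        u, v = probs[lo:hi], x[lo:hi]                          # neighbourhood of the empirical quantile
        best = None
        for degree in (1, 2):                                  # local linear and quadratic fits
            coef = np.polyfit(u, v, degree)
            rss = np.sum((v - np.polyval(coef, u)) ** 2)
            aic = len(u) * np.log(rss / len(u)) + 2 * (degree + 1)
            if best is None or aic < best[0]:
                best = (aic, coef)
        return np.polyval(best[1], p)                          # read the fitted curve off at p

    sample = np.random.default_rng(0).standard_normal(5000)
    print(smoothed_quantile(sample, 0.95))                     # compare with the exact value 1.645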



71. Allocation and Estimation of Effects Using Hadamard Matrices
Authors: Tomomichi Suzuki (Tokyo University of Science) and Seiichi Yasui (Tokyo University of Science), Yoshikazu Ojima (Tokyo University of Science)
Keywords: design of experiments, orthogonal array, Hadamard matrix, interaction
Format: presentation (Design of experiments)
Contact: suzuki@ia.noda.tus.ac.jp

Fractional factorial design applying orthogonal arrays is an effective tool in the design of experiments. For the L16 orthogonal array, the possible allocations of main effects and their interactions have been extensively examined and widely reported. Hadamard matrices are a generalized version of two-level orthogonal arrays, and there exist five inequivalent Hadamard matrices of order 16; one of them is mathematically equivalent to the L16 orthogonal array. We investigated the four remaining Hadamard matrices, mainly concerning how the interaction of two particular columns appears. As a result, we have clarified the relations of the interactions between any two columns for all the Hadamard matrices of order 16. We also found that there are allocations which are impossible with the L16 orthogonal array.
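
A minimal sketch of the kind of column calculation involved, using the Sylvester-type Hadamard matrix of order 16 from scipy (the one equivalent to the L16 array): the elementwise product of two columns is the two-factor interaction contrast, and for this matrix it coincides with another column, whereas for the four inequivalent matrices studied in the paper this need not be the case.

    import numpy as np
    from scipy.linalg import hadamard

    H = hadamard(16)                       # entries +1/-1, first column is the all-ones column
    interaction = H[:, 3] * H[:, 5]        # interaction contrast of columns 3 and 5
    matches = [j for j in range(16) if np.array_equal(interaction, H[:, j])]
    print("interaction of columns 3 and 5 aliases column(s):", matches)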



73. Analysis of covariance in the market of Tenerife
Authors: YENIS MARISEL GONZÀLEZ MORA (UNIVERSITY OF LA LAGUNA) and María Mercedes Suárez Rancel
Keywords: Outliers, Masking and Swamping, Influence
Format: presentation (Design of experiments)
Contact: ygonmor@gobiernodecanarias.org

Tenerife is a tourist island whose central economic activity is tourism. This sector has improved thanks to the construction industry, which has developed during the last few years. The most important problem in this activity is the fixing of the prices of flats, which is the main responsibility of the sales manager of each construction company. A closer study revealed how haphazard this pricing is. In this article we try to construct a suitable model for fixing the prices of flats on the basis of prior information. The study population consists of real data on the prices of apartments provided by a building company.



74. Malfunction detection of an on-board diagnostic car system in the presence of highly correlated data
Authors: Stefano Barone (University of Palermo, Italy - Dept. of Technology, Production and Managerial Engineering) and Paolo D'Ambrosio (University of Naples); Pasquale Erto (University of Naples)
Keywords: Statistical Process Control, Statistical Monitoring, Autocorrelation, ARMA models, Engineering Control, On Board Diagnostics, OBD
Format: presentation (Process modelling and control)
Contact: stbarone@dtpm.unipa.it

New-generation car models are increasingly equipped with self-diagnostic electronic systems aimed at monitoring the health state of critical components. The monitoring activity proceeds through the analysis of diagnostic indices. The measurements of such variables are frequently autocorrelated, so that applying traditional control charts produces too many false alarms. In order to overcome this problem, a possible approach consists in using time series models that account for the data autocorrelation and then applying control charts to the residuals. In this paper, the authors present the preliminary results of a research project, conducted in collaboration with a car manufacturer's research center, aimed at the evaluation of the quality and reliability levels of an anti-pollution on-board diagnostic system during its latest development phases on a new vehicle model. Purpose-designed software has been developed, enabling only the data necessary for the analysis to be filtered from a huge experimental database. For one of the several monitored diagnostic indices, the ARMA model fitted to the data is presented together with graphical output and statistical analysis. The overall methodology and the easy-to-use software allow engineers to promptly detect anomalous behaviors of the diagnostic system and possibly to remove their causes before mass production of the new vehicle model starts.
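
A minimal sketch of this general residual-charting approach (the ARMA(1,1) order, the data file and the 3-sigma limits are assumptions, not the index or the model of the paper):

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    index = pd.read_csv("diagnostic_index.csv")["value"]        # hypothetical on-board index series
    fit = ARIMA(index, order=(1, 0, 1)).fit()                   # ARMA(1,1) = ARIMA with d = 0
    resid = fit.resid
    ucl, lcl = 3 * resid.std(), -3 * resid.std()                # Shewhart-type limits on the residuals
    alarms = resid[(resid > ucl) | (resid < lcl)]
    print("possible anomalies at observations:", list(alarms.index))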



75. Optimal cost control charts for shift-detection
Authors: András Zempléni (Eötvös Loránd University, Dept. of Probability and Statistics) and Belmiro Duarte (Instituto Superior de Engenharia de Coimbra, Portugal), Pedro Saraiva (Department of Chemical Engineering, University of Coimbra, Portugal)
Keywords: control charts, cost function, Markov chains, shift in process mean
Format: presentation (Statistical modelling)
Contact: zempleni@ludens.elte.hu

Control charts are one of the most widely used tools in industrial practice for achieving process control and improvement. One of the critical issues associated with the correct implementation of such a tool is the definition of control limits and sampling frequencies. Very often these decisions are not well supported by sound statistical or economic decision-making criteria, leading to suboptimal use and results. In this talk we expand upon work previously presented at the 2nd ENBIS conference, where we investigated the problem of one-sided random shifts for processes following a normal distribution and the case of exponential shift-size distributions. In our model for optimal control charting we assigned different costs to sampling, undetected out-of-control events and false alarms. We now extend that work to shift-size distributions other than the exponential and investigate the robustness of our procedures to violations of the normality assumption for the underlying process. We will present simple yet powerful methods, derived from this approach, for monitoring processes where one has to face frequent changes in the process behaviour. Throughout the work we use Markov chains to find the optimal chart parameters. Acknowledgments: This work was developed by members of the Pro-ENBIS network, which obtained financial support from the EU project GTC1-2001-43031.



76. Robust Design and Statistical Modelling
Author: Ron Bates (London School of Economics)
Keywords: Design of experiments, statistical modelling, robust design
Format: presentation (Statistical modelling)
Contact: R.A.Bates@lse.ac.uk

The EU project on robust engineering design is called TITOSIM: Time to market reduction via statistical information management. It builds on an earlier project (CE2: Computer Experiments for Concurrent Engineering) in a number of significant ways. The basic architecture involves three stages: experimental design, emulator (that is, model) fitting and optimisation. In all three parts there are innovative additions. Originally CE2 was based on emulation of computer code using kriging methods with a Gaussian covariance kernel. This method is now quite widely used in the automotive sector in areas such as engine design, engine mapping and crash simulation. Recent methods include Bayesian methodologies and, depending on the field of application, there is emphasis on optimisation (engineering) or sensitivity analysis (engineering, certainly, but also fields like environmental and geophysical modelling). In TITOSIM the emphasis is on optimisation for performance indicators which include robustness criteria. There is also the facility to analyse real physical experiments. This paper concentrates on the use of emulation methods such as multivariate polynomial modelling, radial basis function methods and kriging for robust design.
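
For reference, the kriging emulator with a Gaussian covariance kernel mentioned above is typically written (my notation) as a constant or regression trend plus a Gaussian process whose correlation decays with the squared distance between input points:

    y(\mathbf{x}) = \mu + Z(\mathbf{x}),
    \qquad
    \operatorname{Cov}\bigl(Z(\mathbf{x}), Z(\mathbf{x}')\bigr)
    = \sigma^2 \exp\Bigl(-\sum_{j} \theta_j \,(x_j - x_j')^2\Bigr),

where the theta_j control how quickly the emulator's predictions revert to the trend away from the design points.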



77. Two Pairs of Gauging Scores Control Charts for Monitoring the Process of Slot Machines
Authors: Isaia Ennio Davide (Dep. of Statistics & Mathematics) and Alessandra Durio, Dep. of Statistics and Mathematics
Keywords: Control charts, Statistical Process Control, Markov Chains
Format: presentation (Statistical modelling)
Contact: isaia@econ.unito.it

In the field of statistical process control, Shewhart's charts are widely recognized as a simple and effective tool to judge whether a production process can be considered, at a given time and with reference to the characteristic being monitored, either in statistical control or out of control. The simplicity of the method, however, comes at the cost of the information which can be gathered from the cumulative results of earlier observations, which would make it possible to discern possible trends of sample observations approaching the control limits; in the literature we find several alternative methods, such as, for instance, CUSUM or EWMA. This paper (e.g. Isaia, 2001) offers an alternative criterion, previously suggested by Page (1962) and Munford (1980), which occupies an intermediate place between the methods mentioned above; it is essentially based on a ``go-no go'' device, a suitable scoring system and an associated cumulative control statistic. In the following we introduce a scheme based on two pairs of gauges that improves the sensitivity of Munford's cumulative score chart for detecting a large shift in the mean, in the sense that when the process is out of control, observations with higher scores will be more likely to appear and the shift may be detected more quickly. We applied these control charts to monitoring the monthly coin-in and earnings processes of several slot machines operating in an official Italian casino.



78. Experiences in delivering a multicultural Six Sigma Black Belt training programme
Authors: Matthew Linsley (ISRU, University of Newcastle upon Tyne) and Kamil Torczewski, Adam Jednorog, Dave Stewardson, Shirley Coleman
Keywords: Six Sigma, multicultural, training, case studies
Format: presentation (Six Sigma and quality improvement)
Contact: M.J.Linsley@ncl.ac.uk

ISRU is part of the School of Mechanical and Systems Engineering, University of Newcastle upon Tyne, having been formed in 1984. The unit has a long history of providing expert statistical support to both large international organisations and regional SMEs. CAMT was established in 1994 in the Institute of Production Engineering and Automation at Wroclaw University of Technology. CAMT concentrates on research, training and technology transfer in the scope of modern production, and is acknowledged as a leading research centre and technology provider in Poland. This paper describes the benefits of a multicultural Six Sigma Black Belt training programme that the partnership has delivered to a long-term international manufacturing (mechanical sector) client in Wroclaw. Delegates from two plants within the organisation (in both Eastern and Western Europe) have attended the training programme, highlighting the diversity of the project. Following the standard Six Sigma format, the main component of the package is a four-week training programme (M, A, I, C) that is delivered to Black Belt candidates in order for them to acquire the statistical skills and methodology required to reach the level of Black Belt certification. In addition, the programme includes a more comprehensive Define phase that concentrates on teamwork, people and management skills. A framework for successful individual project selection and project support is included in the six-month training programme. Selected project case studies are introduced alongside details describing the exact nature of the working relationship between the two universities and how the project was subsidised.



79. Optimal Experimental Plans for Experiments on Diesel-Fueled Motors towards Predicting Exhaust Emissions
Authors: José Miguel Carot Sierra (Polytechnic University of Valencia) and Hernández, L.; Martínez, M.; Miró, P.; Jabaloyes, J.M.; Carrión, A.
Keywords: Optimal Experimental Plans, Exhaust Emission Prediction, Diesel-Fueled Motors
Format: presentation (Design of experiments)
Contact: jcarot@eio.upv.es

The continuous reduction of the pollutant emission limits imposed by the regulation of Diesel engines for automobile use has become a challenging task for the industry. It is therefore necessary to develop new control strategies and to reduce pollutant emissions without deteriorating fuel consumption or specific power, or increasing the engine cost. In response to the new requirements, the Diesel engine industry has adopted new technical solutions related to the performance of engines, such as EGR, multiple injection, variable geometry turbines, etc. In this context, modelling has become an essential instrument in the reduction of toxic emissions. In the development of any kind of model, the process of obtaining experimental data is very important. But actual engines are complex systems: on the one hand, the number of variables has increased greatly in recent years and some of these variables are not independent; on the other hand, there are many restrictions on their combinations. This study applies the statistical technique of experimental design under the criterion of D-optimality. Designs based on this criterion let us work with process variables under multilinear restrictions, so we can minimize the negative consequences of non-orthogonality. Finally, we have built response surfaces based on quadratic models.
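
For reference (standard notation, not taken from the paper), a D-optimal design for the quadratic response surface model chooses the design points, within the multilinearly constrained region, to maximise the determinant of the information matrix:

    y = \beta_0 + \sum_i \beta_i x_i + \sum_{i \le j} \beta_{ij} x_i x_j + \varepsilon,
    \qquad
    \xi^{*} = \arg\max_{\xi}\,\bigl|\mathbf{X}(\xi)^{\top}\mathbf{X}(\xi)\bigr|,

where X(xi) is the model matrix of the quadratic model at the candidate design xi; maximising the determinant minimises the generalized variance of the coefficient estimates, which limits the damage done by non-orthogonality.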



81. Fractionation of Two-level Designs for Multi-Step Processes (Preserving the Split-Plot Structure)
Authors: murat kulahci (Arizona State University) and Jose Ramirez (W.L. Gore and Associates); Mark Cotter (W.L. Gore and Associates); Randy Tobias (SAS Institute, Inc.)
Keywords: Fractional Factorial, Multi-Step Processes, Split-Plot
Format: presentation (Design of experiments)
Contact: kulahci@asu.edu

The great majority of processes in the chemical industry occur in several steps, which gives these multi-step processes an inherent split-plot structure. As a direct consequence, designing experiments for such processes presents a challenge, as the increasing number of factors, which usually means an increasing number of runs, requires excessively large designs. In this study, we present two examples where two-level factorial designs for multi-step processes, with a reasonable number of runs, are obtained via fractionation. The case study under consideration involves a three-stage process with 1, 7 and 8 two-level factors in stages one, two and three respectively, or (2^1)x(2^7)x(2^8). The full factorial requires 2^16=65536 runs, obviously not a very practical choice. We present a resolution IV design that preserves the split-split-plot structure and has only 64 runs. The experiment is replicated for a total of 128 runs, only 0.2% of the full design, with the majority of two-factor interactions estimated clearly. The experiment is used in a chemical process at one of the W.L. Gore and Associates' plants, and considerable improvements, such as a 5-8% increase in process yield, a 30% decrease in product lead time and a 15% decrease in the overall cost of the process, have already been obtained. Currently there is no easy way to design such experiments. One way to deal with this problem is to "trick" PROC FACTEX in SAS into generating this type of split-plot fractional factorial. New developments on obtaining fractional factorial split-plot designs that are not necessarily minimum aberration but yield desired outcomes, in terms of more main effects and/or two-factor interactions being free from confounding, will also be discussed.
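
The basic fractionation step can be illustrated with a short sketch that builds a two-level fractional factorial from defining generators. The generators below define a small 2^(6-2) design for illustration only; they are not the 64-run split-split-plot design described in the abstract.

    # Minimal sketch: construct a two-level fractional factorial from generators.
    import itertools
    import numpy as np

    def fractional_factorial(n_base, generators):
        """n_base basic factors in -1/+1 coding; each generator is a tuple of
        basic-factor indices whose product defines an added factor."""
        base = np.array(list(itertools.product([-1, 1], repeat=n_base)))
        added = [np.prod(base[:, list(g)], axis=1) for g in generators]
        return np.column_stack([base] + added)

    # Example: a 2^(6-2) design with E = ABC and F = BCD (illustrative generators).
    design = fractional_factorial(4, [(0, 1, 2), (1, 2, 3)])
    print(design.shape)   # (16, 6): 16 runs, 6 two-level factors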



84. The Effects of Marketing Activities on Fast Moving Consumer Goods Purchases: the Case of the Italian Yoghurt Market
Authors: Sergio Brasini (Department of Statistical Sciences - University of Bologna) and Marzia Freo and Giorgio Tassinari (Department of Statistical Sciences - University of Bologna)
Keywords: Brand Loyalty; Promotions; Rational Brand Choice Models; Fast Moving Consumer Goods
Format: presentation (Statistical modelling)
Contact: brasini@stat.unibo.it

A key role in the purchasing process is played by the consumer's brand loyalty. Indeed, brand loyalty is the main target at which the firm's marketing policy is aimed, in particular by means of short-term and tactical activities. For this purpose, the interaction between brand loyalty and promotional activities is also extremely interesting. Focusing on this aspect, we investigate both to what extent the effectiveness of sales promotions depends on the consumer's brand loyalty and buying behaviour, and to what extent the consumer's behavioural characteristics (purchase frequency and purchase level) affect the response to promotional activities and moderate the effect of brand loyalty during the choice process. Different specifications of the utility function, exploiting information on promotional activities, display usage, in-store ad features, discounts and different brand loyalty measures, are estimated within a discrete choice framework, that is, the rational brand choice paradigm, paying attention to their effects on the individual probabilities of choosing a specific brand on each purchase occasion. The application is based on an ACNielsen panel of Italian households, observed to buy at least one yoghurt package during a year, matched to scanner data on quantities, prices and promotions.
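
In a generic conditional logit formulation of this rational brand choice framework (a common specification, not necessarily the authors' exact model), the probability that household h chooses brand j on purchase occasion t is

    P_{hjt} = \frac{\exp(V_{hjt})}{\sum_{k} \exp(V_{hkt})}, \qquad V_{hjt} = \beta' x_{hjt},

where x_{hjt} collects the price, discount, display, feature and brand-loyalty covariates for brand j faced by household h on occasion t.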



87. A Case Study of Comparison of Multivariate Statistical Methods for Process Modelling
Authors: Susana Barceló (Universidad Politécnica de Valencia) and Vidal, S.; Ferrer, A. (Department of Applied Statistics, Operations Research and Quality, Polytechnic University of Valencia, Spain)
Keywords: Process Modelling, Multivariate Transfer Function Model, Finite Impulse Response, PLS Time Series
Format: (Process modelling and control)
Contact: sbarcelo@eio.upv.es

In this research we present a case study comparing two statistical methodologies for estimating a model of an industrial polymerization process. The estimated model can be used for designing model-based controllers for this multivariate process. Our approach is to model the process from input-output data using two multivariate statistical methods: the multivariate transfer function and PLS multivariate time series. In comparing the two methods we consider several issues, such as the simplicity of the modelling procedure (i.e. the identification, estimation and validation steps), the usefulness of the graphical tools, the goodness of fit and the parsimony of the estimated models. The case study involves a commercial-scale polymerization process that produces large volumes of a polymer (high-density polyethylene) of a grade used in many familiar consumer products. The model accounts for the four most important variables of the process: two outputs, namely the key quality characteristic, polymer viscosity, measured by the melt index (MI), and productivity, worked out by an energy balance (APRE); and two inputs or process variables, temperature (T) and ethylene flow (E). We estimate and compare the two models, the parsimonious transfer function model and the non-parsimonious finite impulse response model estimated by PLS time series, and discuss the advantages and disadvantages of both methodologies.
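
The non-parsimonious FIR-by-PLS idea can be sketched as a PLS regression of an output on a block of lagged inputs. The data below are simulated placeholders and the lag horizon is arbitrary; only the variable names (T, E, MI) follow the abstract.

    # Minimal sketch: fit a Finite Impulse Response model by PLS, regressing an
    # output (melt index MI) on lagged inputs (temperature T, ethylene flow E).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    n, lags = 500, 20
    T = rng.normal(size=n)        # input 1: temperature (placeholder data)
    E = rng.normal(size=n)        # input 2: ethylene flow (placeholder data)
    MI = rng.normal(size=n)       # output: melt index (placeholder data)

    # Regressor matrix of lagged inputs: [T_{t-1..t-L}, E_{t-1..t-L}]
    rows = range(lags, n)
    X = np.array([np.r_[T[t-lags:t][::-1], E[t-lags:t][::-1]] for t in rows])
    y = MI[lags:]

    pls = PLSRegression(n_components=4)   # a few latent variables handle collinearity
    pls.fit(X, y)
    print("R^2 on training data:", pls.score(X, y))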



88. PVC QUALITY MONITORING THROUGH STATISTICAL REASONING
Authors: Marco Reis (University of Coimbra) and Rui Telmo (CIRES), Marco S. Reis (University of Coimbra), Pedro M. Saraiva (University of Coimbra) and Pedro Gonçalves (CIRES)
Keywords: PVC quality monitoring, SPC
Format: poster (Process modelling and control)
Contact: marco@eq.uc.pt

The industrial production of PVC in suspension is not only an important process from an economic point of view but also presents striking challenges from an engineering perspective. In fact, numerous factors can affect product quality, some of which are quite difficult to deal with, like raw material variability, accumulation of contaminants, or safety issues arising from the large amounts of energy released during the polymerization reaction. Industrial practice developed over the years has gathered an important knowledge base for dealing with the different sources of variability, in order to manufacture polymer grades that match the required product quality targets. However, it is important to examine the existing procedures from a statistical perspective, not only to evaluate their effectiveness critically, but also to translate them into more objective and reproducible process control rules. In the present work, we describe the main steps being taken to promote a smooth transition from the initial empirically based modes of intervention to a more statistically sound approach. Such an evolution must be conducted very carefully in order to be successful, so that statistical tools become well understood and are used only to the extent that they are believed to provide additional value to process engineers, reinforcing their knowledge and giving statistically sound support to their decisions. Therefore, hybrid knowledge-based and statistically based approaches were developed, which help to understand, on a statistical basis, how such a complex process is being monitored and controlled. Through a partnership between CIRES (a PVC plant located in Portugal) and a university-based research group (the GEPSI-PSE Group), we have built a project team that so far has: 1) collected and statistically characterized all the relevant product quality features; 2) identified the dominant correlations amongst product quality features; 3) identified key critical product quality dimensions by Principal Components Analysis; 4) translated current process control practices into a set of statistically based criteria; 5) used statistical thinking as the basis for a new approach to supporting process control actions, combining process knowledge with adapted versions of both univariate and multivariate statistical process control. In our presentation we will describe the different steps of this ongoing project, discussing not only the technical details but also some of the key issues associated with the management of a very complex process with many unknown sources of variability, where the strict application of standard techniques does not by itself provide a complete solution to the problems that have to be addressed daily in the plant.



89. Shift warping function estimation and application to road traffic forecasting
Authors: Elie Maza (Laboratoire de Statistique et Probabilités - Société TrafficFirst) and Jean-Michel Loubes
Keywords: classification, warping functions, curve registration
Format: presentation (Statistical modelling)
Contact: Elie.Maza@math.ups-tlse.fr

The purpose of the general study is road traffic forecasting on highway networks. This work is supported by the TrafficFirst company (http://www.traffic-first.com) and its manager Christophe Communay. More precisely, the purpose is short-term forecasting of travel time on the Parisian highway network. Here, road traffic is described by the velocities of the vehicles, so we aim at estimating road traffic at all the points of the observation grid. This observation grid is composed of all the measurement stations located on the road network. The methodology is based on a classification method. The Parisian road network infrastructure comprises measurement stations located approximately every 500 meters on the road network. These stations measure the evolution of road traffic by calculating, for every fixed time period throughout the day, the mean velocity of the vehicles crossing them. These speed curves can be modelled as continuous functions. A classification method is then used to gather the speed curves into a reduced number of clusters. The aim of this study is to calculate, for each cluster, the best representative profile. Indeed, because of shifts in user behaviour, the pointwise average curve is not representative enough. We therefore model the speed functions of each cluster by a Functional Shift Model and use the Fourier transform to estimate the shift parameters of each curve. A structural average can then be calculated (see for example Functional Data Analysis, Ramsay J.O. and Silverman B.W., Springer Series in Statistics, 1997). This structural average is more representative than the simple average curve.
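
The alignment idea can be illustrated with a small sketch: estimate each curve's shift relative to a reference by circular cross-correlation (a simple FFT-based stand-in for the Fourier-domain shift estimation described above), then average the aligned curves. The toy daily speed profiles below are invented for illustration.

    # Minimal sketch: align shifted speed curves and compute a structural average.
    import numpy as np

    def estimate_shift(curve, reference):
        # Circular cross-correlation via FFT; returns the shift (in samples)
        # that best aligns `curve` with `reference`.
        xcorr = np.fft.ifft(np.fft.fft(reference) * np.conj(np.fft.fft(curve))).real
        lag = int(np.argmax(xcorr))
        return lag if lag <= len(curve) // 2 else lag - len(curve)

    def structural_average(curves):
        reference = curves[0]
        aligned = [np.roll(c, estimate_shift(c, reference)) for c in curves]
        return np.mean(aligned, axis=0)

    # Toy example: daily speed profiles shifted in time by user behaviour.
    t = np.linspace(0, 2 * np.pi, 288)                       # 5-minute grid over a day
    curves = [np.sin(t + s) for s in (0.0, 0.3, -0.4, 0.2)]  # shifted versions
    profile = structural_average(curves)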



90. Analysis of customer satisfaction data in the building sector
Authors: YENIS MARISEL GONZÀLEZ MORA (UNIVERSITY OF LA LAGUNA) and María Mercedes Suárez Rancel
Keywords: quality control.
Format: poster (Business and economics)
Contact: ygonmor@gobiernodecanarias.org

Many business sectors try to establish quality management systems based on the ISO standards. We develop an analysis of the questionnaires collected in an apartment complex in the south of Tenerife. This analysis is carried out within the quality system.



102. Why it can be rational not to immediately adopt a new process technology. Evidence from NC, CNC and Microprocessors
Author: Giuliana Battisti (Aston Business School)
Keywords: Technology transfer, NC, CNC, Microprocessors, Sample selection, truncation, censoring
Format: presentation (Statistical modelling)
Contact: g.battisti@aston.ac.uk

The process of technology transfer is a key step in the realization of benefits from technological change, as it allows the firm to reduce costs and remain competitive. However, very often, despite a new technology being readily available on the market, it can take decades before the majority of the firms within an industry adopt it (see Karshenas and Stoneman 1995, etc.). Even slower is the replacement of the old technology with the new one within a firm (intra-firm diffusion). The existing literature on intra-firm diffusion is quite scarce, and even more so the availability of data. This paper, using a rare dataset on the adoption pattern of three process technologies (NC, CNC, microprocessors) for a sample of UK engineering and manufacturing firms, investigates the main determinants of the delay in adopting the process technology and of the heterogeneity of use within and across firms. Using Battisti's model (2000 and 2003), it disentangles the impact of learning, skills, firm characteristics, profitability considerations, price expectations, etc. upon the firm's decision to invest further in a new technology. By means of sophisticated statistical tools that account for truncation, censoring and sample selection, it identifies the main constraints and the main determinants of the speed of technology adoption and technology replacement at firm level for the three technologies in the sample. The empirical analysis, incorporating a modified Heckman two-stage procedure, shows that the adoption pattern is firm and technology specific. However, it does not reject the hypothesis that different user costs are a driving force for two out of the three technologies in the sample. A number of firm characteristics are also isolated as of special importance, including firm size and the use of complementary technologies and managerial techniques. There is little support for epidemic learning effects. These results have important policy implications for technology transfer. References: Battisti G. (2000) 'The Intra-firm Diffusion of New Process Technologies', PhD thesis, Warwick University. Battisti G. (2003) 'Modelling the Current Outcome of an Irreversible Choice Made Sometime in the Past. An Application to the Conditional Decision to Further Invest in a New Technology', Proceedings of the American Statistical Association, Joint Statistical Meetings, New York, Aug 2002. Battisti G. and P. Stoneman (forthcoming) 'Inter and Intra Firm Effects in the Diffusion of New Process Technologies', Research Policy. Karshenas M. and P. Stoneman (1995), 'Technological Diffusion', in P. Stoneman (ed.) Handbook of the Economics of Innovation and Technological Change, Blackwell, Cambridge.



92. Service Quality: Some Tools for Control
Authors: Massimiliano Giacalone and Letizia La Tona (University of Messina, Italy)
Keywords: service quality control, perceived quality, expected quality.
Format: presentation (Process modelling and control)
Contact: maxgiac@virgilio.it

This article presents some methods for measuring and controlling service quality. It focuses on the effects of data processing and interpretation on service quality monitoring. In the study of service quality we can distinguish two kinds of services: "experience goods" and "search goods". The former includes services like education, assistance, culture, health, etc., whose quality is defined by the customer's assessment and cannot be verified before the service is supplied, because personal characteristics interfere with the result of the process. The latter includes services like transport, telephone, etc., whose quality can be verified before the service is supplied; it is the same for all customers as it does not depend on their personal characteristics. The quality assessment of the service is important, each within its own specific competences and activities, for three subjects: a) the users of the service, b) the units that provide the service, c) the corporations or organizations, if any, that are institutionally competent to evaluate the productivity of the units. Quality control is carried out on the whole process: it starts from the evaluation of the "input" and ends with the evaluation of the "outcome". The efficacy of the service is related to several factors concerning the resources and managerial abilities employed by the units, the characteristics of the context in which the unit operates and, for the "experience goods", the users' personal abilities, economic status, etc., because of their interaction with the process. Subjects b) and c) may be interested in relative efficacy, that is, in the assessment of the same process as supplied by various units; this can be obtained according to the following criteria: by comparing the process between units, or by comparing the process with an a priori standard. The performance of units that produce services may be analysed by several multivariate methods: descriptive methods or analyses of the dependency among variables. Indices are also informative summaries of the actions or situations to be assessed. They are a synthesis of the process and have to satisfy some fundamental requirements, in particular to measure all the aspects of the activity to be assessed (Stiefel, 1997) and to be codified so as to attribute objectivity, or to reduce subjectivity (Wilson, 1992). The result obtained by the units can be described by a variety of indices according to the following elements: a) characteristics of the customers who receive the service; b) characteristics of the context that may influence the process; c) the amount of resources of the units. The operations involved in defining service quality are connected to the evaluation and measurement of three concepts: expected quality, perceived quality and produced quality. Some preliminary results obtained from an application are discussed.



93. A Robust Graphical Test for Binormality
Authors: Oystein Evandt (ImPro) and Shirley Coleman (Industrial Statistics Research Unit, University of Newcastle upon Tyne, England); Harald E. Goldstein (Institute of Economics, University of Oslo, Norway); Maria F. Ramalhoto (Mathematics Department, Technical University of Lisbon, Portugal)
Keywords: Binormal Distribution, Outliers, Robust Graphical Test for Binormality, Conditional Prediction
Format: presentation (Statistical modelling)
Contact: oystein.evandt@c2i.net

Many bivariate statistical methods are helpful for understanding business and industrial data. For example, we may be presented with two sets of measurements on the same items and want to examine their relationship. Most methods are based on the assumption that the data come from a binormal distribution. Real datasets often contain some outliers. Outliers may arise from unclear procedures for production tasks or measurement, operators not following those procedures, failures in production or measurement equipment, the wrong type of raw material, faulty raw material, registration errors, etc. There is therefore a need for a straightforward test of binormality which is robust against outliers. It can be shown that even if both marginal distributions of a bivariate distribution are normal, the bivariate distribution need not be binormal. This paper presents a graphical method, based on probability plotting, for assessing whether it is reasonably realistic to assume that a bivariate dataset stems from an approximately binormal distribution. The method is robust against a moderate number of outliers. A particularly important application is regression where both variables are random, but one is easier to measure than the other. With appropriate assumptions, the measure that is easier to obtain can be used to predict the measure that is more difficult to obtain. The relationship with testing for normality of residuals in regression is discussed. The robust graphical (Robug) test is illustrated using datasets encountered in our practical work.
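
For comparison, one standard (non-robust) graphical check of bivariate normality plots the ordered squared Mahalanobis distances against chi-square(2) quantiles. The sketch below shows this baseline on simulated data; it is not the robust "Robug" procedure of the paper.

    # Minimal sketch: chi-square Q-Q check of bivariate normality (non-robust baseline).
    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2)
    data = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=200)

    centred = data - data.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(data, rowvar=False))
    d2 = np.sort(np.einsum('ij,jk,ik->i', centred, inv_cov, centred))
    probs = (np.arange(1, len(d2) + 1) - 0.5) / len(d2)
    quantiles = stats.chi2.ppf(probs, df=2)

    plt.plot(quantiles, d2, 'o')
    plt.plot(quantiles, quantiles)        # reference line
    plt.xlabel('chi-square(2) quantiles')
    plt.ylabel('ordered squared Mahalanobis distances')
    plt.show()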



94. IT Investment Performance in Japan
Authors: Kazumi WADA and Yasuo KADONO (Graduate School of Business Science, University of Tsukuba, Tokyo / Management Science Institute Inc.); Hiroe TSUBAKI (Graduate School of Business Science, University of Tsukuba, Tokyo)
Keywords: IT investment , New Economy, IT productivity paradox
Format: presentation (Business and economics)
Contact: wkazumi@attglobal.net

Management Science Institute Inc. and the University of Tsukuba jointly conducted a series of enterprise surveys focusing on the relation between business management and information technology in the autumn of 2002. About 1000 questionnaires were sent to firms with more than 1000 employees in the commercial industry in Japan, and another 1000 questionnaires to firms with 50 to 999 employees. The survey data of the 494 responding firms were then linked with their latest financial statements. This paper suggests measuring the performance of a firm by a micro production function analysis of these data. The authors set out the following two hypotheses: (a) the sales of a commercial firm can be significantly affected by its number of employees, IT investment and non-IT investment; and (b) the residuals, i.e. the differences between the sales estimated from those three independent variables and the actual sales, can be used to measure each firm's performance. In order to verify the first hypothesis they fit a translog production function model with interaction terms between the number of employees and each of the two other independent variables. They also examine whether groups of firms divided by the following factors differ in their residuals, to illustrate the effectiveness of the proposed residual analysis: industry, PC deployment, IT investment satisfaction, number of ranks within the firm, and introduction of an ERP package.
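
For reference, a simplified translog-type specification with the interaction terms mentioned above could be written as follows (the authors' exact model may include further terms, such as squared log terms):

    \ln Y = \beta_0 + \beta_1 \ln L + \beta_2 \ln K_{IT} + \beta_3 \ln K_{nonIT}
            + \beta_4 (\ln L)(\ln K_{IT}) + \beta_5 (\ln L)(\ln K_{nonIT}) + \varepsilon,

where Y is sales, L the number of employees, and K_{IT}, K_{nonIT} the IT and non-IT investment; the estimated residual is then read as a measure of the firm's performance.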



96. D-optimal Two-level Factorial Designs for the Logistic Regression Model with two Design Variables
Authors: Roberto Dorta-Guerra (University of La Laguna) and Gonzalez-Davila, E (University of La Laguna); Ginebra, J. (Universitat Politecnica de Catalunya)
Keywords: Logistic Regression; Factorial Design; D-Optimality;
Format: presentation (Design of experiments)
Contact: rodorta@ull.es

When the response of an experiment is binary (defective or non-defective), it is most natural to analyze its results through logistic regression models. The optimal designs for those models are more difficult to identify than for the linear model, in part because their information matrix depends on the unknown regression parameters. There is some work on the characterization of D-optimal designs (maximizing the determinant of the information matrix) when the experimental region is bounded, but they are often difficult to compute and therefore not easy to implement in practice. Instead, standard two-level factorial designs are much simpler to implement and analyze, and are familiar to most quality improvement practitioners, because they are optimal in many linear model settings and because of their usefulness at the early screening stages. For two design variables, the determinant of the information matrix of two-level factorial designs under logistic regression models is computed as a function of the parameters of the model, of the location and range of the design, and of the total number of trials at each experimental condition. That allows one to identify the D-optimal two-level factorial design in each circumstance, and its efficiency relative to the (absolute) D-optimal designs proposed in the design of experiments literature. The relative efficiency of factorial designs with center points is also explored.
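
The information matrix in question is M = sum_i n_i p_i (1 - p_i) x_i x_i', with p_i = 1 / (1 + exp(-x_i' beta)). The sketch below evaluates its determinant for a 2^2 factorial; the parameter values and the allocation of trials are illustrative placeholders.

    # Minimal sketch: determinant of the logistic-regression information matrix
    # for a 2^2 factorial design (illustrative parameter values and allocation).
    import numpy as np

    def info_det(design, n_trials, beta):
        X = np.column_stack([np.ones(len(design)), design])   # intercept + x1 + x2
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = n_trials * p * (1 - p)                             # binomial weights
        M = (X * w[:, None]).T @ X                             # sum_i w_i x_i x_i'
        return np.linalg.det(M)

    design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])    # two-level factorial
    beta = np.array([0.5, 1.0, -0.8])                          # assumed parameter values
    print(info_det(design, n_trials=np.full(4, 25), beta=beta))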



97. Root Cause Analysis of Paper Machine Breaks using Mahalanobis-Taguchi System
Authors: Paulo Penim (SGIE) and Paulo Marques-Penim
Keywords: SGIE
Format: presentation (Six Sigma and quality improvement)
Contact: ppenim@mail.telepac.pt

The performance of paper production processes is greatly influenced by the absence of Paper Machine (PM) breaks: low yields and poor productivity are the result of high break levels on PMs. This paper describes the use of the Mahalanobis-Taguchi System (MTS) for root cause analysis of PM breaks at a major Portuguese pulp and paper producer. MTS is a data analysis methodology developed by Genichi Taguchi that allows diagnosis and forecasting using multivariate data. In MTS a Reference Group (RG) is defined (PM without breaks) and the Mahalanobis distance is used to evaluate the degree of abnormality (PM with breaks) of observations outside the RG. In the next step, orthogonal arrays were used to find the characteristics necessary to explain machine breaks. The traditional approach had been the most common problem-solving techniques (the 7 tools) combined with poor data analysis. MTS proved to be of great help in directing team efforts towards root cause finding and in reducing the incidence of the problem.
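
The first MTS step can be sketched as follows: standardize with the Reference Group statistics and compute scaled Mahalanobis distances for new observations (dividing by the number of variables, as is conventional in MTS). The data below are placeholders, not the mill's process variables.

    # Minimal sketch of the MTS first step: Mahalanobis distances to the Reference Group.
    import numpy as np

    def mahalanobis_to_reference(reference, observations):
        mean = reference.mean(axis=0)
        std = reference.std(axis=0, ddof=1)
        z_ref = (reference - mean) / std
        corr_inv = np.linalg.inv(np.corrcoef(z_ref, rowvar=False))
        z_obs = (observations - mean) / std
        k = reference.shape[1]
        return np.einsum('ij,jk,ik->i', z_obs, corr_inv, z_obs) / k

    rng = np.random.default_rng(3)
    reference = rng.normal(size=(100, 5))          # periods without breaks (placeholder)
    abnormal = rng.normal(loc=2.0, size=(10, 5))   # periods with breaks (placeholder)
    print(mahalanobis_to_reference(reference, abnormal))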



98. A note on the use of signal-to-noise ratio to reduce variation
Author: Anthony Cossari (Dipartimento di Economia e Statistica - Università della Calabria)
Keywords: Parameter design, signal-to-noise ratio, variation reduction
Format: poster (Design of experiments)
Contact: a.cossari@unical.it

Robust parameter design, introduced by Taguchi, is used for identifying factor settings that reduce variation. One of its central ideas is the use of the signal-to-noise (SN) ratio as a measure of variability. Extensive research has shown that its use is appropriate only for the special case in which the standard deviation of the response variable is proportional to the mean. Moreover, in a transformation approach, the analysis could be conducted more simply using the standard deviation of the logged data as a measure of variability. For other kinds of dependence between the standard deviation and the mean, including no dependence, transformations other than the log can be suggested by the data, validating the standard normal model. The analysis of dispersion can then be made using the standard deviation, or preferably the logged standard deviation, of the transformed data as a measure of variability; the SN ratio ought not to be used in these situations. In this work, through a simulation study, I show that universal use of the SN ratio can be associated with a loss of information in the inferential process. For normally distributed data, with no transformation required, I suppose that interest lies in a test on the variance and, alternatively, in a test on the coefficient of variation (CV), which is essentially equivalent to the SN ratio. The simulation experiment is designed to obtain the power curves associated with the two alternative tests. They suggest that the efficiency associated with the use of the CV can be quite low.
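
For reference, the nominal-the-best SN ratio discussed here is commonly written (in one standard Taguchi form) as

    SN = 10 \log_{10}\!\left(\frac{\bar{y}^2}{s^2}\right) = -20 \log_{10}(CV),

so maximizing the SN ratio amounts to minimizing the sample coefficient of variation CV = s / \bar{y}; this is why a test on the CV is essentially equivalent to an analysis based on the SN ratio.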



99. An optimal allocation of observations for detecting a convexity pattern
Authors: Chihiro Hirotsu (Meisei University) and Masaru Ushijima (Foundation for Cancer Research)
Keywords: Dose-response analysis, Optimal allocation, Shape constraints
Format: presentation (Design of experiments)
Contact: hirotsu@ge.meisei-u.ac.jp

There exists a large literature concerning statistical inference on ordered parameters. Only a few papers, however, deal with the optimal allocation of observations; these include Hirotsu & Herzberg (1987) and Hirotsu (2002) for a monotone hypothesis. In the present paper we introduce a maximin design which maximizes the minimum power for a convexity hypothesis within a class of linear tests, and compare its power with the balanced design and related tests. We assume a simple one-way layout model $y_{ij} = \mu_i + e_{ij}$ and consider testing the convexity hypothesis that the second-order differences $\mu_{i-1} - 2\mu_i + \mu_{i+1}$ are nonnegative at all interior treatment levels. For the balanced design, Hirotsu & Marumo (2002) recently proposed a max t test and compared its power with the maximin linear test. If an unbalanced design is allowed, those powers can be improved further. The optimum design allocates one quarter of the observations to each end and one half of the observations to the centre of the treatment levels. However, such a design cannot be used in practice since no information would be available for the other treatment levels. We therefore consider a maximin design which allocates as many observations as possible to the other levels while keeping the minimum power maximal. An interesting finding is that the least favourable case for the maximin design changes with the number of treatments. Finally, the power of the related maximin linear test is compared with that of the balanced case. References: Hirotsu, C. (2002). J. Statist. Planning and Inference 106, 205-213. Hirotsu, C. and Herzberg, A. M. (1987). Australian J. Statistics 29, 151-165. Hirotsu, C. and Marumo, K. (2002). Scandinavian J. Statist. 29, 125-138.



100. A hierarchical model for estimating human intake of heavy metals through clams
Author: Dhaif AlMutairi (Kuwait University)
Keywords: Hierarchical model, Mixture distribution, Clams, Marine biology
Format: presentation (Statistical modelling)
Contact: mutairid@yahoo.com

A statistical methodology to estimate human intake of heavy metals through clams is presented in two stages. In stage one, we suggest the use of a probability distribution that has a discrete component and an absolutely continuous component as an appropriate model for the accumulation levels of heavy metals in any particular tissue of the clam for any period, and we apply the method of maximum likelihood to estimate the parameters of this distribution. The probability distribution of human intake of heavy metals is then obtained in stage two as a convolution of a modified form of the stage-one probability distribution. The usefulness of our methodology is illustrated on a real set of marine biology data.



105. Interpretation of signals from runs rules in Shewhart control charts
Author: Albert Trip (IBIS UvA)
Keywords: Control charts; runs rules
Format: presentation (Process modelling and control)
Contact: atrip@saralee-de.com

Runs rules are useful supplements to Shewhart control charts for early detection of out-of-control processes. A practical advantage over the often more powerful cumulative sum or exponentially weighted moving average charts is that runs rules are easily understood by shop floor personnel, and they are therefore used in practice. But runs rules have an additional advantage: they may facilitate problem solving when the process is out of control. The reason is that different failure causes have different effects on the parameter in the control chart: some causes increase the variation, others lead to a shift of the mean, and some affect both mean and variation. In a system with several runs rules, some will be more powerful against a certain effect than others. The average run length (ARL) behaviour tells us which rule is the most powerful in a particular situation. These two sources of information are combined in a Bayesian-like approach to improve the Out-of-Control Action Plan. A case study illustrates the method.
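
As an illustration of how different rules respond to different effects, the sketch below flags two classical runs-rule signals (Western Electric style) on standardized chart values. The specific rules and interpretations used in the paper may differ.

    # Minimal sketch: flag two classical runs-rule signals on standardized values
    # z = (x - centre line) / sigma.
    import numpy as np

    def runs_rule_signals(z):
        z = np.asarray(z, dtype=float)
        signals = []
        for i in range(len(z)):
            # Rule A: 9 consecutive points on the same side of the centre line
            # (often associated with a shift of the mean).
            if i >= 8 and (np.all(z[i-8:i+1] > 0) or np.all(z[i-8:i+1] < 0)):
                signals.append((i, "9 on one side: possible mean shift"))
            # Rule B: 2 of 3 consecutive points beyond 2 sigma on the same side.
            if i >= 2:
                window = z[i-2:i+1]
                if np.sum(window > 2) >= 2 or np.sum(window < -2) >= 2:
                    signals.append((i, "2 of 3 beyond 2 sigma"))
        return signals

    rng = np.random.default_rng(4)
    z = np.concatenate([rng.normal(size=30), rng.normal(loc=1.5, size=20)])
    print(runs_rule_signals(z))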



106. Functional Signature Analysis for Product End-of-Life Management
Authors: Alessandro Di Bucchianico (EURANDOM) and H.P. Wynn (EURANDOM and London School of Economics, U.K.)
Keywords: reliability, signature analysis
Format: presentation (Reliability and safety)
Contact: a.d.bucchianico@tue.nl

This paper presents functional signature analysis for the end-of-life management of products, particularly for electrical and electronics applications. The signature analysis method is described, and the strategies for carrying out the signature analysis are discussed. We will discuss a case study which involves digital copiers and highlight the statistical issues.



107. Failure Amplification Method (FAMe): An Information Maximization Approach to Categorical Response Optimization
Authors: Jeff Wu (U of Michigan) and Roshan Joseph, Georgia Tech
Keywords: robust design, accelerated testing, operating window
Format: presentation (Design of experiments)
Contact: jeffwu@umich.edu

Categorical data arise quite often in industrial experiments because of an expensive or inadequate measurement system for obtaining continuous data. When the failure probability/defect rate is small, experiments with categorical data provide little information regarding the effect of the factors of interest and are generally not useful for product/process optimization. We propose an engineering-statistical framework for categorical response optimization that overcomes the inherent problems associated with categorical data. The basic idea is to select a factor that has a known effect on the response and use it to amplify the failure probability so as to maximize the information in the experiment. New modeling and optimization methods are developed. FAMe generalizes and supersedes the "operating window method" (originally proposed by Don Clausing), as it provides more information than the OW method. It is illustrated with two real experiments. The paper is available at http://www.stat.lsa.umich.edu/~jeffwu/publication/publication.html



108. Kernel Based Algorithms for Industrial Applications
Author: Daniel J.L. Herrmann (Robert Bosch GmbH)
Keywords: Kernels classification
Format: presentation (Statistical modelling)
Contact: daniel.herrmann@de.bosch.com

Industrial application of kernel-based methods like support vector machines has become very successful in recent years. The advantage of this type of learning algorithm is that it yields a convex optimization problem, and its generalization ability can be measured by the capacity of the function class which the learning algorithm can implement. This is a young and very active academic research field, and there are still some important open questions. The purpose of this work is to explain recent results presented by the author at the NIPS Conference 2002, and ongoing research, to a more application-oriented audience. The success of SVMs can be attributed to the joint use of a robust classification procedure (the large margin hyperplane) and of a convenient and versatile way of (nonlinear) preprocessing of the data (kernels). It turns out that with such a decomposition of the learning process into preprocessing and linear classification, the performance depends highly on the preprocessing and much less on the linear classification algorithm used (e.g. the kernel perceptron has been shown to have performance comparable to an SVM with the same kernel). It is thus of high importance to have a criterion for choosing the suitable kernel for a given problem. Ideally, this choice should be dictated by the data itself and the kernel should be 'learned' from the data. The simplest way of doing so is to choose a parametric family of kernels (such as polynomial or Gaussian) and to choose the values of the parameters by cross-validation. However, this approach is clearly limited to a small number of parameters and requires the use of extra data. We propose to use gradient-based procedures for optimizing the kernel coefficients. We explain in this work how to implement this idea and give theoretical bounds on the corresponding generalization error for different classes of kernels. Recent applications to geometric optimization problems in micromechanical sensor chips will be discussed.



109. A Bayesian multi-fractal model with application to analysis and simulation of disk usage
Authors: Fabrizio Ruggeri (CNR IMATI) and Bruno Sanso' (University of California at Santa Cruz, USA)
Keywords: wavelets, disk usage, performance of storage systems
Format: presentation (Statistical modelling)
Contact: fabrizio@mi.imati.cnr.it

Evaluating the performance of storage systems is a key aspect of the design and implementation of computers with heavy I/O workloads. Due to the difficulties and expense involved in obtaining actual measurements of disk usage, disk performance simulators are usually fed with synthetic traces. As in many areas of computing and telecommunications, the time processes of disk usage exhibit dependencies that span long ranges. In addition to the slow decay of the autocorrelation function, the series have a bursty behaviour that usually cannot be captured by commonly used time series methods. Also, when the number of packets, as opposed to inter-arrival times, is the variable of interest, a distribution with a point mass at zero has to be considered, since there is positive probability that no activity is observed during a given unit of time. Multiplicative cascade models have been considered in the literature as a way of capturing the bursty behaviour of the series. Such models have multi-fractal properties, providing a rich structure that is able to capture the behaviour of series of disk usage. In this paper we present a Bayesian multi-scale modelling framework consisting of a multiplicative cascade based on Haar wavelet transforms. We analyse data recorded by the Department of Storage and Content of HP Laboratories. We use the data to estimate the parameters of the model and provide predictive distributions from which to draw simulations that mimic the actual data. We also consider a Bayesian estimator of the multi-fractal spectrum of the original data.
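
The multiplicative cascade idea can be illustrated by recursively splitting a total load on a dyadic grid with random multipliers. The sketch below is a generic beta-multiplier cascade for intuition only, not the Bayesian Haar-wavelet model of the paper.

    # Minimal sketch: simulate a dyadic multiplicative cascade. At each level the
    # mass of an interval is split between its two children by a random multiplier.
    import numpy as np

    def simulate_cascade(levels, total=1.0, a=2.0, rng=None):
        rng = rng or np.random.default_rng()
        mass = np.array([total])
        for _ in range(levels):
            m = rng.beta(a, a, size=len(mass))        # symmetric random split in (0, 1)
            mass = np.column_stack([mass * m, mass * (1 - m)]).ravel()
        return mass                                   # 2**levels bursty values

    trace = simulate_cascade(levels=10, total=1e6)
    print(trace.size, trace.sum())                    # 1024 values, total mass conserved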



110. Looking back at a multinomial process to determine a change in the probabilities
Authors: alex riba (Dep. Statistics Universitat Politecnica de Catalunya) and Josep Ginebra (Universitat Politècnica de Catalunya)
Keywords: Change point, Multinomial Distribution, Generalized Linear Models, Chi-square distance, Gamma distribution.
Format: presentation (Process modelling and control)
Contact: alex.riba@upc.es

When the output of a process can be classified into l mutually exclusive and exhaustive categories, it can be modelled in terms of a sequence of multinomial distributions. Examples are sales in different markets or of different products, or rates of failures or nonconformities classified by severity. When the goal is to detect changes in the proportion of observations in each category, one can monitor the process by using one p-control chart for each category. As an alternative, one can set up a single chi-square chart that summarizes the evolution of the whole process through the chi-square distance between each multinomial observation and the overall mean. The latter is particularly useful as a management tool since a single number provides an overall picture of the process. Sometimes, at some point in the sequence, there is a shift in the process and the parameters of the multinomial distribution change suddenly. We propose locating this shift by looking for the change point in the multinomial sequence. That can be done by fitting all the gamma regression models with a step change at stage r to the sequence of chi-squared statistics, and searching for the change point that best fits the data. By representing the deviance of all these models as a function of r, one obtains the likelihood function for r. We illustrate the use of this methodology on the identification of a boundary of style in Tirant lo Blanc, a medieval book in Catalan, by classifying words according to their length and by monitoring the use of the 25 most common context-free words.
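
The two steps can be sketched as follows: compute the chi-square distance of each multinomial observation from the overall mean proportions, then scan candidate change points r by fitting a gamma regression with a step at r and comparing deviances. The data, the log link and the statsmodels implementation are illustrative choices; the authors' exact model may differ.

    # Minimal sketch: chi-square chart statistic and a change-point deviance scan.
    import numpy as np
    import statsmodels.api as sm

    def chi_square_distances(counts):
        counts = np.asarray(counts, dtype=float)         # shape (T, l)
        p_hat = counts.sum(axis=0) / counts.sum()        # overall mean proportions
        expected = counts.sum(axis=1, keepdims=True) * p_hat
        return ((counts - expected) ** 2 / expected).sum(axis=1)

    def change_point_scan(x2):
        T = len(x2)
        deviances = {}
        for r in range(2, T - 1):                        # candidate change points
            step = (np.arange(T) >= r).astype(float)
            X = sm.add_constant(step)
            fit = sm.GLM(x2, X, family=sm.families.Gamma(sm.families.links.Log())).fit()
            deviances[r] = fit.deviance
        return min(deviances, key=deviances.get), deviances

    rng = np.random.default_rng(5)
    counts = np.vstack([rng.multinomial(200, [0.5, 0.3, 0.2], size=30),
                        rng.multinomial(200, [0.35, 0.35, 0.3], size=30)])
    x2 = chi_square_distances(counts)
    r_hat, _ = change_point_scan(x2)
    print("estimated change point:", r_hat)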



111. A practical Data Mining solution in marketing
Author: Andrea Ahlemeyer-Stubbe (Ahlemeyer-Stubbe, Business Intelligence, Marketing & More)
Keywords: Data Mining, Database Marketing, CRM
Format: presentation (Data mining)
Contact: ahlemeyer@ahlemeyer-stubbe.de

The first part of the talk gives a short introduction to Database Marketing (DBM) and Customer Relationship Management (CRM); I point out how they relate to Data Mining and how Data Mining results influence the success of DBM and CRM. The main part of the talk is based on a real Database Marketing problem. Following the Data Mining process, I show how a Data Mining project is handled in practice and which problems have to be solved during the process to obtain a useful result for the marketing people.



112. Comparison of alternative modelling strategies for data from a large experiment.
Authors: Lourdes Rodero (Technical University of Catalonia (UPC).) and Josep Ginebra
Keywords:
Format: poster (Statistical modelling)
Contact: lourdes.rodero@upc.es

In almost all statistical problems faced in practice, the function relating the response variable to the explanatory variables is unknown, and one needs to use empirical models that approximate it locally. In that setting, it is well recognized that statistical models are never true, but some models are more useful than others. In a very large experiment carried out in southern Catalonia, data were gathered in order to compare five attractant substances to be used to capture female fruit flies through mass trapping. This prevents the flies from becoming a plague without the need to use insecticides. Several modelling strategies are compared, starting from the simplest possible ones. Even though almost all the models used are clearly wrong, the conclusions reached from many of them agree with the most sophisticated analysis, showing that one does not need to tie down every tiny statistical detail in order to reach the right conclusions from an experiment.



113. OPTIMISING CLINICAL TRIALS DESIGN USING SIMULATION: Retrospective validation
Authors: Ismail Abbas (Department of Statistics and Operations Research, Technical University of Catalunya) and Joan Romeu (Hospital Universitari Germans Trias i Pujol), Erik Cobo (Department of Statistics and Operations Research, UPC), Josep Casanovas (Department of Statistics and Operations Research, UPC), Toni Monleon, Jordi Ocaña (Department of statistics, UB)
Keywords: Simulation models, model validation, mixed models, variability analysis, clinical trials
Format: presentation (Statistical modelling)
Contact: esmail@fib.upc.es

INTRODUCTION: When building a simulation model to optimize the design of a clinical trial, it is important to choose the model that best reproduces the real outputs of the clinical trial with similar variability. Choosing the wrong model leads to wrong conclusions about the trial. The aim of this work is to construct a simulation model for selecting and calibrating a statistical model that correctly reproduces the real data of a clinical trial, based on a 10% level of variability. METHOD: We collect data from an AIDS clinical trial that compares the effects of two different treatments on the change in cholesterol levels in patients with lipodystrophy. We construct a dynamic simulation model based on two different statistical models that are candidates to predict the results of the trial. These models are: 1) a regression model and 2) a mixed model. RESULTS: Based on the predefined level of variability, we found that the first model reproduces the real data at only one output point, whereas the second model reproduces the data at four output points independently. CONCLUSION: Our simulation model is a very good tool for finding the model that best reflects the real data. The second model, based on time-constant variability and constant correlation, showed a better fit. However, the simulation model can be refined to gain more credibility by reducing the variability discrepancies to 1%.



114. Experimentation Order in Factorial Designs
Authors: Guillermo de León (Universitat Politècnica de Catalunya) and Grima, Pere & Tort-Martorell, Xavier
Keywords: Randomization; experimentation order; factorial design; minimum number of level changes; bias protection
Format: presentation (Design of experiments)
Contact: guillermo.de.leon@upc.es

It is normally taken for granted that the order of execution of a factorial design should be random. The aim of randomization is to protect the estimates from the possible influence of variables that are unknown and therefore uncontrolled. Randomization attempts to distribute the influence of these uncontrolled variables among the factors being studied so that they affect the final conclusions as little as possible. However, to what extent does randomization protect against these risks? In addition, randomization can induce a large number of changes in factor levels and thus make experimentation expensive and difficult. This subject has already been studied by several authors, but the results have not been incorporated, or at least not in a formal manner, into the texts and working rules that are usually considered correct in the planning of experimental designs. This paper revisits the subject, demonstrates its importance, and gives tables that, under some hypotheses, we believe may be useful in choosing run orders that are easy to carry out (with a minimum number of changes in the factor levels) without giving up the objectives that randomization aims to achieve.
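
The quantity that such tables trade off against randomization protection is easy to compute for any proposed run order; a minimal sketch (the 2^3 example and the comparison are illustrative only):

    # Minimal sketch: count the factor-level changes implied by a run order
    # of a two-level factorial design in -1/+1 coding.
    import itertools
    import numpy as np

    def level_changes(design):
        # design: runs x factors matrix, rows in execution order
        return int(np.sum(design[1:] != design[:-1]))

    full_2_3 = np.array(list(itertools.product([-1, 1], repeat=3)))   # 2^3 design
    standard_order = full_2_3
    rng = np.random.default_rng(6)
    random_order = full_2_3[rng.permutation(len(full_2_3))]
    print(level_changes(standard_order), level_changes(random_order))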



115. Mean versus Deviation graph in the Analysis of Robust Parameter Design Experiments
Authors: Guillermo de León (Universitat Politècnica de Catalunya) and Grima, Pere
Keywords: Design of experiments; quality improvement; Mean Deviation graph; Robust product; variation reduction.
Format: presentation (Six Sigma and quality improvement)
Contact: guillermo.de.leon@upc.es

A sound engineering practice for improving quality and productivity is to design quality into products and processes. The ideas of G. Taguchi about parameter design were introduced some years ago. Despite strong controversy about some of their aspects, they play a vital role in the concept of robustness in the design of industrial products and processes. In this paper, we present a new methodology for designing products and processes that are robust to variation in environmental or internal variables. First, a tentative model for the response as a function of the design and noise factors is assumed. This model is then estimated using a single design matrix, and the expected value and the variance of the response are calculated over the space of the design factors. Finally, the best setting of the parameter values can be located in a newly developed bivariate plot in which the mean of the response is plotted against the variance. We illustrate the implementation possibilities and, finally, show its flexibility in taking factor costs into account.



116. Gaussian Modelling of Non-Gaussian Time Series
Authors: Dimitris Kugiumtzis (Aristotle University of Thessaloniki) and Efthimia Bora-Senta
Keywords: time series, non-Gaussian, nonlinearity, surrogate data test
Format: presentation (Statistical modelling)
Contact: dkugiu@gen.auth.gr

A framework that allows the use of Gaussian linear analysis on non-Gaussian time series is proposed. The idea is first to approximate the transform that renders the marginal distribution Gaussian, and from this transform to determine the autocorrelation of the Gaussian time series as a function of that of the original one. The approximation of the transform is chosen to be piecewise polynomial, and the moments of the truncated normal distribution are used to determine the relationship between the autocorrelations. The derived Gaussian time series has the property that, through the inverse transform, it possesses the same linear correlations and marginal distribution as the original time series. Thus standard linear analysis and modelling can be performed on this Gaussian time series, and the results of the analysis, passed through the inverse transform, can yield the original non-Gaussian time series. This approach is particularly useful for the surrogate data test for nonlinearity, which relies heavily on the generation of proper surrogate time series that possess the linear correlations and marginal distribution of a given time series. The importance of this approach both for linear modelling of time series and for the surrogate data test for nonlinearity will be illustrated with some real-world time series from finance and physiology.
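
The marginal Gaussianizing step can be illustrated with the empirical (rank-based) transform and its inverse; this is a crude stand-in for the piecewise-polynomial approximation developed in the paper, and the series below is simulated.

    # Minimal sketch: empirical marginal Gaussianization and its inverse.
    import numpy as np
    from scipy import stats

    def to_gaussian(x):
        ranks = stats.rankdata(x)
        return stats.norm.ppf((ranks - 0.5) / len(x))

    def from_gaussian(z, original):
        # Map Gaussian values back through the empirical quantiles of the original series
        u = stats.norm.cdf(z)
        return np.quantile(original, u)

    rng = np.random.default_rng(7)
    x = np.exp(rng.normal(size=1000).cumsum() * 0.05)   # a skewed, non-Gaussian series
    z = to_gaussian(x)                                  # Gaussian marginal, same ranks
    x_back = from_gaussian(z, x)                        # recovers the original marginal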



118. Optimization of a Brake Prototype as Consequence of a Successful DOE Training
Authors: Lluis Marco (Technical University of Catalonia (UPC)) and Juan Cuadrado (Robert Bosch Braking Systems); Xavier Tort-Martorell (Technical University of Catalonia, UPC)
Keywords: Factorial designs, training by doing, outliers
Format: presentation (Statistical consulting)
Contact: lluis.marco@upc.es

The purpose of this presentation is to explain our experience with a company that produces car brakes. Engineers in the company carry out research, implement changes to improve brake performance, and conduct tests to check whether the brakes fulfil standards. It is a company that can profit from extensive use of applied statistics, but up to this point it had been making only limited use of it. The engineers were aware of this and wanted to deepen their statistical skills. We organised a course with an overview of topics that both they and we thought would be useful (basic tools for improvement, process capability, reliability), but with a special focus on design of experiments. Those attending the course were highly motivated and received our explanations always bearing in mind that they would apply them. As part of the course we used both a simulator (wheels case, pro-ENBIS EC contract number G6RT-CT-2001-05059) and a real experiment to link practical action with the theoretical lessons. They chose to experiment with a new prototype they had been working on for the last few weeks, though not in such an organised way as they had just learnt. Before conducting the experiment, we spent time discussing and writing down all their prior knowledge. A 2^(5-1) factorial design was conducted. When analysing the results, we suspected that one result was anomalous. In order to confirm that suspicion some runs were repeated, and these were sufficient to draw the final conclusions. This first experience with DOE was successful and gave the engineers the confidence to use design of experiments on a regular basis.



119. A response surface experiment to investigate the effect of process factors on the attrition of drug-on-core layered beads.
Authors: Debbie Kraus (Pfizer Global R&D) and Kevin Hughes
Keywords: Central composite design, attrition, drug delivery
Format: poster (Design of experiments)
Contact: debbie_kraus@sandwich.pfizer.com

A method of drug delivery used in the pharmaceutical industry is to spray drug substance, as an aqueous solution/suspension, onto sugar sphere cores (typically 0.5-0.6 mm diameter). The release rate of the drug-on-core (DOC) layered beads can be modified by subsequent film-coating. The major benefits of the approach are formulation and release profile flexibility. The beads are normally encapsulated and swallowed by the patient. The process of DOC layering involves fluidizing and circulating the beads in a chamber whilst the drug solution/suspension is sprayed. During processing, beads are susceptible to attrition, especially from the high velocity spray, which is undesirable because potency can be reduced. A face-centred central composite design with a replicated centrepoint was used to investigate the effects of four factors during fluidization: atomizing air pressure, moisture content of beads, batch mass and fluidization time. The key responses measured were attrition and loss of potency. For both responses the factors having the largest effect were batch mass and atomizing air pressure. An experimental region was found in which both attrition and loss of potency were deemed to be acceptable.
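
The structure of the design used here, a face-centred central composite design (alpha = 1) for four factors with replicated centre points, can be built in coded units as follows; the number of centre points below is illustrative.

    # Minimal sketch: face-centred central composite design for k factors in coded units.
    import itertools
    import numpy as np

    def face_centred_ccd(k, n_centre=2):
        factorial = np.array(list(itertools.product([-1, 1], repeat=k)))
        axial = np.vstack([v * np.eye(k)[i] for i in range(k) for v in (-1, 1)])
        centre = np.zeros((n_centre, k))
        return np.vstack([factorial, axial, centre])

    design = face_centred_ccd(4)      # 16 factorial + 8 axial + 2 centre = 26 runs
    print(design.shape)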



121. A little known Robust Estimator of the Correlation Coefficient in the Bivariate Normal Distribution
Authors: Oystein Evandt (ImPro) and Shirley Coleman (Industrial Statistics Research Unit, University of Newcastle upon Tyne, England)
Keywords: Bivariate Normal Distribution, Outliers, Robust Estimator of the Correlation Coefficient, Alternatives to Pearson's r
Format: presentation (Statistical modelling)
Contact: oystein.evandt@c2i.net

The "usual" empirical correlation coefficient, Pearson's r, has good optimality properties as an estimator of the distribution correlation coefficient rho in the bivariate normal distribution, provided outliers are not present. But outliers often influence r so much that it leads to very bad estimates of rho. The most frequently used alternatives to r in the presence of outliers are probably Spearman's rank correlation coefficient and Kendall's tau. However, both of these correlation coefficients have the drawback that they do not estimate the bivariate distribution correlation coefficient rho, neither in the case of binormality nor for other distributions. Furthermore, Spearman's coefficient has been found not to be very robust against outliers, even if it is more robust than Pearson's r. Kendall's tau, however, is robust against outliers. In addition, in the case of binormality there exists a robust estimator of rho based on Kendall's tau which has good properties. This estimator seems to be little known among statisticians and statistical practitioners, even though it is described in Maurice Kendall's book "Rank Correlation Methods" from 1975. In the opinion of the authors, this estimator deserves more attention than it seems to receive, and wider use. This paper describes the estimator and some of its properties, some presented in theoretical terms and some as results of simulation studies.
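
Under binormality the classical relation between Kendall's tau and rho is rho = sin(pi * tau / 2), which yields a tau-based estimator of rho (presumably the estimator referred to above; the paper should be consulted for its exact form). A minimal sketch comparing it with Pearson's r on data containing a few gross outliers:

    # Minimal sketch: tau-based estimate of the binormal correlation vs Pearson's r.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    data = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=200)
    data[:5] = [[8, -8]] * 5                      # inject a few gross outliers

    x, y = data[:, 0], data[:, 1]
    tau, _ = stats.kendalltau(x, y)
    rho_tau = np.sin(np.pi * tau / 2)             # tau-based estimator of rho
    rho_pearson, _ = stats.pearsonr(x, y)
    print(rho_tau, rho_pearson)                   # the tau-based estimate resists the outliers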



122. Experimental study of CMM probe performance according to ISO 10360-2 standard
Authors: Biagio Palumbo (Università di Napoli) and Caiazzo F. (Università di Salerno), Coppola G. (Università di Salerno), Sergi V. (Università di Salerno)
Keywords: Coordinate measuring machines, CMM probe performance, Design of experiments
Format: presentation (Design of experiments)
Contact: biagio.palumbo@unina.it

National and international standards propose several methods for checking CMM probe performance. In this work we consider the method proposed in the international standard ISO 10360-2. It recommends measuring 25 uniformly spaced points, in a random order, over a hemisphere of a certified reference test ball having a diameter between 10 and 50 mm. Using all 25 measurements, a best-fit (least squares) sphere is computed and the set of radii from the best-fit centre to each probing point is calculated. The range of all 25 radii is taken as an indicator of CMM probe performance. In this standard there are several factors that are not explicitly defined, such as the sampling method, the probe configuration and orientation, and the measurement speed. We call these the "free factors". The aim of this work is to investigate how the "free factors" can influence CMM probe performance. The experimental study is carried out using a systematic approach based on Design of Experiments (DOE), and the results are evaluated using statistical analysis. Many papers are available in the literature about experimental studies of CMM probe performance. The specific contribution of this work is to propose a systematic strategy of experimentation and to use a touch-trigger probe that is not of the kinematic type but is based on electronic strain gauge technology. This type of probe ensures that the trigger event occurs after a constant pre-travel, regardless of probing direction, largely eliminating the lobing effect of kinematic probes. The experimental tests are carried out on a DEA Brown & Sharpe Mistral in the Metrology Laboratory of the Department of Mechanical Engineering at the University of Salerno.
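
Once the 25 probed points are available, the ISO 10360-2 indicator itself is simple to compute: fit a least-squares sphere, take the radii from the fitted centre, and report their range. The sketch below uses simulated points on a 25 mm ball with small probing noise; the noise level and initial guess are placeholders.

    # Minimal sketch: the ISO 10360-2 probe-performance indicator (range of radii
    # about a least-squares sphere fit) on simulated probed points.
    import numpy as np
    from scipy.optimize import least_squares

    def probe_indicator(points):
        def residuals(params):
            centre, radius = params[:3], params[3]
            return np.linalg.norm(points - centre, axis=1) - radius
        guess = np.r_[points.mean(axis=0), 12.5]          # crude initial centre/radius
        fit = least_squares(residuals, guess)
        radii = np.linalg.norm(points - fit.x[:3], axis=1)
        return radii.max() - radii.min()                  # range of the 25 radii

    rng = np.random.default_rng(9)
    theta = rng.uniform(0, np.pi / 2, 25)                 # upper hemisphere
    phi = rng.uniform(0, 2 * np.pi, 25)
    r = 12.5 + rng.normal(scale=0.001, size=25)           # ~1 micron probing noise
    points = np.column_stack([r * np.sin(theta) * np.cos(phi),
                              r * np.sin(theta) * np.sin(phi),
                              r * np.cos(theta)])
    print(probe_indicator(points))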



126. AN APPLICATION OF STRUCTURAL EQUATION MODELING (SEM) TO THE MEASUREMENT OF CUSTOMER SATISFACTION
Authors: Mónica Martínez Gómez (Polytechnic University of Valencia) and Jose Miguel Carot; Pau Miró i Martínez; José Jabaloyes Vivas
Keywords: Structural Equation Modeling; Confirmatory Factor Analysis; Customer Satisfaction
Format: presentation (Statistical consulting)
Contact: momargo@eio.upv.es

Orienting processes towards users in order to satisfy their needs has become an essential means for enterprises to remain competitive in the current market system. To be effective, changes to the processes used in service delivery should be based on a deep knowledge of users' needs and expectations. However, knowing the level of satisfaction that a service produces in its users is not easy. Nowadays the evaluation of users' perceptions of an organization (general image, products and services, loyalty to the enterprise, etc.) is usually carried out from the analysis of survey results. In this sense, the customer opinion questionnaire is the main instrument for measuring customer satisfaction, and it is essential to make sure that it is used correctly. This paper shows how Structural Equation Modeling (SEM) can be applied to assess the suitability of the questionnaire. The objective of the study was to develop an efficient methodology to verify the suitability of the questionnaire used in a service for its intended purposes. With the service attributes initially identified, a confirmatory factor analysis was performed to determine the underlying structure of the data. First, in an exploratory study, the number of factors (appraisal dimensions) was determined. The results of the study were the basis for implementing improvement actions in the service.