Cost-effective screening experiments

Susan Lewis
University of Southampton, UK

Discovery and development in science and industry often involve investigating many features or factors that could affect the performance of a product or process. In factor screening, designed experiments are used to identify efficiently the few features that influence key properties of the system under study. A brief overview of this broad area will be presented, followed by discussion of a variety of methods, with particular emphasis on industrial screening. Ideas will be motivated and illustrated through examples, including a case study from the automotive industry.
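
As a concrete illustration of the kind of design used in factor screening (a hypothetical sketch, not taken from the talk), the following Python code constructs a saturated resolution III two-level fraction, the 2^(7-4) design, which screens seven factors in only eight runs; the generators D=AB, E=AC, F=BC, G=ABC are one standard choice.

    from itertools import product

    # Full 2^3 factorial in the base factors A, B, C (levels coded -1/+1).
    base = list(product([-1, 1], repeat=3))

    # Saturated resolution III 2^(7-4) fraction: each added factor is
    # aliased with an interaction of the base factors.
    design = []
    for a, b, c in base:
        d = a * b          # generator D = AB
        e = a * c          # generator E = AC
        f = b * c          # generator F = BC
        g = a * b * c      # generator G = ABC
        design.append((a, b, c, d, e, f, g))

    print("run  A  B  C  D  E  F  G")
    for i, row in enumerate(design, 1):
        print(f"{i:>3} " + " ".join(f"{x:>2}" for x in row))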

Split-plot Screening Designs with Minimal Numbers of Whole Plots

Bradley Jones
SAS Institute

Screening experiments typically have two levels for every factor. When one factor is extremely difficult to change, it is tempting for an operator to sort the randomized design by that factor so that it changes only once. In such a case, there is no way to test the effect of the whole-plot factor. Clearly this is undesirable from a statistical point of view. Nevertheless, it is useful to see what can be done in both the design and the analysis of such experiments. This talk explores several split-plot designs having only two whole plots and suggests a way to screen for whole-plot effects.
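
To make the situation concrete, the sketch below (illustrative Python, not from the talk) builds an eight-run design in which a hard-to-change factor A is set once per whole plot, so there are only two whole plots, while easy-to-change factors B and C are randomized within each whole plot. With a single setting change, the contrast for A is identical to the whole-plot contrast, which is exactly the statistical difficulty described above.

    import random
    from itertools import product

    random.seed(1)

    # Easy-to-change factors B and C: a full 2^2 factorial inside each whole plot.
    subplot_runs = list(product([-1, 1], repeat=2))

    design = []
    # Hard-to-change factor A is set only once per whole plot.
    for whole_plot, a in enumerate([-1, 1], start=1):
        runs = subplot_runs[:]
        random.shuffle(runs)          # randomization happens only within the whole plot
        for b, c in runs:
            design.append((whole_plot, a, b, c))

    print("WP   A  B  C")
    for wp, a, b, c in design:
        print(f"{wp:>2}  {a:>2} {b:>2} {c:>2}")

    # The contrast estimating A coincides with the whole-plot contrast,
    # so A is confounded with whole-plot-to-whole-plot variation.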

Use of the Statistical Method in Variation Reduction Projects

Stefan Steiner
University of Waterloo

Industrial problem solving or variation reduction is best accomplished by following a process improvement system, such as DMAIC (Define, Measure, Analyze, Improve, Control) in Six Sigma. Applying any process improvement system requires a series of empirical investigations in which we collect data to learn more about the process. To conduct effective empirical investigations, we suggest following another systematic approach, which we call QPDAC (Question, Plan, Data, Analysis and Conclusion) or the Statistical Method. In this talk we explore the important relationship and synergy between problem solving systems and the Statistical Method. The main ideas are illustrated with examples and a virtual process game.

Case-Based Reasoning and the Statistical Challenges

Petra Perner
Institute of Computer Vision and Applied Computer Sciences, IBaI

www.ibai-institut.de

Case-based reasoning solves problems using already stored knowledge and captures new knowledge, making it immediately available for solving the next problem. It can therefore be seen both as a method for problem solving and as a method for capturing new experience and making it immediately available. It can also be seen as a learning and knowledge-discovery approach, since it can extract general knowledge from new experience, such as case classes, prototypes and higher-level concepts.
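
A minimal sketch of this cycle (illustrative Python; the case representation and the distance measure are assumptions for exposition, not the speaker's system) shows the retrieve, reuse and retain steps:

    # Each case pairs a feature vector with its known solution.
    case_base = [
        ((1.0, 0.0), "class A"),
        ((0.0, 1.0), "class B"),
    ]

    def distance(x, y):
        # Simple Euclidean distance; real systems use domain-specific similarity.
        return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

    def solve(problem):
        # Retrieve the most similar stored case and reuse its solution.
        nearest_features, solution = min(
            case_base, key=lambda case: distance(case[0], problem)
        )
        return solution

    def retain(problem, solution):
        # Retain the newly solved case so it is immediately available next time.
        case_base.append((problem, solution))

    new_problem = (0.9, 0.2)
    answer = solve(new_problem)
    retain(new_problem, answer)
    print(answer)  # -> class A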

The idea of case-based reasoning originally came from the cognitive science community, which observed that people tend to reason from previously solved cases rather than from general rules. The case-based reasoning community aims to develop computer models that follow this cognitive process. Computer models based on case-based reasoning have been developed successfully for many application areas, such as signal/image processing and interpretation tasks, help-desk applications, medical applications and e-commerce product-selling systems.

In this talk we will explain the case-based reasoning process scheme and show what kinds of methods are needed to provide all the functions of such a computer model. We will build the bridge between case-based reasoning and statistics, with examples drawn from signal-interpretation applications. Finally, we will present recent developments and outline further work.

DoE in Engine Development

Karsten Röke
IAV GmbH

Stricter legal emission limits and rising customer expectations lead to a growing number of controllable engine components and thus to greater engine-control complexity. For engine development, this means considerably more time and effort to find the optimal combination of all selectable parameters.

This trend can be observed for gasoline as well as diesel engines. At the same time, the development time from first idea to the introduction of a new production engine has become ever shorter, and costs have to be reduced.

Since the number of measuring points required for complete operational-test measurements rises exponentially with the number of input variables, full factorial measurements are clearly no longer feasible. The method of Design of Experiments (DoE) is therefore widely accepted as a suitable tool in the automotive sector and its supplier industry. Likewise, the method is applied broadly at IAV, from advanced development through to production engine applications. Having successfully passed its test and experimental phase, DoE is now backed by extensive knowledge of its practical everyday use. While DoE is applied mainly to steady-state problems, recent research shows great potential for optimizing transient engine behavior as well.
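
The exponential growth is easy to quantify: a full factorial over k input variables, each measured at m levels, needs m**k operating points. A short sketch (illustrative Python; the figures are generic, not IAV data):

    # Measuring points for a full factorial: m levels for each of k inputs.
    for k in (4, 6, 8, 10):
        for m in (3, 5):
            print(f"{k:>2} inputs at {m} levels: {m**k:>9,} points")

    # e.g. 10 inputs at 5 levels already require 9,765,625 measurements,
    # while a DoE plan typically covers such a space with a few hundred runs.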

This presentation will give an overview of the use of statistical methods (mainly Design of Experiments) in production engine calibration. "Engine calibration" is the term for finding the optimal settings of the engine control unit: optimal in terms of minimal emissions, minimal fuel consumption, good drivability and other brand-specific goals.
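
In essence, DoE-based calibration fits empirical models to a modest number of test-bench points and then optimizes over the models rather than over the engine itself. A minimal single-input sketch (illustrative Python; the measurements, the quadratic models and the NOx limit are invented for illustration):

    import numpy as np

    # Hypothetical test-bench measurements at a few DoE points:
    # spark timing (deg) vs. fuel consumption (g/kWh) and NOx (g/kWh).
    timing = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
    fuel   = np.array([260.0, 245.0, 238.0, 240.0, 252.0])
    nox    = np.array([2.0, 3.1, 4.5, 6.2, 8.4])

    # Fit quadratic response models to the measured points.
    fuel_model = np.poly1d(np.polyfit(timing, fuel, 2))
    nox_model  = np.poly1d(np.polyfit(timing, nox, 2))

    # Optimize over the models: minimize fuel subject to a NOx limit.
    grid = np.linspace(10.0, 30.0, 201)
    feasible = grid[nox_model(grid) <= 4.0]   # assumed emission limit
    best = feasible[np.argmin(fuel_model(feasible))]
    print(f"best timing: {best:.1f} deg, fuel: {fuel_model(best):.1f} g/kWh")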