Workshops during the conference

As part of the regular programme of the Newcastle conference, there will be a number of workshops. There is no additional fee for these workshops, but you must register in advance. Note: there are also pre- and post-conference workshops.




Research methods in practice: Customer and employee opinion surveys

Date: Wednesday afternoon (session 2a)
Trainers: Dr. Irena Ograjenšek, University of Ljubljana, Faculty of Economics, Slovenia,
and Lance Mitchell, Barclays Bank & Greenfield Research, UK

This mini workshop focuses on two issues of vital importance for management: (a) how to effectively listen to customers and (b) how to effectively listen to employees. Opinion surveys are a primary tool of effective listening in practice. The challenge of their development, application, and, most importantly, analysis of results, will be discussed using a set of real-life examples.

The examples will demonstrate good and bad approaches to collecting the data and highlight some of the problems encountered whilst using surveys to manage continuous quality improvement. Participants will get the chance to apply the lessons learnt in this workshop by designing survey questions to meet pre-set objectives.

The workshop is targeted at researchers and decision-makers from both the private and public sectors.
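As a small taste of the kind of analysis the workshop covers, the sketch below summarises hypothetical 5-point Likert-scale responses to a single survey question. The question, data and choice of the top-two-box metric are illustrative assumptions, not material from the workshop itself.

```python
# Hypothetical example: summarising 5-point Likert responses to one survey
# question ("I am satisfied with the service", 1 = strongly disagree,
# 5 = strongly agree).  All figures are invented for illustration.
from collections import Counter

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4]

def summarise(scores):
    """Return the mean score, the response distribution, and the share of
    favourable answers (4 or 5), a common 'top-two-box' survey metric."""
    n = len(scores)
    mean = sum(scores) / n
    dist = dict(sorted(Counter(scores).items()))
    top_two_box = sum(1 for s in scores if s >= 4) / n
    return mean, dist, top_two_box

mean, dist, ttb = summarise(responses)
print(f"mean={mean:.2f}, distribution={dist}, top-two-box={ttb:.0%}")
```

Even this simple summary already raises the workshop's central questions: whether the question wording, scale and sample actually capture what customers or employees mean.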


Date: Thursday, in three parts (sessions 3a, 4a and 5a)
Trainer: Chris McCollin

Part 1: Reliability Enhancement Methodology and Modelling
with Lesley Walls.

Workshop Goals:

  1. Awareness of processes being used for reliability enhancement in product development
  2. Insight into challenges and pitfalls of putting process into practice
  3. Understanding of how statistical modelling can be used to support engineering decisions about reliability

Topics Covered:
Principles of and need for REMM
REMM process - modelling infrastructure and linkages with engineering decisions
REMM model - assumptions, model formulation, data requirements, estimates
Applications - one or two cases showing how the model is used
Summary - including validation issues
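The REMM model itself is presented in the workshop. As generic background on how statistical modelling can support reliability decisions in product development, the sketch below fits a classical Crow-AMSAA (power-law NHPP) reliability growth model by maximum likelihood; the failure times are invented for illustration and REMM is not this model.

```python
# Fit a failure-truncated Crow-AMSAA model, N(t) = lambda * t**beta, to
# cumulative failure times by maximum likelihood.  beta < 1 indicates
# reliability growth (inter-failure times lengthening during development).
import math

def crow_amsaa_mle(times):
    """MLE for a failure-truncated Crow-AMSAA model.

    times: cumulative failure times, sorted ascending; the test is assumed
    to stop at the last failure.  Returns (lambda_hat, beta_hat).
    """
    n = len(times)
    T = times[-1]
    beta = n / sum(math.log(T / t) for t in times[:-1])
    lam = n / T ** beta
    return lam, beta

# Invented failure times (hours) with widening gaps, i.e. growth.
failure_times = [12, 35, 80, 160, 310, 520, 860, 1400]
lam, beta = crow_amsaa_mle(failure_times)
print(f"lambda={lam:.4f}, beta={beta:.3f}")
```

By construction the fitted model reproduces the observed total, N(T) = n, and the estimated beta tells the engineer whether the enhancement process is actually improving reliability.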

Part 2: Fault Tree Modelling Using Binary Decision Diagrams
with John Andrews

Workshop Goals:
The workshop will introduce and define the Binary Decision Diagram (BDD). It will then demonstrate the means by which the BDD can be constructed and used to evaluate fault trees as an efficient and accurate alternative to conventional methods (kinetic tree theory) developed in the 1960s. Qualitative fault tree analysis will be discussed as a means to obtain the system failure mode minimal cut sets. This will be extended to the quantitative methods used to obtain system failure probabilities and frequencies, and component importance measures.

What delegates will get from the workshop:
The workshop will show how Binary Decision Diagrams can be used to evaluate system failure mode probabilities and frequencies where the failure logic is expressed in the form of a fault tree. The qualitative and quantitative analysis approaches will be illustrated and compared with the conventional approaches which work directly with the fault tree structure. Why improvements in accuracy and efficiency are delivered by the BDD method will also be discussed.

Topics Covered:
Alternative system failure logic representations
Minimal cut set definition and evaluation methods for FTA and BDDs
Top event probability calculations
Top event frequency calculations
Importance measures
Potential problems when using BDDs
Application areas
Current research activity on BDDs

Binary Decision Diagrams (BDDs) provide a solution method for fault trees. The failure logic of a system can be concisely expressed as a fault tree. This method of analysis was first conceived in the 1960s and provides a good representation of the system from an engineering viewpoint. However, this form of the failure logic function does not lend itself to easy and accurate mathematical manipulation. A more convenient form of the logic function from the mathematical viewpoint is the Binary Decision Diagram.
With fault tree methods the analysis is performed in two stages, producing qualitative results (minimal cut sets) and quantitative results (top event failure probability or failure frequency). The same information can be generated from the Binary Decision Diagram, in which case there are certain advantages which can be exploited.
The BDD method is efficient: it does not require the evaluation of minimal cut sets as an intermediate stage for system quantification. It is also accurate: the system failure probability and frequency can be calculated without the need for approximations. The trade-off for these advantages is that the basic events in the fault tree have to be placed in an ordering before the BDD can be constructed. A good ordering gives a concise BDD; a bad ordering may lead to an explosion in the size of the BDD used to represent the fault tree.
Now that the BDD method has been developed, it offers many potential advantages when performing non-coherent fault tree analysis, phased-mission analysis or event tree analysis.
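As a minimal illustration of the idea (not the workshop's own software), the sketch below builds a BDD for the small fault tree TOP = a AND (b OR c) under the assumed ordering a < b < c, and evaluates the top event probability exactly by Shannon decomposition. The events and their probabilities are invented.

```python
# Minimal reduced BDD construction and exact top-event quantification for
# the fault tree TOP = a AND (b OR c), with variable ordering a < b < c.
# Nodes are (var, high, low) tuples; "1" and "0" are the terminal nodes.
ORDER = {"a": 0, "b": 1, "c": 2}
ZERO, ONE = "0", "1"

def ite(var, high, low):
    """Reduction rule: a node whose branches agree is redundant."""
    return (var, high, low) if high != low else high

def top_var(f, g):
    vs = [n[0] for n in (f, g) if isinstance(n, tuple)]
    return min(vs, key=ORDER.get)

def bdd_apply(op, f, g):
    """Combine two BDDs with a boolean operator via Shannon decomposition."""
    if f in (ZERO, ONE) and g in (ZERO, ONE):
        return ONE if op(f == ONE, g == ONE) else ZERO
    v = top_var(f, g)
    f1, f0 = (f[1], f[2]) if isinstance(f, tuple) and f[0] == v else (f, f)
    g1, g0 = (g[1], g[2]) if isinstance(g, tuple) and g[0] == v else (g, g)
    return ite(v, bdd_apply(op, f1, g1), bdd_apply(op, f0, g0))

def var(v):
    return (v, ONE, ZERO)

def prob(node, p):
    """Exact probability: P(f) = p_v * P(f|v=1) + (1 - p_v) * P(f|v=0)."""
    if node == ONE:
        return 1.0
    if node == ZERO:
        return 0.0
    v, hi, lo = node
    return p[v] * prob(hi, p) + (1 - p[v]) * prob(lo, p)

AND = lambda x, y: x and y
OR = lambda x, y: x or y

top = bdd_apply(AND, var("a"), bdd_apply(OR, var("b"), var("c")))
p = {"a": 0.01, "b": 0.05, "c": 0.02}  # invented basic-event probabilities
print(prob(top, p))  # exact, no rare-event approximation needed
```

Note how the probability traversal needs no minimal cut sets and no approximation; the price, as discussed above, is choosing a good variable ordering before construction.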

Part 3: Competing Risks
with Tim Bedford

Workshop Goal:
To introduce the participants to the concepts of independent and dependent competing risks in reliability problems and to practical modelling tools

What participants will get out of the workshop:
1) An understanding of what right-censored reliability data is and why it arises in practical situations
2) An understanding of the problem of unidentifiability and the need for clear model selection
3) Trying out different models on a dataset to assess the impact of censoring on reliability and maintenance estimation

Topics Covered:
Right censoring;
Kaplan-Meier estimator;
Copula-Graphical estimator;
Accelerated life modelling;
Random signs model;
LBL model;
Excel spreadsheet implementation to illustrate examples.

When maintenance logs are kept for specific items of equipment they will show a variety of reasons - amongst them critical failure - for taking the equipment off-line. In lifetime analysis such reasons are often called "competing risks" because they are, as it were, in competition with each other to be the cause of taking the equipment off-line. Unfortunately, since the different reasons effectively mask each other (once one has occurred we cannot tell when the next would have occurred), we have to make untestable modelling choices to interpret the data. This workshop looks at several different models that can be used to interpret the data and make predictions to support decision-making.
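As a small illustration of one topic on the list, the sketch below implements the Kaplan-Meier estimator on invented right-censored data, where each record is a (time, event) pair and a censoring (for example a preventive removal) masks the failure time it competes with.

```python
# Kaplan-Meier product-limit estimator on right-censored lifetimes.
# Each record is (time, event): event = 1 is an observed failure,
# event = 0 is a censoring (e.g. the competing risk occurred first).
def kaplan_meier(data):
    """Return [(t, S(t))] at each failure time, S = product of (1 - d/n)."""
    data = sorted(data)
    n_at_risk = len(data)
    surv, out, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for (u, e) in data if u == t and e == 1)  # failures at t
        m = sum(1 for (u, e) in data if u == t)             # all leaving at t
        if d > 0:
            surv *= 1 - d / n_at_risk
            out.append((t, surv))
        n_at_risk -= m
        i += m
    return out

# Invented data: failures at 2, 4, 5, 7; censorings at 3 and 6.
data = [(2, 1), (3, 0), (4, 1), (5, 1), (6, 0), (7, 1)]
for t, s in kaplan_meier(data):
    print(t, round(s, 4))
```

The estimator treats censoring as independent of failure; whether that assumption holds is exactly the unidentifiability issue the workshop addresses, and the alternative models on the list (copula-graphical, random signs, etc.) relax it in different ways.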

Advanced fields of statistical modelling

Date: Friday morning (session 7a)
Trainers: Rainer Göb, Antonio Pievatolo, Fabrizio Ruggeri.

The workshop intends to give an overview of two advanced fields of industrial practice: software quality and reliability, and load forecasting in the electricity market. Both fields are of prominent and rapidly growing economic, financial and technological interest.

The workshop has the following objectives:
1) Statisticians should gain insight into the real industrial problems and perspectives in the respective areas.
2) Relevant problems of statistical data analysis should be identified.
3) The potential of statistical methodology should be reviewed, and challenges for statistical research should be identified.

The workshop is organized in two sessions, one for each subject. Each session is planned to proceed in three steps. 1) An extended report from industrial practice. In software quality, the report will be given by Rix Groenboom from the company Parasoft. In load forecasting, the report will be provided by IMATI-CNR, in cooperation with CESI (Centro Elettrotecnico Sperimentale Italiano). 2) A survey of state-of-the-art contributions from the statistical literature to the respective areas. An excerpt of material will be provided to the participants beforehand. 3) Round-table discussion and brainstorming on statistical modelling: Are existing approaches helpful? Do they fit the needs of industrial practice? Which problems should be investigated, and which methods are necessary?
The workshop emphasizes problems and perspectives rather than ready-made solutions. By clarifying the proper requirements of industrial practice, we hope to provide orientation for research in statistical data analysis. The meeting should be a platform for the exchange of information and ideas between industry and academia, and a proper workshop rather than a mini conference with diverse presentations. Active cooperation of participants is essential for the success of the meeting.
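By way of background for the software-quality session (not material from the workshop itself), the sketch below fits the classical Goel-Okumoto software reliability growth model, m(t) = a(1 - exp(-bt)), to invented cumulative defect counts by least squares, using a grid search over b with a closed-form a for each b.

```python
# Fit the Goel-Okumoto mean value function m(t) = a * (1 - exp(-b * t))
# to cumulative defect counts.  For fixed b the optimal a is the usual
# least-squares projection; b is found by a simple grid search.
import math

def fit_goel_okumoto(ts, ys, b_grid):
    best = None
    for b in b_grid:
        f = [1 - math.exp(-b * t) for t in ts]
        a = sum(y * fi for y, fi in zip(ys, f)) / sum(fi * fi for fi in f)
        sse = sum((y - a * fi) ** 2 for y, fi in zip(ys, f))
        if best is None or sse < best[2]:
            best = (a, b, sse)
    return best

weeks = [1, 2, 3, 4, 5, 6, 7, 8]
defects = [12, 21, 28, 33, 37, 40, 42, 43]  # invented cumulative counts
a, b, sse = fit_goel_okumoto(weeks, defects, [i / 100 for i in range(1, 101)])
print(f"a={a:.1f} (expected total defects), b={b:.2f}, SSE={sse:.2f}")
```

The fitted asymptote a estimates the total number of defects eventually found, so a minus the current count is a simple handle on residual defect content; whether such models fit real testing data is precisely the kind of question the round-table step is meant to debate.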

Wild River DoE Workshop

Date: Friday morning (session 8c)
Trainers: Bernadette Govaerts, Anne De Frenne.

This hands-on workshop demonstrates statistical concepts with an experimental tool that has been used for four years to teach undergraduate students at UCL in Belgium. The problem consists of determining the design (height, slope angle, position of the exit quay) of a wild river attraction in an amusement park under two competing objectives: maximizing sensation and maximizing safety.
The experiment is carried out on a real 1/200th-scale prototype.

Topics covered in the workshop include: experimental variability and measurement variability; variance decomposition; the Normal distribution and QQ-plots; experimental design, repetitions and randomization; simple regression; multiple polynomial regression and model validation; prediction, optimization and prediction intervals. Participants will take away a practical tool for introducing their next course on experimental design, or an introductory course in statistics for experimenters.
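As a small taste of the design-of-experiments ideas listed above, the sketch below analyses a replicated 2^2 factorial on two coded factors of a hypothetical wild-river prototype (slide height and slope angle). The factor names and all responses are invented for illustration.

```python
# Replicated 2^2 factorial in coded units (-1/+1) on two hypothetical
# factors of the wild-river prototype: x1 = slide height, x2 = slope angle.
# With an orthogonal coded design, the least-squares coefficients of
# y = b0 + b1*x1 + b2*x2 + b12*x1*x2 reduce to simple signed averages.
runs = [  # (x1, x2, sensation score), each setting run twice (repetitions)
    (-1, -1, 4.1), (-1, -1, 4.3),
    (+1, -1, 6.0), (+1, -1, 6.4),
    (-1, +1, 5.1), (-1, +1, 4.9),
    (+1, +1, 7.8), (+1, +1, 8.2),
]

def effects(runs):
    n = len(runs)
    b0 = sum(y for _, _, y in runs) / n
    b1 = sum(x1 * y for x1, _, y in runs) / n
    b2 = sum(x2 * y for _, x2, y in runs) / n
    b12 = sum(x1 * x2 * y for x1, x2, y in runs) / n
    return b0, b1, b2, b12

b0, b1, b2, b12 = effects(runs)
# An "effect" (low-to-high change in the response) is twice the coefficient.
print(f"mean={b0:.2f}, height effect={2 * b1:.2f}, "
      f"angle effect={2 * b2:.2f}, interaction={2 * b12:.2f}")
```

The replicated runs at each setting are what let the workshop separate experimental variability from measurement variability before any effect is declared real.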