ENBIS-16 in Sheffield

11 – 15 September 2016; Sheffield
Abstract submission: 20 March – 4 July 2016

My abstracts

 

The following abstracts have been accepted for this event:

  • Using DOE to Reduce Variation in Processes and Product Design

    Authors: Bryan Dodson (SKF Group Six Sigma), Rene Klerx (SKF Group Six Sigma), Olivier Joubert (SKF Group Six Sigma)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Six Sigma
    Keywords: Design of Experiments, Robust design, Taguchi method, Variation reduction, Sensitivity analysis
    Submitted at 29-Apr-2016 13:35 by Rene Klerx
    Accepted
    12-Sep-2016 11:30 Using DOE to Reduce Variation in Processes and Product Design
    The topic of robust design is frequently attributed to Genichi Taguchi, but the concept of exploiting non-linearity in systems to find settings whose outputs are insensitive to input variation has been considered since the beginning of mathematics. Alternatives to the Taguchi method include variance estimators based on moment functions, high-dimensional model representations (HDMR), and the Fourier amplitude sensitivity test (FAST). This presentation will compare variance estimators based on moment functions to the Taguchi method in terms of the accuracy of the results and the efficiency with respect to the number of experimental trials. The variance estimators will be applied to second-order polynomials obtained from DOE results.
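    As a minimal illustration of the moment-function approach (a sketch, not the presenters' code), the snippet below propagates input variation through a hypothetical fitted second-order polynomial using a first-order (delta-method) variance estimator, then scans candidate settings for the one that minimises the estimated output variance. All coefficients and noise levels are made up.

        import numpy as np

        # Hypothetical second-order polynomial fitted from DOE results
        # (coefficients invented for illustration):
        # y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
        b0, b1, b2, b11, b22, b12 = 10.0, 2.0, -1.5, 0.8, 0.3, -0.6

        def var_y(m1, m2, s1, s2):
            """First-order (delta-method) variance of y when the inputs
            vary independently around means (m1, m2) with std devs (s1, s2)."""
            d1 = b1 + 2*b11*m1 + b12*m2   # dy/dx1 at the mean
            d2 = b2 + 2*b22*m2 + b12*m1   # dy/dx2 at the mean
            return d1**2 * s1**2 + d2**2 * s2**2

        # Robust design: choose the setting of x1 that minimises Var(y)
        # for fixed input noise (s1 = s2 = 0.1) and x2 centred at 0.
        candidates = np.linspace(-2, 2, 81)
        best = min(candidates, key=lambda m1: var_y(m1, 0.0, 0.1, 0.1))
        print(f"most robust x1 setting: {best:.2f}, "
              f"Var(y) = {var_y(best, 0.0, 0.1, 0.1):.4f}")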
  • Data Mining for Social and Economic Benefits - An Example from an Assistive Technology Company

    Authors: Sophie Whitfield (Newcastle University), Shirley Coleman (Newcastle University), Joanna Berry (Durham University)
    Primary area of focus / application: Mining
    Secondary area of focus / application: Business
    Keywords: Data interrogation, Interactive visualisation, Business models, Monetisation
    Submitted at 29-Apr-2016 15:14 by Sophie Whitfield
    Accepted
    13-Sep-2016 09:20 Data Mining for Social and Economic Benefits - An Example from an Assistive Technology Company
    As people live longer, businesses and research communities are looking for ways to increase the period of self-sufficiency which people enjoy, and ways to improve their quality of life.

    The rapid expansion of technologies to capture and store information in the 21st century gives companies enormous amounts of data at their disposal. Increasingly, companies are aware that their data can provide them with business advantages: their company data can be monetised and gaps in the market can be uncovered. Newcastle and Durham Universities are working in a Knowledge Transfer Partnership (KTP) with a small to medium enterprise (SME) called ADL Smartcare Limited which, over the last 13+ years, has developed an expert system that elicits individualised information about a person’s capabilities, physical environment and Activity of Daily Living (ADL) needs, and offers bespoke solutions which satisfy all the conditions of suitability and safety.

    Data mining techniques, such as market basket analysis and decision trees, are applied to quantitative and qualitative data from ADL Smartcare’s 70,000+ assessments to determine the best business models and financial structures to monetise the data.

    Business models are developed to allow interactive interrogation of the dataset: customised reports, bespoke data visualisation and fact sheets.

    The insight can inform decisions, encourage innovation and positively impact older people, helping them stay independent at home for longer, with considerable economic and social benefits.
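    As a rough illustration of the market basket idea (with invented records, not ADL Smartcare’s data), the sketch below counts support and confidence for rules of the form "item A is assessed together with item B":

        from itertools import combinations
        from collections import Counter

        # Hypothetical assessment records: items flagged in one assessment.
        assessments = [
            {"grab rail", "bath board", "raised toilet seat"},
            {"grab rail", "bath board"},
            {"perching stool", "grab rail"},
            {"bath board", "raised toilet seat"},
            {"grab rail", "bath board", "perching stool"},
        ]

        n = len(assessments)
        item_count = Counter(i for rec in assessments for i in rec)
        pair_count = Counter(p for rec in assessments
                             for p in combinations(sorted(rec), 2))

        for (a, b), c in pair_count.items():
            support = c / n
            if support >= 0.4:                  # minimum-support threshold
                conf_ab = c / item_count[a]     # confidence of rule a => b
                print(f"{a} => {b}: support={support:.2f}, conf={conf_ab:.2f}")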
  • Factor Screening in Non-Regular Designs with Restriction on Parameters

    Authors: Shahrukh Hussain (Department of Mathematics, Norwegian University of Science and Technology), John Tyssedal (Department of Mathematics, Norwegian University of Science and Technology)
    Primary area of focus / application: Design and analysis of experiments
    Secondary area of focus / application: Design and analysis of experiments
    Keywords: Non regular designs, Factor screening, Plackett-Burman designs, Restriction on parameters
    Submitted at 29-Apr-2016 15:26 by Shahrukh Hussain
    Accepted
    13-Sep-2016 10:10 Factor Screening in Non-Regular Designs with Restriction on Parameters
    Factor screening plays an important role in industrial experimentation, where we usually have a large number of factors and their interactions but only a small number of runs. The goal is to find the best subset of predictors that have a strong influence on the response of interest. In this paper we propose an efficient variable selection algorithm to screen active effects in non-regular designs by using their projection properties. The method is inspired by Tyssedal and Hussain (2016): on each projection model, a forward regression strategy with the change in the coefficient of determination as test statistic is conducted while restricting the number of terms allowed to enter the model, thereby gradually reducing the number of possible subsets of active factors. An empirical comparison with traditional forward selection approaches used on projection models is carried out. Our simulation study shows that the proposed method compares favourably with traditional forward selection approaches when such restrictions are imposed.
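    A minimal sketch of the general idea, not the authors' algorithm verbatim: forward selection on a projection of a 12-run Plackett-Burman design, using the increase in R^2 as the selection statistic and a cap on the number of terms allowed to enter (the "restriction on parameters"). The response and active effects are simulated.

        import numpy as np

        # 12-run Plackett-Burman design: cyclic shifts of the standard
        # generator row plus a final row of minus ones.
        row = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
        pb12 = np.vstack([np.roll(row, k) for k in range(11)] + [-np.ones(11)])

        # Simulated truth: factors 0 and 2 and their interaction are active.
        rng = np.random.default_rng(1)
        y = 3*pb12[:, 0] - 2*pb12[:, 2] + 1.5*pb12[:, 0]*pb12[:, 2] \
            + rng.normal(scale=0.5, size=12)

        proj = [0, 1, 2]                  # project onto three factors
        cols = {f"x{i}": pb12[:, i] for i in proj}
        cols.update({f"x{i}x{j}": pb12[:, i]*pb12[:, j]
                     for a, i in enumerate(proj) for j in proj[a+1:]})

        def r2(terms):
            """Coefficient of determination of the model with the given terms."""
            X = np.column_stack([np.ones(12)] + [cols[t] for t in terms])
            res = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
            return 1 - res @ res / ((y - y.mean()) @ (y - y.mean()))

        selected, max_terms = [], 3       # restriction on entering terms
        while len(selected) < max_terms:
            gains = {t: r2(selected + [t]) - r2(selected)
                     for t in cols if t not in selected}
            best = max(gains, key=gains.get)
            selected.append(best)
        print(selected, r2(selected))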
  • Approximate Uncertainty Modelling with Vine Copulas

    Authors: Kevin Wilson (Newcastle University), Tim Bedford (University of Strathclyde), Alireza Daneshkhah (Cranfield University)
    Primary area of focus / application: Modelling
    Secondary area of focus / application: Finance
    Keywords: Copula, Entropy, Information, Risk modelling, Vine
    Submitted at 29-Apr-2016 16:22 by Kevin Wilson
    Accepted
    13-Sep-2016 10:30 Approximate Uncertainty Modelling with Vine Copulas
    Many applications require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modelling joint uncertainties with probability distributions. This talk focuses on new methodologies for copulas, developing the work of Cooke, Bedford, Kurowicka and others on vines as a way of constructing higher-dimensional distributions which do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. We discuss a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. This result is operationalised by showing how minimum information copulas can be used to provide parametric classes of copulas that achieve such good levels of approximation. We extend previous approaches using vines by considering non-constant conditional dependencies, which are particularly relevant in financial risk modelling. We discuss how such models may be quantified, in terms of expert judgement or by fitting to data, and illustrate the approach by modelling two financial datasets and component lifetimes.
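    A minimal sketch of the vine construction in three dimensions, assuming Gaussian pair-copulas with constant conditional dependence (the talk's non-constant case would replace the fixed conditional correlation with a function of the conditioning variable); all correlation values are made up.

        import numpy as np
        from scipy.stats import norm

        def h(u, v, rho):
            """Conditional CDF C(u | v) of the bivariate Gaussian copula."""
            return norm.cdf((norm.ppf(u) - rho*norm.ppf(v)) / np.sqrt(1 - rho**2))

        def h_inv(w, v, rho):
            """Inverse of h in its first argument."""
            return norm.cdf(norm.ppf(w)*np.sqrt(1 - rho**2) + rho*norm.ppf(v))

        # D-vine on (1,2,3): pairs (1,2) and (2,3) in tree 1, (1,3 | 2) in tree 2.
        rho12, rho23, rho13_2 = 0.7, 0.5, 0.3
        rng = np.random.default_rng(0)
        w1, w2, w3 = rng.uniform(size=(3, 10_000))

        u1 = w1
        u2 = h_inv(w2, u1, rho12)                 # pair (1,2), tree 1
        t = h_inv(w3, h(u1, u2, rho12), rho13_2)  # pair (1,3 | 2), tree 2
        u3 = h_inv(t, u2, rho23)                  # pair (2,3), tree 1

        # Empirical correlations of the latent Gaussian scores.
        print(np.corrcoef(norm.ppf(np.vstack([u1, u2, u3]))))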
  • Ethical Pitfalls of Big Data

    Authors: Joanna Berry (Durham University Business School), Shirley Coleman (Newcastle University)
    Primary area of focus / application: Education & Thinking
    Secondary area of focus / application: Business
    Keywords: Profit, Business models, Internet-of-things, Truth, Trust
    Submitted at 29-Apr-2016 16:32 by Joanna Berry
    Accepted
    12-Sep-2016 10:00 Ethical Pitfalls of Big Data
    An internet minute sees millions of emails, hours of music and hundreds of thousands of uses of social media. The ‘internet of things’, within which every device is connected to every other device and every human being, steadily increases the network of networks which supports and informs all our lives.

    There are clearly opportunities for monetising insight derived from generated and wholly-owned company big data, but what are the challenges? Big data is a relatively new term applying to data acquired by increasingly sophisticated and easily available technologies.

    Big data has three fundamental elements: the velocity at which data can be gathered and processed (in real or near-real time, and in periodic or batch formats), the variety of data that can be collected (across all social media, video, audio, SMS/MMS and other applications), and the volume of data which can be not only gathered but also stored, from kilobytes to petabytes and beyond, through the intelligent application of cloud-based storage functionality.

    Two other issues are equally essential: the veracity of the data being interrogated and the value of the insight derived. These could also be described as the issues of truth and trust. The ability to monetise insight derived from big data will be an increasingly significant part of many companies’ business models. However, it is critical to ensure that the legal and ethical implications of making a profit from personal information are factored into business model development, and this paper will focus on these two issues.
  • Reassessment of Calibration and Measurement Capabilities based on Key Comparison Results

    Authors: Katsuhiro Shirono (National Institute of Advanced Industrial Science and Technology), Maurice Cox (National Physical Laboratory)
    Primary area of focus / application: Metrology & measurement systems analysis
    Keywords: Calibration and measurement capability (CMC), Key comparison (KC), Measurement uncertainty, Bayesian statistics
    Submitted at 29-Apr-2016 18:15 by Katsuhiro Shirono
    Accepted
    12-Sep-2016 11:30 Reassessment of Calibration and Measurement Capabilities based on Key Comparison Results
    The uncertainty of a calibration and measurement capability (CMC) is the expanded measurement uncertainty available to customers under normal conditions of measurement. We consider CMCs for which there is a supporting key comparison (KC), that is, where the KC and the CMCs relate to the same or closely similar measurands. When laboratories have performed unsatisfactorily in a KC, they may be required to set new values for their CMC uncertainties. These new values are set following a review process supervised by the appropriate working group in the relevant Consultative Committee of the CIPM. Although the review process supports the reassessment of the CMC uncertainties from a practical perspective, it is not generally carried out on a statistical basis.
    We consider statistical methods to determine appropriate values for CMC uncertainties in such cases. In this study, extensions of several existing methods for analysing KC data are applied for this purpose, and a new method employing Bayesian statistics is proposed. It is found that some of the methods provide CMC uncertainties that relate reasonably to the degrees of equivalence of the KC results. These methods could be acceptable for reassessing CMC uncertainties through a supporting KC.
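    A minimal sketch of the weighted-mean analysis that underlies such a reassessment (illustrative values, not real KC data): compute the key comparison reference value (KCRV), each laboratory's degree of equivalence, and an E_n-style consistency check at k = 2 that flags laboratories whose CMC uncertainties may need review.

        import numpy as np

        x = np.array([10.02, 9.98, 10.11, 9.95])   # lab results (hypothetical)
        u = np.array([0.03, 0.04, 0.03, 0.05])     # standard uncertainties

        w = 1 / u**2
        kcrv = np.sum(w * x) / np.sum(w)           # weighted-mean KCRV
        u_kcrv = np.sqrt(1 / np.sum(w))

        d = x - kcrv                               # degrees of equivalence
        u_d = np.sqrt(u**2 - u_kcrv**2)            # u(d_i): lab i contributes to KCRV
        en = np.abs(d) / (2 * u_d)                 # E_n > 1 flags a laboratory

        for i in range(len(x)):
            flag = "review CMC uncertainty" if en[i] > 1 else "ok"
            print(f"lab {i+1}: d={d[i]:+.3f}, U(d)={2*u_d[i]:.3f}, {flag}")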