ENBIS-13 in Ankara

15 – 19 September 2013
Abstract submission: 5 February – 5 June 2013

My abstracts


The following abstracts have been accepted for this event:

  • Modeling of Response Surfaces with Replicated Measures by Using Fuzzy Least Squares Regression and Switching Fuzzy C-Regression

    Authors: Özlem Türkşen (Ankara University), Nevin Güler (Mugla Sıtkı Koçman University), Ayşen Apaydın (Ankara University)
    Primary area of focus / application: Other: Fuzzy
    Keywords: multi-response experiments, replicated measurement, Fuzzy Least Squares Regression (FLSR), Switching Fuzzy C-Regression (SFCR)
    Submitted at 29-May-2013 22:49 by Özlem Türkşen
    16-Sep-2013 15:45 Modeling of Response Surfaces With Replicated Measures by Using Fuzzy Least Squares Regression and Switching Fuzzy C-Regression
    Multi-response experiments are very common in real-world applications. The experimental design can be generated by using replicated measures of the responses. One of the main objectives in multi-response experiments is to estimate the unknown relationship between each response and the input variables. In general, classical regression analysis is used to model the responses. However, in many practical problems the assumptions of regression analysis cannot be satisfied. In such cases, alternative modeling techniques such as fuzzy approaches can be used. In this work, Fuzzy Least Squares Regression (FLSR) and Switching Fuzzy C-Regression (SFCR) are applied to a multi-response experimental data set with replicated measurements of the responses. The modeling errors are compared for both kinds of fuzzy regression models and the classical regression model. It is seen that the modeling error of SFCR is the smallest.
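As a rough illustration of the idea behind fuzzy least squares regression, the sketch below fits triangular fuzzy responses over crisp inputs. It is a minimal example under our own assumptions (Diamond-style squared distance, a single predictor, and the function names `ols` and `fuzzy_lsr` are ours), not the authors' implementation:

```python
def ols(xs, ys):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def fuzzy_lsr(xs, fuzzy_ys):
    """FLSR for crisp inputs and triangular fuzzy responses (left, center, right).
    Under a squared distance between triangular fuzzy numbers, the
    least-squares criterion separates into three ordinary OLS fits."""
    left = ols(xs, [y[0] for y in fuzzy_ys])
    center = ols(xs, [y[1] for y in fuzzy_ys])
    right = ols(xs, [y[2] for y in fuzzy_ys])
    return left, center, right
```

With exactly linear fuzzy data, each of the three component fits recovers its own line, which is the separability this simplified special case relies on.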
  • Improving Shewhart-type Control Charts for Monitoring Multivariate Gaussian Process Variability: A Unified View from Generalized Variance to Log-likelihood Ratio Statistics

    Authors: Emanuel Pimentel Barbosa (State University of Campinas - UNICAMP), Mario Antonio Gneri (State University of Campinas - UNICAMP), Ariane Meneguetti (State University of Campinas - UNICAMP)
    Primary area of focus / application: Process
    Keywords: Control Charts, Exact Control Limits, Generalized Variance, Log-Likelihood Ratio, Multivariate Process, Variability Monitoring
    Submitted at 29-May-2013 22:53 by Emanuel Pimentel Barbosa
    17-Sep-2013 18:10 Improving Shewhart-type Control Charts for Monitoring Multivariate Gaussian Process Variability: A Unified View from Generalized Variance to Log-likelihood Ratio Statistics
    The sample generalized variance |S| and the log-likelihood ratio (LLR) are the two best-known and most important statistics for monitoring the variability of multivariate Gaussian processes (expressed by the variance-covariance matrix Sigma), but it is a considerable challenge to obtain the exact sampling distributions of these statistics and their quantiles (control chart limits) for finite and small samples. In fact, the explicit expression for the LLR sampling distribution is practically intractable, as recognized in the recent literature.
    Although the quantiles of the |S| sampling distribution can be approximated well using mathematical tools such as the Cornish-Fisher (CF) expansion or Meijer G-functions (as we did in a previous paper), this statistic still has an important drawback: it does not detect all kinds of changes in the Sigma matrix, only those that alter |Sigma|.
    To overcome this drawback, a solution is proposed here, based initially on the inclusion of an auxiliary statistic/chart based on trace(S), to be used jointly with our previous |S| control chart with CF-corrected limits; the motivation for this is clear if we look at the likelihood. The quantiles of the trace(S) statistic (auxiliary chart limits), with S in a properly standardized form, are obtained by heavy simulation, and a practical table of upper quantiles is provided for different sample sizes n and process dimensions p.
    Also, in a second, alternative version of our proposed procedure, we consider these two statistics (|S| and trace(S)) packed together in just one statistic (chart): the LLR in properly standardized form, which gives not only a unified view of the problem but also a practical and efficient monitoring tool for Sigma.
    The LLR sampling quantiles (chart limits) are obtained in a similar way, by heavy simulation (millions of samples) using the Wishart generators of Matlab or R, and a table of quantiles (upper limits) is provided for different n and p and the usual false-alarm risks (alpha) of 0.0027 and 0.0020. To illustrate the two operating versions of our proposed procedure for improving the |S| and LLR control charts, a couple of examples with real data are provided.
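The simulation of chart limits described above can be sketched as follows. This is a minimal pure-Python illustration for the in-control standardized case (Sigma = I), restricted to p = 2 for the determinant; the function names, sample counts, and the small simulation size are our own choices, not the authors' procedure:

```python
import random

def wishart_stats(n, p, sims, seed=0):
    """Monte Carlo sampling distribution of |S| (p = 2 only) and trace(S)
    for samples of size n from a p-variate standard normal (Sigma = I)."""
    rng = random.Random(seed)
    dets, traces = [], []
    for _ in range(sims):
        # n observations from N(0, I_p)
        X = [[rng.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
        means = [sum(row[j] for row in X) / n for j in range(p)]
        # sample covariance matrix S (divisor n - 1)
        S = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in X) / (n - 1)
              for j in range(p)] for i in range(p)]
        traces.append(sum(S[i][i] for i in range(p)))
        if p == 2:
            dets.append(S[0][0] * S[1][1] - S[0][1] * S[1][0])
    return dets, traces

def upper_quantile(values, alpha):
    """Empirical upper alpha-quantile, i.e. a simulated upper control limit."""
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int((1.0 - alpha) * len(ordered)))]
```

For example, `upper_quantile(traces, 0.0027)` gives a simulated upper chart limit for trace(S) at the usual 0.0027 false-alarm risk; a production table would use millions of simulations rather than the few thousand shown here.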
  • Inverse Modeling to Estimate Methane Surface Emission with Optimization and Reduced Models: Application of Waste Landfill Plants

    Authors: Mireille Batton-Hubert (Ecole Nationale Supérieure des Mines), Mickael Binois (Ecole Nationale Supérieure des Mines), Espéran Padonou (Ecole Nationale Supérieure des Mines)
    Primary area of focus / application: Modelling
    Keywords: inverse approach, biogas emission, monitoring sampling, optimization, sensitivity analysis, least square regression
    Submitted at 30-May-2013 09:45 by Mireille Batton-Hubert
    18-Sep-2013 09:00 Inverse Modeling to Estimate Methane Surface Emission with Optimization and Reduced Models: Application of Waste Landfill Plants
    The context of this study is to develop a methodology to evaluate biogas surface emissions from the anaerobic fermentation process associated with waste landfill plants. Operators seek to valorize their emissions and to reduce leakage. The biogas is captured through a collection network installed in the core of non-hazardous solid waste sites, which operates under negative pressure to collect the biogas efficiently at its emission source. Solid waste is considered one of the principal sources of greenhouse gas emissions. Waste landfills are closely monitored to supervise the valorization of the biogas and to limit biogas emissions; however, it is not possible to sample the biogas emission directly with available sensors (at the source, the interface between ground and atmosphere). Evaluating the biogas emission is an important challenge for regulation and risk assessment.
    Consequently, we develop an inverse method to estimate the flow (g/m2/s) using the monitoring of CH4 air quality on the site. The inverse approach uses a direct model of atmospheric dispersion (between emission source and receptors) linked with an optimization approach to estimate the average emission flow. First, the method was validated on simulated results for different scenarios (constant and fluctuating emissions) before being applied to real data. A second phase was dedicated to evaluating the uncertainty of the results.
    This inverse approach is already up and running with fairly good results. However, the simulation study is quite time-consuming. As a consequence, further work is ongoing to:
    - Reduce the dimension of the problem by using the correlations between the simulator’s outputs
    - Model the simulator used in the inverse approach with a kriging surrogate built on a design of experiments
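The core inversion step described above can be sketched with a linear forward model: receptor concentrations c are approximately A q, where A holds transfer coefficients produced by the dispersion simulator and q is the vector of source fluxes. The example below (two sources, normal equations, all names our own) is an illustrative sketch, not the authors' code:

```python
def estimate_flux(A, c):
    """Least-squares estimate of two source fluxes q from receptor
    concentrations c, assuming a linear dispersion model c ≈ A q.
    A is an m x 2 matrix of transfer coefficients (one row per receptor)."""
    m = len(A)
    # normal equations (A^T A) q = A^T c, solved in closed form for 2 unknowns
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)]
           for i in range(2)]
    atc = [sum(A[k][i] * c[k] for k in range(m)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    q0 = (atc[0] * ata[1][1] - atc[1] * ata[0][1]) / det
    q1 = (atc[1] * ata[0][0] - atc[0] * ata[1][0]) / det
    return q0, q1
```

With noise-free synthetic concentrations the true fluxes are recovered exactly, which is the simulated-scenario validation step the abstract describes in miniature.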
  • Improving Hospital Billing Processes for Reducing Costs of Billing Errors

    Authors: Erdi Dasdemir (Hacettepe University, Department of Industrial Engineering), Murat Atalay (Hacettepe University, Department of Industrial Engineering), Macit Mete Oğuz (Hacettepe University, Department of Industrial Engineering), Volkan Bilgin (Hacettepe University, Department of Industrial Engineering), Murat Caner Testik (Hacettepe University, Department of Industrial Engineering), Guray Soydan (Hacettepe University Hospital)
    Primary area of focus / application: Other: Special Session: Healthcare Systems Engineering
    Keywords: Six Sigma, Process Improvement, Lean Hospital, Medical Billing Process, Billing Errors
    Submitted at 30-May-2013 11:23 by Erdi Dasdemir
    Accepted (view paper)
    17-Sep-2013 09:20 Improving Hospital Billing Processes for Reducing Costs of Billing Errors
    The hospital billing process is a crucial component of hospital management. Due to the complexity of hospital billing processes, billing errors may result in costly financial losses. In the Turkish social security system, the Social Security Institution (SGK) provides individuals with health insurance and is thus the most important financer of costs for hospitals. Furthermore, SGK has established a billing procedure for hospitals to finance the healthcare costs of individuals, and hospitals have to obey these rules to avoid SGK stoppages and fines. In the following, Hacettepe University Hospitals, where 95% of payments are obtained from SGK, are studied. Nevertheless, there are substantial financial losses from SGK because of errors occurring during the billing process. Here, the aim is to minimize Hacettepe University Hospitals' financial losses caused by billing errors. To realize this aim, the Lean Six Sigma framework and problem-solving methods of statistical quality control are used. The billing process of the hospital is studied first and critical points are determined. After meetings with hospital IT personnel and hospital administration, important data, including past billing errors, are retrieved. The main billing errors, their causes, and their financial costs are analyzed with statistical and graphical tools. To solve the problems and remove the errors, a workflow and standard operating procedures for the hospital billing process are prepared.
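A standard graphical tool in this kind of quality-control analysis is a Pareto breakdown of error costs. The sketch below (category names and threshold are illustrative, not from the study) finds the "vital few" error categories that account for a given share of total losses:

```python
def pareto_vital_few(costs, threshold=0.80):
    """Sort error categories by total cost (descending) and return the
    smallest prefix of categories whose cumulative cost share reaches
    the threshold, together with that cumulative share."""
    total = sum(costs.values())
    vital, cum = [], 0.0
    for category, cost in sorted(costs.items(), key=lambda kv: kv[1],
                                 reverse=True):
        vital.append(category)
        cum += cost / total
        if cum >= threshold:
            break
    return vital, cum
```

Plotting the sorted costs with their cumulative share gives the usual Pareto chart; the returned prefix is where improvement effort would be focused first.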
  • Models for Nurse Rostering Problem

    Authors: Banu Yuksel-Ozkaya (Hacettepe University), Murat Caner Testik (Hacettepe University)
    Primary area of focus / application: Other: Health Care
    Keywords: health care, nurse rosters, mathematical model, hard/soft constraint
    Submitted at 30-May-2013 11:47 by Banu Yuksel-Ozkaya
    Accepted (view paper)
    17-Sep-2013 09:00 Models for Nurse Rostering Problem
    Preparation of nurse rosters is a critical task in all health care institutions. It has a direct effect on the quality of health care delivered, since these rosters are prepared to efficiently utilize the current staff and to balance the workload across nurses while satisfying personal preferences as much as possible. Moreover, these rosters usually signal a need for the recruitment of new nurses. In this study, we develop two mathematical models to find optimal and/or implementable nurse rosters on a rolling horizon. In each of these mathematical models, the constraints cover legal regulations, capacity restrictions, nurse requirements, weekday/weekend/shift preferences, and practices. The historical rosters are also included in the constraints to balance the workload across days and shifts. These constraints are classified as hard and soft constraints, where the soft constraints may be relaxed if a feasible roster cannot be found. The two mathematical models differ in their objective function: in the first model, we only seek a feasible roster and hence use a dummy objective function, whereas in the second model, we use penalty functions for workload imbalance and overtime requirements. The mathematical models are used to propose several nurse rosters, and the rosters produced by each model are compared. The models are also embedded in an interface through which the personal preferences, capacity restrictions, and nurse requirements are provided by the user.
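The hard/soft-constraint structure described above can be illustrated on a toy instance. The sketch below (our own miniature: 3 nurses, 3 days, exhaustive search instead of the paper's mathematical models) enforces coverage and a no-night-before-day rule as hard constraints and treats preference violations as a soft penalty:

```python
from itertools import product

SHIFTS = ("day", "night", "off")

def best_roster(n_nurses=3, n_days=3, prefs=None):
    """Exhaustive search over a tiny rostering instance.
    Hard constraints: each day needs exactly one 'day' and one 'night'
    nurse, and no nurse may work a day shift right after a night shift.
    Soft constraint: one penalty point per violated preference in
    prefs, a dict mapping (nurse, day) -> desired shift."""
    prefs = prefs or {}
    best, best_pen = None, None
    for flat in product(SHIFTS, repeat=n_nurses * n_days):
        roster = [flat[i * n_days:(i + 1) * n_days] for i in range(n_nurses)]
        # hard: daily coverage
        ok = all(sum(r[d] == "day" for r in roster) == 1 and
                 sum(r[d] == "night" for r in roster) == 1
                 for d in range(n_days))
        # hard: forbid night -> day transitions for the same nurse
        ok = ok and all(not (r[d] == "night" and r[d + 1] == "day")
                        for r in roster for d in range(n_days - 1))
        if not ok:
            continue
        pen = sum(1 for (i, d), s in prefs.items() if roster[i][d] != s)
        if best_pen is None or pen < best_pen:
            best, best_pen = roster, pen
    return best, best_pen
```

At realistic sizes this search is replaced by an integer program, where the same split appears as binding constraints versus penalty terms in the objective.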
  • Investigating the Performance of Bootstrapping Conic MARS (BCMARS) Method for Large Datasets

    Authors: İnci Batmaz (Middle East Technical University), Ceyda Yazıcı (Middle East Technical University)
    Primary area of focus / application: Mining
    Keywords: CMARS, Random-X Bootstrap, Fixed-X Bootstrap, Wild Bootstrap
    Submitted at 30-May-2013 13:48 by Ceyda Yazıcı
    16-Sep-2013 11:40 Investigating the Performance of Bootstrapping Conic MARS (BCMARS) Method for Large Datasets
    Conic Multivariate Adaptive Regression Splines (CMARS), which uses conic quadratic optimization in the backward algorithm of the well-known nonparametric regression modeling method MARS, is a powerful tool for predictive data mining. Extensive studies indicate that CMARS performs better than MARS with respect to several criteria. Nevertheless, CMARS produces models at least as complex as those of MARS. To overcome this problem, methods of computational statistics (CS) may be an alternative. In our previous study, a CS technique called the bootstrap (namely Random-X, Fixed-X, and Wild bootstrap) was utilized to decrease the complexity of CMARS models, and the resulting method was named BCMARS. The performance of this new approach was also evaluated on small to medium data sets, with successful results. However, data mining studies are typically conducted on relatively large data sets, so, to be more realistic, this technique needs to be evaluated on large data sets as well. In this study, the performance of the BCMARS method is investigated for large data sets (i.e., large in size and scale) with respect to efficiency, robustness, precision, accuracy, complexity, and stability.
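To make the bootstrap variants concrete, the sketch below shows the Fixed-X (residual) scheme on a plain one-predictor regression: the design stays fixed, residuals are resampled, and the model is refit each time. This is a generic illustration of the resampling idea (function names and the linear fit are ours), not the BCMARS procedure itself:

```python
import random

def ols(xs, ys):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def fixed_x_bootstrap(xs, ys, B=500, seed=0):
    """Fixed-X bootstrap: fit once, resample residuals with replacement,
    rebuild responses at the same design points, refit, and collect the
    B bootstrap slope estimates."""
    rng = random.Random(seed)
    a, b = ols(xs, ys)
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    slopes = []
    for _ in range(B):
        ystar = [a + b * x + rng.choice(resid) for x in xs]
        slopes.append(ols(xs, ystar)[1])
    return slopes
```

Random-X bootstrap would instead resample whole (x, y) pairs; in BCMARS the refit step is a CMARS fit rather than OLS, and the replicated fits are used to trim model complexity.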