ENBIS Spring Meeting 2019 in Barcelona
13 – 14 June 2019
Abstract submission: 15 January – 20 May 2019
Using fast robust statistical methods to efficiently build a computer vision model for modern, high-throughput product inspection
13 June 2019, 11:30 – 12:00
- Submitted by
- Bart De Ketelaere
- Bart De Ketelaere (Division Mechatronics, Biostatistics and Sensors (MeBioS), KU Leuven), Mia Hubert (Statistics Section, KU Leuven), Peter Rousseeuw (Statistics Section, KU Leuven), Iwein Vranckx (Statistics Section, KU Leuven)
- Computer vision is often used in inspection processes to discriminate between good and bad product, or to separate the many fractions that constitute a material stream (e.g. in food or plastics inspection). To build a statistical model that can perform the inspection task, two practical options are commonly used. In the first, the product is manually sorted into its pure fractions, and images of those pure fractions are then used to build a calibration model. In the second, the mixture of fractions is presented to the camera system as such, and a human operator labels the different fractions in the acquired images.
Both approaches have important drawbacks. The first requires a substantial effort to manually sort a sample that is large and representative enough for building the model. The second requires somewhat less effort, but the manual, on-screen labeling of images is prone to errors, so mislabeled data corrupt the database used to build the model, potentially leading to inferior classification results. Moreover, modern imaging systems operate in many more wavelength regions than classical RGB (e.g. by including the NIR region of the electromagnetic spectrum, or by using hyperspectral imaging systems), so that labeling images on screen is becoming increasingly cumbersome.
Because of these drawbacks, an approach that is applicable to modern vision systems producing hard-to-interpret images, that avoids the time-consuming manual inspection step, and that is robust to mislabeling (or even makes manual labeling obsolete) would have a clear advantage.
To be insensitive to mislabeling and noisy data, robust statistical methods are an appealing choice. However, their computational complexity makes them unsuitable for high-throughput applications such as image analysis, where millions of samples need to be classified in a fraction of a second.
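To illustrate the kind of robust method at stake, the sketch below flags grossly mislabeled samples with the Minimum Covariance Determinant (MCD) estimator via scikit-learn. This is an illustrative assumption, not the authors' implementation: the simulated three-band pixel data, the contamination level, and the chi-square cutoff are all chosen for demonstration.

```python
# Illustrative sketch: using the robust MCD estimator to flag mislabeled
# samples in a (simulated) set of pixels measured in 3 spectral bands.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, size=(500, 3))   # correctly labeled pixels
bad = rng.normal(10.0, 1.0, size=(25, 3))    # grossly mislabeled pixels
X = np.vstack([good, bad])

mcd = MinCovDet(random_state=0).fit(X)        # robust location and scatter
d2 = mcd.mahalanobis(X)                       # squared robust distances
cutoff = chi2.ppf(0.975, df=X.shape[1])       # usual 97.5% chi-square cutoff
flagged = d2 > cutoff

print(flagged[-25:].all())                    # the gross outliers are flagged
```

Because the MCD location and scatter are estimated from the most concentrated subset of the data, the mislabeled points do not distort the fit, and their robust distances far exceed the cutoff; a classical (non-robust) covariance estimate would be inflated by those same points and could mask them.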
In this case study, we present a novel approach based on newly developed robust yet fast statistical methods, which adapt existing techniques so that they run fast on modern computer architectures. By applying these fast and robust techniques, neither manual inspection nor image labeling is required, drastically improving the calibration procedure and the classification efficiency of an inspection process.