Professor of Software Engineering | Chalmers
Testing of Machine Learning Systems – the importance of “lagom” surprising inputs
Testing of machine learning (ML) components, such as deep neural nets, is not only about correctness and accuracy; there are many quality properties that have to be ensured. While research on how to perform these different forms of testing is still immature, it is growing at a tremendous pace. In this talk, I'll give an overview of recent results on testing ML models and discuss how it differs from testing traditional software. I will exemplify with recent work on how to find adequately (“lagom”) surprising test inputs: inputs that are neither random nor noise-like, but realistic. While the current focus of research is mainly on neural nets, I'll also discuss whether and how this might be generalised to other types of machine learning models.
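To make the idea of "lagom" surprising inputs concrete, here is a toy sketch (an illustration only, not the method presented in the talk; all function names and thresholds are hypothetical). It scores a candidate input's surprise as the distance, in a model's activation space, to the nearest training-set activation, and keeps only candidates whose surprise falls in a middle band: neither trivially familiar nor noise-like.

```python
import numpy as np

def surprise(activation, train_activations):
    """Toy distance-based surprise: Euclidean distance from a candidate's
    activation vector to the nearest activation seen during training."""
    dists = np.linalg.norm(train_activations - activation, axis=1)
    return dists.min()

def lagom_inputs(candidates, activations, train_activations, lo, hi):
    """Keep candidates whose surprise lies in the 'lagom' band [lo, hi]:
    too low means the input is redundant with training data,
    too high means it is likely unrealistic noise."""
    scores = np.array([surprise(a, train_activations)
                       for a in activations])
    mask = (scores >= lo) & (scores <= hi)
    kept = [c for c, keep in zip(candidates, mask) if keep]
    return kept, scores

# Tiny worked example with 2-D activations (hypothetical data).
train = np.array([[0.0, 0.0], [1.0, 0.0]])
acts = np.array([[0.0, 0.1],   # near training data: too familiar
                 [5.0, 5.0],   # far from everything: noise-like
                 [0.5, 0.0]])  # moderately novel: "lagom"
kept, scores = lagom_inputs(["a", "b", "c"], acts, train, lo=0.3, hi=1.0)
print(kept)  # only the moderately surprising candidate survives
```

The band thresholds would in practice be calibrated per model and layer; the sketch only shows the selection principle.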
Robert Feldt is a researcher and teacher with a passion for software and for augmenting humans with AI and Artificial Creativity/Innovation. He is a professor at Chalmers University of Technology in Gothenburg and frequently consults for companies in both Europe and Asia. His broad interests span from human factors to hardcore automation and applied AI & statistics, and he works on software testing and quality as well as human-centred (behavioural) software engineering.