HCI and the problem-solving problem
What is HCI? And more to the point, what is good quality HCI research? That these questions are still being asked points to an underlying insecurity in the field, and to a jockeying for position over who gets to define a unified approach. Oulasvirta and Hornbaek, in their paper HCI Research as Problem-Solving, suggest looking at HCI research from a problem-solving perspective, with roots firmly in Laudan’s philosophy of science. Instead of gauging research approach, method or theory, they propose assessing the quality of research in terms of its problem-solving capacity – its ability to provide a solution to crucial aspects of HCI problems that is widely transferable and carries a high level of confidence in its validity.
Adapting Laudan’s concept to the field of HCI, the paper attempts to position all HCI research into three proposed problem types: empirical, conceptual and constructive. Nice. It’s always good to have a framework, a ‘theory of everything’. But with a field so complex, can nebulous concepts be forced to fit into boxes so neatly labelled?
Using this concept, the authors add to the argument that HCI research lacks motor themes, identifying a gap in the kind of conceptual research that enables theory development, and they urge more conceptual research in the HCI field. Although this may be a valid point that HCI researchers need to address, it’s worth noting that the conclusion rests on an analysis of 21 Best Papers from the proceedings of the 2015 CHI conference; it is something of a stretch to take these as representing the field as a whole.
So, is the problem-solving approach the answer to defining HCI research?
The proposed aspects of problem-solving capacity – for example, the significance of the problem addressed, and the effectiveness of, and confidence in, the proposed solution – seem important and, if critically reviewed, could improve the quality of research. But the question that won’t leave my mind after reading the paper is whether this approach is widely applicable to all HCI research, as the authors claim, or whether it is simply a SWOT-analysis-like tool – in their words, a ‘thinking tool’ – that only some individual studies can employ for self-assessment.
The other major issue with this approach? The danger that it dismisses, or labels as ‘insignificant’, work that tries to ‘define the problem scene’ rather than provide a solution to the problem. For example, Encinas et al., in their study of teen shoplifters, explain that a wide range of design research aims to characterise the problem rather than solve it. Many researchers employ design practices such as Critical Design in order to critically evaluate common assumptions rather than to provide solutions [ibid].
Although the authors try to address the issue of ‘solutionism’ in their approach to defining a problem, this just isn’t convincing. Dividing the whole scope of HCI research into three types of problem seems oversimplified, failing to consider the complexity and multidisciplinary nature of the field. Put simply, can we provide effective solutions to problems when we might not yet fully understand what they are?
Enrique Encinas, Mark Blythe, Shaun Lawson, John Vines, Jayne Wallace and Pamela Briggs. 2018. Making Problems in Design Research: The Case of Teen Shoplifters on Tumblr. In: CHI 2018 – 2018 ACM Conference on Human Factors in Computing Systems.
Larry Laudan. 1978. Progress and its Problems: Towards a Theory of Scientific Growth. University of California Press.
Antti Oulasvirta and Kasper Hornbaek. 2016. HCI Research As Problem-Solving. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), 4956–4967.
Author: Irina Pavlovskaya