HCI Research as Problem-Solving
This blog post is a critical review of an article by Oulasvirta & Hornbæk (2016) in which the authors argue for conceptually re-defining human-computer interaction (HCI) as a problem-solving discipline. They argue that assessment criteria should evaluate how HCI research output “advances our ability to solve important problems relevant to human use of computers”, and what relevant stakeholders can achieve with this output that they could not achieve before.
To Laudan’s concept of science as empirical and conceptual problem-solving, the authors add the category of construction problems to accommodate user and design problems. They highlight that hybrid research problem types are typical of HCI research, with all pairwise combinations represented in the literature. Rather than papers describing the solution to a user problem in prose, they call for a more rigorous definition of the research problem being addressed and an evaluation of how well the proposed solution resolves it.
This “meta-scientific” paper includes a subjective analysis of a small, subjectively chosen sample: the 21 “Best Papers” at the 2015 CHI conference, selected by a committee from submissions nominated on the basis of review scores. The authors had to resolve disagreements over the categorisation of individual papers by main problem type, settling on a consensus categorisation. Table 1 simply lists some ideas for evaluation criteria, along with heuristics for refining ideas that the authors have applied in their own research.
A strong solution to an HCI problem would, in the authors’ words, be “a generalisable and efficient solution to an important, recurring problem”. Are market-based values of efficiency and generalisability appropriate? Would a solution that inefficiently solved a life-changing problem for a small group of users be considered weak? Significance is subjective.
The authors identify a perceived bias in HCI towards novelty, which they consider flawed since a solution may be novel without enhancing problem-solving capacity; yet they also highlight the lack of novelty in recent interruptions research as a weakness to address. If, as the authors suggest, the large number of novel solutions within the sub-field of interaction techniques is driven by point designs with little conceptual development, is this a problem? There may be no need for conceptual development to successfully address problems in this sub-field. Alternatively, interaction techniques may be too contextual for generalisation of concepts to be useful.
The authors point to a lack of conceptual HCI papers and note that a visionary research paper may describe a solution before the technological capability to produce that solution exists, whilst point designs may have value for users without advancing theory. Given the pace of social and technological change, perhaps conceptual problems become redundant too rapidly to add value to HCI. Empirical HCI research conducted ‘in the wild’ that focuses on the user in a complex and dynamic real-world context may not lend itself to conceptual generalisation. The concept of problem-solving-capacity-building in HCI has potential, but requires further development.
Oulasvirta, A., & Hornbæk, K. (2016). HCI Research as Problem-Solving. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16) (pp. 4956–4967).
Author: Matthew Snape