Situated Action – So What?
According to Suchman’s (2006) idea of situated action, plans are not the be-all and end-all of human behaviour. Whilst we may make plans, and use them as tools to inform our actions, no plan we can (practically) create could possibly account for every situation. There is always the chance that the course we chart will take us into the proverbial storm, where the situation is too chaotic to anticipate and we must rely on learned skill, instinct, and improvisation to see us through.
This has considerable implications for how humans interact with computers. Computers are typically programmed to respond to a particular input in a particular way: if this, then that. A computer program is a form of plan which the machine has no choice but to adhere to rigidly.
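This rigidity can be made concrete with a toy sketch (my own illustration, not an example from Suchman): a program that maps a fixed set of anticipated commands to responses, and simply fails on anything outside that plan.

```python
# A toy illustration of a program as a rigid plan: it handles only
# the inputs its author anticipated, and nothing else.
def respond(command: str) -> str:
    # The "plan": a fixed mapping from anticipated inputs to outputs.
    if command == "open":
        return "Opening file."
    elif command == "save":
        return "Saving file."
    else:
        # Anything unanticipated falls through here; the interpretive
        # burden shifts to the human, who must rephrase their intent
        # to fit the machine's plan.
        return "Error: command not recognised."

print(respond("save"))              # anticipated input: the plan covers it
print(respond("save my work pls"))  # situated phrasing: the plan fails
```

However detailed we make the mapping, the final `else` branch never disappears; we only move the boundary of what counts as unanticipated.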
But if we consider human action as situated, we can see that any interaction between a human and a computer will always be an uneven exchange. The computer does not comprehend context; it merely has ever more detailed plans. It is tempting to see this as an inevitability of HCI: to accept that humans will always have to shoulder the interpretive burden, and to attribute breakdowns in communication to ‘user error’. If an interaction has gone wrong, and the computer is incapable of improving, we must make the human better able to pick up the slack; if the human cannot meet the requirements the interaction imposes upon them, then that is not ‘our problem’. However, this culture of ‘user-blaming’ is problematic, and marketing a product on the basis that anyone who cannot use it effectively is somehow deficient is unlikely to prove successful.
A technologist may at this point be prompted to ask ‘well, what am I supposed to do then?’ Suchman is posing a problem that seems, on the face of it, unsolvable. If one judges the value of ideas by the tangible outcomes they produce, it can be hard to see why this particular idea is worthwhile.
To people who feel this way, I have this to say: considering problems makes our designs better, even if we cannot (yet) solve them. There may well come a point when we are capable of producing systems that act in a meaningfully situated way (neural networks can already behave in a manner intriguingly similar to situated navigation), but until that time we will design more effectively if we keep in mind that we cannot plan for everything, and aim to reduce the interpretive burden that users must shoulder.
What can we do with Suchman’s idea? We can involve users and other stakeholders in our design process. We can prototype and iterate. We can be considerate of culture and context, and aware of our own limitations and preconceptions as designers. In short, when faced with a problem beyond our current capacity to solve, we can do the best we are able to.
Suchman, L. (2006) Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge: Cambridge University Press.
Author: Adam Parnaby